New Article in Political Behavior, with Replication Files

My article with Josh Tucker and Ted Brader, “Cross-Pressure Scores: An Individual-Level Measure of Cumulative Partisan Pressures Arising from Social Group Memberships”, has just been published in Political Behavior. I have also created a GitHub repo with replication code and data to reproduce the results in the paper’s figures and tables. (You can also download a zip file containing all of the same files.) Assembling those files meant piecing together code from a variety of analyses conducted since we started the project in 2008, so please let me know if any of the files fail to run or produce results that don’t line up with those in the paper.

One of the biggest challenges in putting these files together was condensing code written over a long period and by multiple authors (I wrote the original code for our analyses, but Josh and Ted made their own modifications as well) into something that makes it straightforward for others to reproduce the results in our paper. There was plenty of other material I had to strip out to avoid confusion, and I wouldn’t be surprised if a few bugs crept into this initial release that I’ll need to remedy (a major reason why I’m hosting the files on GitHub rather than on a static site). Gathering replication files was a very informative exercise, and I’d highly recommend that others do the same with their own work, despite the frustration involved. Aside from the logistics of collecting the right code from among the many versions we produced along the way, there were several things I would do differently if I were conducting the analysis now. I had barely finished my second year of grad school when we started, and my programming skills have grown immensely since then, so some of the steps I took in that paper are ones I wouldn’t take today for a project like this.

Most galling to me is the randomness in the hotdecking procedure I used for imputing lightly-missing data. The basic approach to missing data in this paper is to hotdeck the variables with few missing values, so that these now-complete variables can then serve as predictors when multiply imputing the variables with more serious missingness. (If the latter variables were continuous, I could of course just impute them all simultaneously, but nearly all of the variables used in these analyses are categorical and thus required complete data for modeling.) I still think that’s a sound practical approach to missing data: though not as common in political science as elsewhere, hotdecking is popular in survey research more generally and has attractive statistical properties. But this particular implementation is problematic because it relies on the hotdeckvars package in Stata. As far as I can tell (and I invite readers to correct me if I’ve overlooked something), Stata’s “set seed” command does not affect the randomization inside hotdeckvars, so the results are not consistent from one run of the code to the next, as they would be with built-in commands that respect the seed.
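For readers unfamiliar with the technique, here is a minimal sketch of a random within-cell hotdeck in Python rather than the Stata we actually used. The variable names and toy data are hypothetical, purely for illustration: a lightly-missing categorical variable (`partyid`) is filled in by drawing a random donor’s observed value from within the same cell of a fully observed variable (`region`), after which the completed variable can feed the multiple-imputation models for the harder cases.

```python
import numpy as np
import pandas as pd

def hotdeck(df, target, by, rng):
    """Fill missing values of `target` by drawing a random donor's
    observed value from within the same `by` cell (random hotdeck)."""
    out = df.copy()
    for _, idx in out.groupby(by).groups.items():
        cell = out.loc[idx, target]
        donors = cell.dropna().to_numpy()          # observed values in this cell
        missing = cell.index[cell.isna()]          # rows needing imputation
        if len(donors) and len(missing):
            out.loc[missing, target] = rng.choice(donors, size=len(missing))
    return out

# Hypothetical toy survey: party ID is lightly missing; region is complete.
rng = np.random.default_rng(2008)  # explicit seed -> reproducible draws
df = pd.DataFrame({
    "region":  ["S", "S", "S", "N", "N", "N"],
    "partyid": ["Dem", np.nan, "Dem", "Rep", "Rep", np.nan],
})
completed = hotdeck(df, target="partyid", by="region", rng=rng)
# `completed["partyid"]` is now fully observed and can serve as a
# predictor when multiply imputing the more heavily missing variables.
```

The key design point is that the random number generator is passed in explicitly, so the draws are fully controlled by the caller’s seed.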

This isn’t a major issue, given that the variation in the final results is negligible (if it weren’t, that would mean our whole imputation strategy is flawed), and I still use the package on occasion for quick, one-off analyses. But it’s a real annoyance when prepping code for replication, since it means that others can’t reproduce the exact results in the paper. If I were doing this analysis again now, I’d write my own code for the hotdecking step, so that every run produces identical results given the same seed. Lesson learned!
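The property I wanted can be demonstrated in a few lines. This is a hypothetical Python sketch, not our actual Stata code: because the generator is seeded explicitly inside the function, two calls with the same seed yield byte-identical imputations, which is exactly what replication files need.

```python
import numpy as np

def hotdeck_fill(values, seed):
    """Replace NaNs in `values` with random draws from the observed
    entries, using an explicitly seeded generator so that every run
    with the same seed reproduces the same imputations."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    mask = np.isnan(values)
    out = values.copy()
    out[mask] = rng.choice(values[~mask], size=mask.sum())
    return out

data = [1.0, np.nan, 3.0, np.nan, 2.0, 1.0, np.nan, 3.0]
run1 = hotdeck_fill(data, seed=42)
run2 = hotdeck_fill(data, seed=42)
assert np.array_equal(run1, run2)  # identical imputations given the same seed
```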

(I’ve debated writing my own Stata package for reproducible hotdecking, but haven’t done so because most of my work these days is in R and Python. Feel free to get in touch if you think it would be useful; if there’s enough demand, it might be worth spending an afternoon on anyway.)

Finally, I should note that there are other analyses referenced in the article’s text and supplemental online appendices that aren’t included in these files, for the sake of brevity. If you’re particularly interested in any of them, though, let me know and I can probably get the relevant code to you. We may also make a few small edits to the supplemental materials, based on late-stage revisions to the paper itself; I’ll post separately about that if it happens.

Checking in, Late 2013 Edition

So once again, it’s been a while since I last posted. What have I been up to? Well, to start, this came out in the spring:

And then in early September, this happened:
And at the end of it, I got these:
So I’m now in DC for the foreseeable future, doing very interesting things with obscene quantities of data. I have a few invited talks and conference presentations coming up, so hopefully I’ll soon be able to share some of those materials here as well.

An Update, Six Months Later

It’s been a tumultuous six months since I last posted, and while I won’t document everything that’s transpired, the upshot is that after my fellowship at Vanderbilt ended I moved to Los Angeles, where I’m now working in so-called “real politics”, doing microtargeting and other kinds of applied research. This is hardly where I expected to be this time last year, but that’s how it played out, and I’m certainly enjoying many aspects of life outside the ivory tower. I’m still working on my own research whenever I can, though, and will even head to New Orleans next week to present new work at APSA. Going forward, I hope to use this site as a home for my research as it develops, and I’ll let you know how things go in the months ahead.