I am grateful to ISQ for facilitating this symposium, and to Philipp Bleek, Matthew Fuhrmann, Rupal Mehta, Todd Sechser, and Etel Solingen and Joshua Malnight for participating. It is a pleasure to share a (virtual) platform with such accomplished scholars and to have them engage so constructively with my work.
In my article, “Examining Explanations for Nuclear Proliferation,” I sought to evaluate the quantitative literature on the causes of nuclear proliferation. That literature collectively identifies a large number of variables as statistically significant determinants of proliferation. But it does not provide us with a good understanding of the relative strength of these different variables in explaining proliferation, nor whether any of them might allow us to predict proliferation. Using three techniques—extreme bounds analysis, cross-validation, and random forests—I found that few variables provide strong explanations for proliferation or offer much in the way of predictive capacity. I thus concluded that the quantitative literature on the causes of proliferation has produced more tentative findings than scholars typically understand. We should be more careful about claiming that this literature has identified factors that drive, and predict, nuclear proliferation.
All of the contributors to this symposium raise important points and questions that deserve further consideration. I lack the space to address all of these points here, so I focus on a few that I consider to be the most pressing.
Mehta worries about a possible implication of my article: that academics should not seek to communicate with policymakers until we are certain about our conclusions. That was not my intended conclusion, and I suspect Mehta and I are actually in agreement. That is, I fully endorse academics engaging with policymakers, and do not think we should wait for absolute certainty before doing so (we will be waiting a long time if we do!). I do think, however, that we need to be careful that we communicate not only our findings, but their relative uncertainty.
I consider the accurate communication of uncertainty to be a core part of the scientific enterprise. Scholars—whatever methods they use—should not fear doing so. This is especially true when we study rare and complex phenomena such as nuclear proliferation: what Robert Jervis calls “the strangeness of the nuclear world” may not prove at all easy to boil down into straightforward recommendations. In short, uncertainty in our findings is a sign of good rather than bad research. We should acknowledge and embrace that uncertainty. While policymakers may have an intuitive sense of these limitations and uncertainties, as Solingen and Malnight suggest, scholars also have a professional obligation to be clear about them.
Fuhrmann argues that the extreme bounds analysis I use may underestimate the effect of some variables because of post-treatment bias. For example, he contends that, in order to estimate the effect of rebel leadership on proliferation, we should not control for a series of measures of a state’s security environment—such as past military disputes or domestic unrest. Because past disputes or domestic unrest may be caused by having a rebel in office, including them in the analysis may dilute the effect of rebel leadership. Fuhrmann is right that extreme bounds analysis evaluates the importance of (to take Fuhrmann’s example) rebel leadership across a wide range of models—some of which include variables that are plausibly post-treatment to rebel leadership. If we are certain that these variables are caused by rebel leadership (and not by other factors that cause proliferation), then we should, in fact, expect that extreme bounds analysis will underestimate the effect of rebel leadership.
However, I do not think we enjoy that kind of certainty. For example, past disputes or the presence of domestic unrest are not just post-treatment to rebel leadership. They may also cause rebel leaders to assume control of governments, and may also cause proliferation directly. If so, excluding these variables may overestimate the effect of rebel leadership as much as including them may underestimate it. In essence, we can find arguments for both including and omitting these variables from our analysis. How, then, should we estimate the effect of rebel leadership in the face of uncertainty about whether these variables should be included in our models? In my view, it is precisely when we have uncertainty of this sort that extreme bounds analysis is most valuable. It allows us to assess—in a systematic way—the robustness of existing findings to the inclusion or exclusion of a variety of plausible explanatory variables.
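The logic of extreme bounds analysis can be illustrated with a minimal sketch. The code below uses synthetic data and simple linear regressions rather than the proliferation data and model specifications from the article; all variable names are hypothetical. The core idea is the same: re-estimate the coefficient on a focal variable across every combination of candidate control variables and examine the range of estimates it takes.

```python
# Minimal sketch of extreme bounds analysis (EBA) on synthetic data.
# All data and names here are illustrative, not from the article.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 500

# Hypothetical focal variable and four candidate controls.
focal = rng.normal(size=n)
controls = rng.normal(size=(n, 4))
# Outcome truly depends on the focal variable and one control.
y = 0.5 * focal + 0.3 * controls[:, 0] + rng.normal(size=n)

# Estimate the focal coefficient under every subset of controls.
coefs = []
for k in range(controls.shape[1] + 1):
    for subset in combinations(range(controls.shape[1]), k):
        X = np.column_stack([np.ones(n), focal, controls[:, list(subset)]])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        coefs.append(beta[1])  # coefficient on the focal variable

# A finding is deemed "robust" if the estimate stays stable (e.g.,
# keeps its sign) across the extreme bounds of these specifications.
print(f"focal coefficient ranges from {min(coefs):.3f} to {max(coefs):.3f}")
```

In this stylized setup the focal coefficient stays positive across all sixteen specifications, so EBA would count it as robust; a variable whose estimate flipped sign or collapsed toward zero under some specifications would not be.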
Bleek questions why certain variables perform well in the analysis and others perform badly. I agree with him that further investigation into this would be useful. However, the suggestion I make in the article—that variables that are highly causally proximate to proliferation tend to perform well, while further removed causes tend to perform less well—fits the two factors that Bleek highlights. The receipt of sensitive nuclear assistance performs relatively well. This may be partly explained by the fact that receiving such assistance, as Scott Sagan and Alex Montgomery have argued, is conceptually very close to nuclear exploration itself. The second variable Bleek mentions—having a nuclear-armed ally—affects proliferation incentives less directly, which may explain why it performs somewhat less well.
Solingen and Malnight, as well as Sechser, raise important points about the role of variables and mechanisms that quantitative scholars have struggled to measure. These include norms, leader psychology, and the attitudes of domestic coalitions toward the global economy. I agree with them: quantitative methods require variables that can be measured “at scale” across thousands of observations. This often means that quantitative scholars are forced—through no fault of their own—to use proxies for the underlying, theoretically relevant, but hard-to-measure variables that we care about. When dealing with rare events—where mismeasurement in one or two cases may significantly alter our results—this can prove problematic. Similarly, quantitative methods can shed light on causal mechanisms (e.g., Imai et al. 2011). But the assumptions required to do so are generally extensive. An advantage of qualitative methods is that by focusing on fewer cases, they allow us to pay more attention both to measuring the variables in the cases under examination and to examining the mechanisms and causal processes at work.
My article offers a number of avenues for future research. Many of these suggestions are relevant to qualitative, quantitative, and mixed-methods scholars alike. As Mehta correctly points out, responsibility for improving our understanding of the causes of proliferation is shared across scholars working in different methodological traditions and across academic disciplines. I am not the first to make suggestions of this sort. But I am happy that the participants in the symposium endorse many of them. For example, Bleek and Sechser both agree on the need to focus on improving the measurement of important variables. Sechser also affirms the importance of quantitative scholars moving beyond tests of statistical significance in assessing the importance of variables. Solingen and Malnight, as well as Mehta, endorse theorizing additional observable implications of our theories. Such theorizing would allow for additional tests—whether qualitative or quantitative—of our theories. It would also potentially allow scholars to place less reliance on country-year data. Solingen and Malnight agree on the need for further exploration of how the causes of proliferation have evolved over time. Lastly, Fuhrmann’s contribution uses his recently collected data on nuclear latency to hint at the potential utility of new data sources. He shows that more explanatory variables may be robustly correlated with this new outcome variable than with the outcomes that the literature has used up to this point. This is a promising finding that future work should build on.
Overall, this symposium suggests that there are many fruitful paths for future research on the causes of proliferation to pursue. I look forward to watching this literature develop in the coming years.
Bell, Mark S. 2015. "Examining Explanations for Nuclear Proliferation." International Studies Quarterly.
Montgomery, Alexander H., and Scott D. Sagan. 2009. "The Perils of Predicting Proliferation." Journal of Conflict Resolution 53 (2): 302-328.
Imai, Kosuke, Luke Keele, Dustin Tingley, and Teppei Yamamoto. 2011. "Unpacking the Black Box of Causality: Learning about Causal Mechanisms from Experimental and Observational Studies." American Political Science Review 105 (4): 765-789.