Mind -- and Measure -- the Gap

Paul Avey and Michael Desch’s 2013 survey of 234 senior national security policymakers produces empirical evidence to advance a now vibrant discussion of the academic-policy divide. The catalyst for their investigation is a puzzle revealed by the most recent iteration of the TRIP (Teaching, Research, and International Policy) survey of IR scholars (Maliniak et al., 2012). The 2011 TRIP survey shows little evidence of substantive engagement by US IR scholars with the policy world. [1] More troubling is what Avey and Desch find on the other side of the divide: the low perceived utility or influence of IR scholarship from the perspective of senior policymakers.

The divide between the so-called “eggheads” of the Ivory Tower and the “wonks” of the DC Beltway is not new. [2] But it is all the more puzzling in light of the revealed preference of a large percentage of IR scholars for policy-relevant research. [3] Moreover, notwithstanding the Congressional attacks on NSF funding for political science, we can easily find examples of the policy world’s demand for IR research in recent initiatives like the Department of Defense Minerva Initiative and the USAID Higher Education Solutions Network. So if academics want more engagement with policy, and policymakers want to derive more utility from academic work, what perpetuates the divide? Through their survey, Avey and Desch provide some informed (if not terribly surprising) reasons why policymakers do not find contemporary IR scholarship useful: (1) the discipline’s bias for formal modeling and quantitative work clashes with policymakers’ preference for qualitative area studies and historically informed case studies; (2) there is a clear disconnect between the regional expertise that policymakers need and that which academics currently offer; and (3) time demands on policymakers preclude opportunities (and incentives) to slog through the typical jargon-laden, peer-reviewed academic article.

But let’s turn the question back towards the Ivory Tower for a moment. What can we hypothesize about why we, as scholars intrinsically interested in engaging policy, are in practice so reluctant to do so on terms amenable to those working in policy? Ultimately this boils down to another rather obvious, but as yet untested, claim: IR scholars may be intrigued by policy engagement, but we are poorly incentivized to devote scarce time and resources to endeavors that do not mesh with our profession’s expectations and norms for hiring, tenure, and promotion. We do not engage in policy-relevant work because the risks are high, the payoffs are uncertain, and demonstrating our “impact” in the policy world is inherently difficult.

This academic-policy divide exists, and in fact may be growing, because of how we currently train and socialize our graduate students in doctoral programs in the U.S. Simply put, our students are not well versed in how to speak to policymakers, and they are not encouraged to pursue research and publication paths that would invite those conversations. Area studies and careful case study work, so prized by policymakers, are increasingly eschewed by IR scholars. Avey and Desch themselves implicitly describe this work as “unsophisticated” and not “cutting edge.” PhD students face inordinate pressures to tech up in the fanciest methods du jour, publish in top academic journals, and get out of grad school as fast as possible. Such pressures leave little time for learning additional languages critical to area studies, for extensive fieldwork, or for interdisciplinary coursework. More critically, pursuing a publication in Foreign Policy, writing op-eds, or maintaining a serious policy analysis blog represents a tremendous opportunity cost for a young scholar hoping to land in the pages of International Organization or with a top university book press before she enters the tight academic job market. [4]

Then, of course, come the tenure, promotion, and merit review processes. How many political science or IR departments in the US – and their higher administrations – formally give credit for non-peer-reviewed work, even if it lands in the hallowed pages of Foreign Affairs or the New York Times? How many external reviewers are prompted to discuss (in positive terms) the policy relevance of a young scholar’s work? Tenure and promotion processes, particularly at RU1s, arguably discourage such scholarship. Ironically, I have to confess, this seems to hold even for policy schools, where op-eds and policy reports are often seen as the profligate icing on the cake of real research.

Moreover, in a discipline so obsessed with the mantra that “only things that can be counted, count,” some departments have resorted to the practice of ranking publication outlets according to their perceived status or influence in the discipline, relying on “objective” indicators such as journal impact factor scores. As the top IR journals (defined as those with the highest impact factor scores that are also – critically – peer reviewed) show increasing signs of quantitative bias in what they choose to publish, this reifies incentives to select research questions based on the method that will get an author into a top journal, rather than to choose the method and outlet best suited to answering an important research question that might pique the interest of policymakers. The result is an increasingly inward-looking discipline that values influence within its inner circles more than its sway with external audiences. The divide widens.

How then, in the face of such strong professional norms and incentives, do we encourage the kind of scholarship that satisfies the policymakers in Avey and Desch’s survey? Merely talking about the academic-policy divide will not resolve it. We cannot rely solely on efforts to expose the divide in hopes of shaming IR scholars into reorienting their research and publication strategies, nor on efforts to entice them into the policy fold through lucrative funding opportunities. More proximate logics of consequences and appropriateness will inevitably prevail.

Cynically, but pragmatically, I argue that instilling value in policy engagement in IR (and thus incentivizing it) requires constructing the means to measure policy influence in a manner commensurate with how we measure scholarly influence. That, of course, raises the obvious question: how do we begin to empirically observe and measure policy influence and impact? One relatively easy way – through metrics such as Twitter followers and “retweeting” volume, blog activity, and media citations – may be less indicative of influence than of self-promotion, with no guarantee that policymakers are actually listening to or acting upon scholars’ insights pushed through those channels. Grant activity may be another imperfect, but measurable, signal. Are scholars not only winning grants, but also being actively solicited by public sector agencies or think tanks for contracted work and future grant submissions? Can scholars in turn trace the impact of that work into the policies and practices of their “clients”? Is their work not only known in policy circles (which may be a residual of a whole bunch of things), but discernibly “in demand”? And, if so, how do we measure that demand in a market of ideas?

Finally, the most direct measure of a scholar’s impact or influence on policy might be the testimony of policymakers themselves, elicited directly through polls or review letters, or indirectly assessed through serious mention of scholars and their work in policy speeches, testimonies, and legislation. Gathering such data would be onerous and subject to a host of arguments over sampling and measurement criteria. Moreover, which policymakers have sufficient status and credibility to evaluate scholars’ work (especially since policy practitioners are arguably even more siloed in their areas of work than academics are)? How should they evaluate academic scholarship for policy relevance and impact? How many citations or mentions does it take to add up to some benchmark of policy impact or influence, and in what venues?

More questions than answers abound here, but I think it is clear that bridging the gap between academia and the policy world requires serious dialogue on how we might observe and measure policy influence or impact (both of which should be distinguished from mere “presence,” which can all too easily conflate aggressive networking with actual effects on policy). This goal of measurement ironically entails much more qualitative attention to the memetic processes through which scholarly ideas attract attention, gain traction, and shape policymaking at all levels. It also requires much more attention to the sociology of our own discipline, and an open and honest discussion of the professional norms and incentives that deter transgressions beyond IR’s ivory tower walls.


[1] In TRIP’s 2011 survey of 3,464 IR scholars in 20 countries, only 11% of respondents reported that their research was primarily “applied” (versus “basic”), and 15% reported that their research was both, but leaning towards applied. In this instance, the TRIP surveyors defined basic research as “research for the sake of knowledge, without any particular immediate policy application in mind,” whereas applied research “is done with specific policy applications in mind” (Maliniak et al., 2012: 37). When asked about the academic-policy divide, 37% reported that they thought the gap was growing, 39% reported it was the same as 20-30 years ago, and only 23% thought it was shrinking (Ibid: 66). At the same time, 90% thought there should be a larger number of links between the academic and policy communities (Ibid: 67). Finally, 54% reported that assigning greater weight in personnel decisions to publications in policy journals would have a beneficial impact on our academic discipline, but only 26% thought that providing stronger incentives to contribute to blogs and other popular media outlets would have a similar beneficial effect (Ibid: 69).

[2] For a multi-disciplinary take on this issue, see the “Puzzles versus Problems” symposium in Perspectives on Politics (December 2010).

[3] 33% reported that policy relevance motivated their research (Maliniak et al., 2012: 38), and nearly 50% reported having consulted or worked, in a paid or unpaid capacity, outside of academia.

[4] The TRIP survey indicates that 88% and 86% of respondents, respectively, report that publishing a single-authored, peer-reviewed journal article or a university press book is most important to advancing their academic career (Ibid: 58). Interestingly, the TRIP survey also reveals that 66% of respondents think contributing to a blog should count as “service,” while 29% think it should count as “research” (Ibid: 64).
