What do we really measure when we talk about “scholarly impact”?

I read Jeff Colgan’s article with great interest. At issue here is what we mean when we talk about “scholarly impact.” As Colgan demonstrates, there are many different ways to measure “impact,” and the most common metric, publication citations, is deeply flawed. Colgan’s analysis points to tremendous “noise in the system” in the most widely used citation tool, Google Scholar. After reading his analysis, I am beginning to think that using Google Scholar as a measure of scholarly impact is not much better than using the number of LinkedIn contacts a person has as a measure of career success. To correct for Google Scholar’s problems, Colgan introduces a new metric, which measures research impact by capturing the frequency with which scholarly articles are assigned in graduate IR syllabi.

I am certainly sympathetic to Colgan’s project and find great value in exposing the distortions that the focus on citations creates. The research impact dynamic, however, operates within the much broader professional, sociological, and political environment of US academia, and there are significant structural issues influencing perceived “impact” that Colgan’s article leaves unproblematized.

The first issue has to do with using graduate syllabi as an authoritative source for research impact. As Colgan himself acknowledges, graduate syllabi are themselves ripe for critical treatment. They systematically overrepresent articles by male authors (Colgan 2015), articles published in U.S. journals, and, as my own research shows, articles with a rationalist epistemological approach (Subotic forthcoming). The content of graduate syllabi can just as easily be the result of an academic self-fulfilling prophecy, where only a certain kind of scholarship is taught in the top schools because only a certain kind of scholar is hired into those schools (more of these findings are reported in Subotic forthcoming). There seems to be an assumption of meritocracy here (that research included in graduate syllabi is of top “quality”) that is deeply problematic and that ignores the profound structural inequalities underpinning the academic system, such as its core-periphery structure (Clauset et al. 2015) or its oligarchic nature (Oprisko et al. 2013).

Further, the process of constructing a graduate syllabus, as Colgan also acknowledges, is prone to network effects, to staleness (once prepared, syllabi may be updated on the margins, but the core structure typically remains the same), and to the elite-distorting effects discussed above. At most research institutions, especially the top-ranked ones that are the subject of this analysis, the incentives for publishing dwarf any incentives for quality teaching. This leads time-crunched faculty to pay far less attention to syllabus preparation, assigning already well-known works or mimicking syllabi from peer institutions rather than spending the time needed to seek out innovative new work. It is hard to see how we can ignore these professional practices in syllabi analysis.

Finally, the focus on research “quality” and “impact” needs to take into consideration the broader professional, social, and political environment in which scholars work. The obsession with numerical measurements such as Google Scholar follows the corporatization of universities, where an easily identifiable number can be shown to university “stakeholders,” such as legislators or private donors, as a measure of scholarly “value.” This practice has become so normalized that there now exists a “faculty productivity monitoring company” called Academic Analytics, which provides a proprietary index of faculty “productivity” composed of publications, citations, and grants, but including neither teaching nor service (witness the recent controversy at Rutgers University over the use of Academic Analytics).

While Colgan is right to focus on finding a better metric, I would like us to reflect more deeply on what such metrics really tell us about the work we do, the perceived “value” that our colleagues and society at large assign to our work, and the consequences of such instruments for the nature and integrity of our scholarship and for the professional environment in which we work.

WORKS CITED:

Clauset, Aaron, Samuel Arbesman and Daniel B. Larremore (2015) 'Systematic inequality and hierarchy in faculty hiring networks,' Science Advances 1(1).

Colgan, Jeff (2015) 'New Evidence on Gender Bias in IR Syllabi,' Duck of Minerva, 27 August, available at http://duckofminerva.com/2015/08/new-evidence-on-gender-bias-in-ir-syllabi.html

Oprisko, Robert L., Kirstie L. Dobbs and Joseph DiGrazia (2013) 'Pushing Up Ivies: Institutional Prestige and the Academic Caste System,' Georgetown Public Policy Review, 21 August, available at http://gppreview.com/2013/08/21/pushing-up-ivies-institutional-prestige-and-the-academic-caste-system/

Subotic, Jelena (forthcoming) 'Constructivism as Professional Practice in the US Academy,' PS: Political Science & Politics.
