During the 2017 Science & Technology Indicators (STI) conference in Paris, a number of discussions touched on impact assessment, which has been a topic of growing interest within the research community. That researchers are increasingly aware of impact, impact pathways and impact assessments comes as no great shock, given that the research policy community is increasingly focusing on impact as the basis for funding decisions. The discussions at STI raised some substantive concerns with the current trajectory of discussions about research impact. In this post, I’ll lay out some relevant history (as I understand it) that contextualizes current discussions about impact. In the next installment, I’ll summarize those points from STI 2017 that stood out to me as the most insightful (and provocative), drawing on the history laid out here in order to explore what I think these comments reflect about the underlying research system.
Let’s start with an overly simplistic story. A long time ago—probably in the 19th century when universities started to speciate into thematically focused departments, but before these divisions congealed into a structure for institutional measurement and control—ideas were the major currency of the academic realm. To speak of the “impact” of an academic’s work was to speak of their ideas influencing and shaping the work of their contemporaries as well as future generations of scholars. These discussions about the impact of research were conducted mostly by academics, for academics, and with impact meaning specifically an impact on other academics.
Ideas were propagated through institutions and supervisory relationships. Ideas were also exchanged in writing through peer-reviewed journals, which started up around the same time the speciation began and grew in number as thematic speciation progressed. These features of the landscape came to shape the shorthand by which most academic work is assessed, and that shorthand persists to this day. We ask where researchers studied and who supervised them (usually focusing specifically on their doctoral work), which prestigious institutions they’ve worked at, and which journals they publish in. These continue to be important forms of professional capital in the academic world.
Early & mid 20th century
During the Second World War, research budgets ballooned in order to support the development of (among other things) technology that was deemed critical to victory. With the hostilities drawing to a close, the research enterprise was obviously going to undergo a radical transformation of one kind or another. This is the era in which Vannevar Bush’s renowned and influential report “Science, the Endless Frontier” was submitted to President Truman.
With this transformation, science and technology indicators began to flourish, offering the promise of contributing to an effective peacetime research system. The first Frascati Manual was put together in 1963. Much of this early approach to measurement dealt with standardization of administrative data about personnel and spending, providing definitions for employment categories (researchers, technicians, etc.), for sectors of research funding and performance, for thematic taxonomies of research topics, and for categories such as basic and applied research, technological development, and the like. Other important developments took place simultaneously in the nascent field of bibliometrics, using data obtained through bibliographic fiches from peer-reviewed papers and journals. Initial work developed the progenitors of our current indicators of publication output, citation rates, and so forth.
Late 20th century
Indicators have evolved considerably since this time. Public management has evolved as well, especially since the 1980s, when there was a notable increase in demand for measurability. This pressure from public management began to be felt in the 1990s, with bibliometric indicators taking on a large (and growing) role in research management and policy. However, the fundamental ontology of these measurements—the kinds of things that exist in the world, so far as the indicators can see—is still dominated by primarily academic categories: research papers, citations, institutional renown, intellectual genealogy.
Since the 1980s, university administration has become increasingly structured and formalized. Functions that used to be accomplished by researchers themselves under the heading of academic service have been progressively overtaken by a professionalized caste of research managers. In fact, the growth of these administrative positions (and of the salaries of senior university administrators) has been the largest area of growth in many university budgets. Between 1988 and 2008, administration costs at Canadian universities jumped from 12% of overall operation budgets to 20%.
The financial dimension clearly hasn’t been to the advantage of the professoriate, but neither has the shift of administrative tasks to a dedicated staff freed up their time: there are simply more administrative routines in place. The net effect is that professors now have less administrative control of universities, yet more administrative work to do than they did when they ran the administration themselves. Administrators arrive with strategic plans, deliverables and other bureaucratese (of questionable value), most of which is rolled out in an attempt to improve the ranking of their institution along one of a growing number of competitive assessment exercises. Notably, both the strategies to improve one’s ranking and the methodologies to determine one’s ranking are artifacts from the administrator’s world.
Whither the impact?
It is in this context that the Financial Crisis of 2008 took place. If there was not already an intense pressure before the Crisis for public dollars to lead to explicit, demonstrated positive impact for society, that pressure has certainly reached a fever pitch in the years since. Public funding of research—not a trivial share of public spending—of course found itself needing to respond to this call as well. It is at this point that the primary goal of the competition changes: “societal” “impact” is called for, despite the fact that neither of these concepts is given a clear definition.
I’ll break off the discussion here for now. With this history in hand, we’re in a good position in the next installment to start diagnosing the dynamics currently at play in discussions of research impact assessment.
All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.