Bibliometrics, Higher education, Science policy
A short history of research impact
September 14, 2017

During the 2017 Science & Technology Indicators (STI) conference in Paris, a number of discussions touched on impact assessment, a topic of growing interest within the research community. That researchers are increasingly aware of impact, impact pathways and impact assessments comes as no great shock, given that the research policy community is increasingly focusing on impact as the basis for funding decisions. The discussions at STI raised some substantive concerns with the current trajectory of the conversation about research impact. In this post, I’ll lay out some relevant history (as I understand it) that contextualizes current discussions about impact. In the next installment, I’ll summarize those points from STI 2017 that stood out to me as the most insightful (and provocative), drawing on the history laid out here to explore what I think these comments reflect about the underlying research system.

19th century

Let’s start with an overly simplistic story. A long time ago—probably in the 19th century, when universities started to speciate into thematically focused departments, but before these divisions congealed into a structure for institutional measurement and control—ideas were the major currency of the academic realm. To speak of the “impact” of an academic’s work was to speak of their ideas influencing and shaping the work of their contemporaries as well as future generations of scholars. These discussions about the impact of research were mostly conducted by academics, for academics, and with impact meaning specifically an impact on other academics.

Ideas were propagated through institutions and supervisory relationships. Ideas were also exchanged in writing through peer-reviewed journals, which started up around the same time the speciation began and grew in number as thematic speciation progressed. These features of the landscape have come to shape the shorthand by which most academic work is assessed, a shorthand that persists to this day. We ask where researchers studied and who supervised them (usually focusing specifically on their doctoral work), which prestigious institutions they’ve worked at, and which journals they publish in. These continue to be important forms of professional capital in the academic world.

Early & mid 20th century

During the Second World War, research budgets ballooned in order to support the development of (among other things) technology that was deemed critical to victory. With the hostilities drawing to a close, the research enterprise was obviously going to undergo a radical transformation of one kind or another. This is the era in which Vannevar Bush’s renowned and influential report “Science, the Endless Frontier” was submitted to President Truman.

With this transformation, science and technology indicators began to flourish, offering the promise of contributing to an effective peacetime research system. The first Frascati Manual was put together in 1963. Much of this early approach to measurement dealt with standardizing administrative data about personnel and spending, providing definitions for employment categories (researchers, technicians, etc.), for sectors of research funding and performance, for thematic taxonomies of research topics, and for categories such as basic and applied research, technological development, and the like. Other important developments took place simultaneously in the nascent field of bibliometrics, using data obtained through bibliographic fiches from peer-reviewed papers and journals. This initial work developed the progenitors of our current indicators of publication output, citation rates, and so forth.

Late 20th century

Indicators have evolved considerably since this time. Public management has evolved as well, especially since the 1980s, when there was a notable increase in demand for measurability. This pressure from public management began to be felt in the research world in the 1990s, with bibliometric indicators taking on a large (and growing) role in research management and policy. However, the fundamental ontology of these measurements—the kinds of things that exist in the world, so far as the indicators can see—is still dominated by primarily academic categories: research papers, citations, institutional renown, intellectual genealogy.

Since the 1980s, university administration has become increasingly structured and formalized. Functions that used to be accomplished by researchers themselves under the heading of academic service have been progressively taken over by a professionalized caste of research managers. In fact, these administrative positions (and the salaries of senior university administrators) have been among the largest areas of growth in many university budgets. Between 1988 and 2008, administration costs at Canadian universities jumped from 12% of overall operating budgets to 20%.

The financial dimension clearly hasn’t been to the advantage of the professoriate, but neither has the shift of administrative tasks to dedicated staff freed up their time: there are simply more administrative routines in place. The net effect is that professors now have less administrative control of universities, yet more administrative work to do than they did when they were the administrators. Administrators arrive with strategic plans, deliverables and other bureaucratese (of questionable value), most of which is rolled out in an attempt to improve the institution’s standing in one of a growing number of competitive assessment exercises. Notably, both the strategies to improve one’s ranking and the methodologies to determine one’s ranking are artifacts from the administrator’s world.

Whither the impact?

It is in this context that the Financial Crisis of 2008 took place. If there was not already intense pressure before the Crisis for public dollars to lead to explicit, demonstrated positive impact for society, that pressure has certainly reached a fever pitch in the years since. Public funding of research—not a trivial share of public spending—of course found itself needing to respond to this call as well. It is at this point that the primary goal of the competition changed: “societal” “impact” is now called for, despite the fact that neither of these concepts has been given a clear definition.

I’ll break off the discussion here for now. With this history in hand, we’re in a good position in the next installment to start diagnosing the dynamics currently at play in discussions of research impact assessment.

 

All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.


About the author

Brooke Struck

Brooke Struck works as a policy analyst at Science-Metrix in Montreal, where he puts his background in philosophy of science to good use in helping policy types and technical types to understand each other a little better every day. He also takes gleeful pleasure in unearthing our shared but buried assumptions, and generally gadfly-ing everyone in his proximity. He is interested in policy for science as well as science for policy (i.e., evidence-based decision-making), and is progressively integrating himself into the development of new bibliometric indicators at Science-Metrix to address emerging policy priorities. Before working at Science-Metrix, Brooke worked for the Canadian Federal Government. He holds a PhD in philosophy from the University of Guelph and a BA with honours in philosophy from McGill University.

Related items

Team diversity widget: how do you measure up?

Canadian Science: mandate update from Minister Duncan

The new face of the science–policy interface

There are 2 comments

  • I think there might be an element of the story left out — the role of the “promises” made by the academic research community to address or “cure” societal ills to justify repeated requests for increased public funding. I believe the funding advocacy opened the doors for the need for metrics (and administrators) so that it could be quantitatively demonstrated that the promises were being fulfilled.

    • Brooke Struck says:

      I quite agree! With the transition from mostly private funding to mostly public funding for research, there had to be some agreement that the public would enjoy benefits in return for financial support. This is the social contract of science. But the calls for _demonstration_ that these promises were being fulfilled have intensified in more recent years, or so I would argue.

      The earlier model of science delivering social benefit, according to which there was a general but somewhat ethereal diffusion of advances from the research community outwards, simply isn’t accepted anymore. We’re hungrier now for tangible, demonstrable impact—and with that comes a shift towards more measurement, as well as behaviours from researchers that seek to actually make that impact more tangible.

      That is not to say that science had no impact before, that it wasn’t fulfilling its promise. Rather, the call to demonstrate that the promise is being fulfilled is feeding back into the mechanisms themselves through which impact takes place.

      Thanks for the comment, Susan!