Bibliometrics · Science policy
Indicating a neoliberal tendency
October 25, 2017

Continuing on from my previous discussion of impact, the second keynote speech at the 2017 Science & Technology Indicators (STI) conference in Paris was given by Rémi Barré of IFRIS, who echoed many of the points raised by Ismael Rafols. Barré’s call to action—riffing on a very traditional theme—was, “Les indicateurs sont morts! Vive les indicateurs!” Indicators are dead! Long live indicators! The call was provocative, and his talk highlighted some interesting ways in which the struggles we face in research & innovation management are symptoms of a broad and powerful trend in the political sphere: neoliberalism.

Dr. Barré divided the history of research & innovation indicators into three phases. The first, situated roughly in the earlier part of the twentieth century, was defined by its conception of indicators as fundamentally elucidating: the development of these indicators held the potential to illuminate the functioning of science, and that illumination promised to improve our ability to do science. The indicators themselves were also quite modest, having only just gotten their start; data sources and computing power were much less developed than they are now, which strongly constrained the availability, complexity and diversity of indicators.

However, with the rise of Thatcher, Reagan and neoliberalism in the 1980s, there was a considerable shift in the vision of public management, and Dr. Barré identified a parallel shift in our attitude towards indicators and science. His definition of neoliberalism is the exposure of public programs—wherever and whenever possible—to competition and to market forces. With universities moving more and more towards a corporate-style approach to governance in the 1990s, this broader shift would come to have important impacts on research. On his interpretation, indicators delivered research into the hands of neoliberalism: measurement and rankings are fundamental tools with which to establish a basis for competition.

The vision of indicators in this period shifted to what Dr. Barré calls their “agnotological” function: indicators as manufacturing ignorance about the inner workings of science rather than elucidating those inner workings. This is the second age of indicators, and during this period, the drive to optimize according to the dimensions tracked by the indicator takes precedence over the drive to use the indicators as a way to understand what is going on. Competition takes the place of elucidation as the main value of indicators. We no longer problematize, we simplify.

Of course, another interpretation of this shift is possible. If we have good indicators, one might assume that using them for management—for instance, by integrating a number of indicators to create university leaderboards—would lead to an overall improvement in the research ecosystem. After all, if what we’re measuring is of any relevance, then surely improving along those dimensions is an improvement tout court. I won’t address that point here, as doing so would be overly ambitious.

However, I will remark that as soon as we create leaderboards, we immediately create winners and losers; as soon as we start benchmarking against an average or a median, we immediately create a situation where some are “lagging behind the group.” This sense of competition, and the urgency that it creates, is completely divorced from the health of the overall system or any of its members. Even if everything were working just great, leaderboards and benchmarking would suggest that (roughly) half the group is “underperforming.”
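
To make that arithmetic concrete, here is a minimal sketch (a toy simulation of my own, not anything Barré presented) in which every unit in a cohort performs well, yet median benchmarking still brands half of them as laggards. The cohort size and scores are invented for illustration.

```python
import random

# Toy cohort: every unit performs well (scores between 85 and 95).
random.seed(42)
scores = [90 + random.uniform(-5, 5) for _ in range(20)]

# Benchmark against the cohort median, as leaderboard-style
# assessments implicitly do.
median = sorted(scores)[len(scores) // 2]
laggards = [s for s in scores if s < median]

print(f"Median score: {median:.1f}")
print(f"'Underperforming' units: {len(laggards)} of {len(scores)}")
# By construction, roughly half the cohort falls below the median,
# even though no unit scores lower than 85 out of 100.
```

No matter how well the cohort does, the benchmark moves with it, so the supply of “laggards” never dries up.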

Another problematic dimension of the shift described by Barré is that every indicator encodes a certain worldview and a set of values. Dr. Barré raises the concern that the neoliberal view of indicators paves over these realities. The indicator is allowed to take the place of the real thing it’s meant to indicate: citations gradually come to be what impact means, until citation = impact. Some party or other will always benefit from the values encoded in an indicator.
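
To see where those values hide in practice, here is a minimal sketch of a composite leaderboard built from two indicators, where the choice of weights alone decides who “wins.” The universities, indicator names, scores and weights are all invented for illustration.

```python
# Hypothetical composite leaderboard: three invented universities
# scored (0-100) on two invented indicators.
universities = {
    "Univ A": {"citations": 95, "teaching": 60},
    "Univ B": {"citations": 70, "teaching": 90},
    "Univ C": {"citations": 80, "teaching": 78},
}

def rank(weights):
    """Rank institutions by a weighted sum of their indicator scores."""
    composite = {
        name: sum(weights[k] * v for k, v in scores.items())
        for name, scores in universities.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

# The same data yields two different "objective" leaderboards:
print(rank({"citations": 0.8, "teaching": 0.2}))  # ['Univ A', 'Univ C', 'Univ B']
print(rank({"citations": 0.2, "teaching": 0.8}))  # ['Univ B', 'Univ C', 'Univ A']
```

Nothing in the underlying data changes between the two rankings; only the weights do, and those weights are exactly where the worldview lives.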

When indicators are allowed to become naturalized in this way (treated as objects in nature rather than measurements of it), those who benefit most from those indicators end up in the enviable rhetorical position of claiming that their advantages are simply natural, not the result of anyone’s choice but rather just the way that things are. The less fortunate end up in a rather unenviable position, because any critique they level at the system is too easily dismissed as sour grapes: an attempt to put in place an “unnatural” system that would provide them with “unfair” advantage, to replace the system only because they are losing under it. Those who are winning the game are the only ones who may critique its rules, and they are the ones with the greatest interest in the indicator game remaining one of simplification rather than problematization.

The initial promise of indicator development was to improve science, specifically through elucidation of its functioning. The neoliberal turn retained the goal of improving science, but through competition rather than through elucidation, maintains Barré. Ultimately, however, he argues that these intense competitive pressures—refracted through the lens of the specific indicators we use—have brought science to the point of multiple crises rather than leading to its improvement. We are staring down the barrels of a reproducibility crisis, a crisis of confidence in science (both internal and external), a crisis of relevance, and a number of other crises depending on whom you ask.

According to Barré, the scientometric community—developers and purveyors of R&I indicators—has been aware that its work enables the neoliberal turn, and has been uncomfortably complicit in this turn through a lack of organized resistance to it. This complicity has lasted far too long, he says, and in my next post I’ll discuss the positive program that he outlines, under a heading of something like organized resistance. This is the third age of indicators to which Barré alludes.

 

Note: All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.


About the author

Brooke Struck

Brooke Struck works as a policy analyst at Science-Metrix in Montreal, where he puts his background in philosophy of science to good use in helping policy types and technical types to understand each other a little better every day. He also takes gleeful pleasure in unearthing our shared but buried assumptions, and generally gadfly-ing everyone in his proximity. He is interested in policy for science as well as science for policy (i.e., evidence-based decision-making), and is progressively integrating himself into the development of new bibliometric indicators at Science-Metrix to address emerging policy priorities. Before working at Science-Metrix, Brooke worked for the Canadian Federal Government. He holds a PhD in philosophy from the University of Guelph and a BA with honours in philosophy from McGill University.

Related items


The new face of the science–policy interface

The new Chief Science Advisor position is the top ...

Read more

Is non-science non-sense?

At the beginning of November, I attended the Canad...

Read more

Metrics: state of the alt

Discussions of research having impact were for a l...

Read more

There are 2 comments

  • Scott says:

    Brooke – a very lucid and useful essay. I read your later essay on the death of indicators first – led there by your SciSIP post today. In the death of indicators I wasn’t quite sure what you or Barré were referring to by neoliberalism, since that is also a term that has been co-opted in many respects. This essay cleared that up nicely and framed the issues in a useful political and policy context.

    My specialization is economic development and in particular the role of innovation in driving economic growth. This essay helped me put a more formal and theoretical frame around my observations of innovation and economic development policies and indicators. First, since the 1990s we have too often shaped policies around indicators in a very simplistic way. More pointedly, we have pursued economic development policies based on what we could measure and translate into a jobs metric.

    This drove investments that were heavily skewed towards capital projects and for a long time ignored important investments in building and strengthening innovation and entrepreneurial ecosystems, because they were hard to measure and did not translate well into “jobs created” within the typical two- or four-year political term. This tendency has started to change in the US with efforts like Manufacturing USA (formerly NNMI) and the Kauffman Foundation’s ESHIP initiative.

    But we still lack good metrics, and that is what I am working on. One thing I’m clear on is that we need to get away from comparative metrics (indexes and other metrics that compare one region to another) because they are useless to practitioners in the field. I’m focused on formative (to use your terminology) indicators and tools that help practitioners at the local level do their jobs and improve their communities. From there it is easy enough to aggregate upwards. It’s also easy enough to compare such indicators across geographies. In the end, those who want to focus on comparison and competition can still do so.

    As a practical matter, the way to sell this kind of change to an entrenched neoliberal establishment is to focus on value-added. This new approach and these new metrics provide similar and perhaps modestly improved comparative metrics at the policy level. However, the real value added comes from the way that such new metrics help practitioners in the field implement those policies faster and more effectively.

    Thanks again for some great writing.

    • Brooke Struck says:

      Thanks very much for your comment, Scott, and it sounds like you’re doing interesting and valuable work.

      One point that I would re-emphasize is that we should be thinking about the roles of various stakeholders in designing and implementing our measurement frameworks. It sounds like you’re keenly aware of the needs of users in your design. Have you also engaged the “objects” of your indicators—the people that are measured by them? Are you taking a primarily arm’s-length or primarily collaborative approach (or if you’re mixing them, how do you manage the mixture)?

      Thanks again, and looking forward to hearing more.