The death of indicators
November 1, 2017

In last week’s post, I presented some of the major points of Rémi Barré’s keynote speech at STI 2017. In brief, he divides the evolution of S&T indicators into three phases. The first phase is one of indicators promising to elucidate (and thereby improve) the inner workings of science. The second phase is one of them being co-opted into the neoliberal turn, exposing scientific research to competitive pressures that Dr. Barré identifies as pushing science into multiple crises. The third phase is a casting off of the complicity with neoliberalism, using indicators to start opening up discussions about science & technology rather than shutting them down. In this post I’ll expand on this third phase.

It’s worth pausing to consider the magnitude of what Dr. Barré is saying. “Indicators are dead! Long live indicators!” he proclaims. What can be meant by this? That indicators are dead in their complicity with neoliberal ideology? Dead in their function to pave over (and ingrain) ignorance about science & technology? Surely tales of the death of indicators have been grossly overstated? Dr. Barré is not recording a death by natural causes—he’s calling for regicide. The third phase is the age we are being called to bring into being, not one that has already come to pass.

Dr. Barré said very little about the concrete action he envisions under the heading of bibliometrics casting off its complicity with the neoliberal order, or about how indicators might open up discussions about science governance rather than closing them down. However, a picture can start to be pieced together from points made in the margins of his address, along with other discussions that took place throughout the STI conference.

Summative vs. formative assessment

The line of discussion that I feel gives the clearest substance to this call to action is the distinction between summative and formative assessment. Summative assessment is usually conducted at arm’s length, using quantitative tools, for the purposes of auditing and control. In education, summative testing is used to determine the grade that a student will receive—a numerical representation of their achievement. Formative assessment is more participatory, often employing qualitative methods, for the purpose of learning and improvement. Coming back to our example of education, a formative assessment would be an assignment or quiz that is meant to identify the gaps in a student’s capabilities, so as to enable targeted corrective action and improve their skills.

Summative assessments are often more definitive, whereas formative assessments benchmark one stage in what is implicitly a continuous process. Because of these features, summative assessments are often used to measure competitors relative to one another, and the subjects themselves have a vested interest in downplaying their weaknesses as much as possible. By contrast, formative assessment often puts a person (or a program) in competition primarily with themselves, and the subjects will only benefit from the process if they are as forthright as possible about highlighting their shortcomings in order to be able to overcome them.

Towards a formative mode for indicators

I see a shift towards formative assessment as responding to the challenge that Dr. Barré has issued to the indicators community, for several reasons. First, because formative assessment is participatory, there is a greater involvement of all parties in the evaluation process; each has a role to play, and they must collaborate in order for the process to play out. This integration seems to fulfill Barré’s call for the indicator community to cast off its tacit complicity in the neoliberal turn: far from tacit complicity, the indicator community would be taking on a much more active role in research management.

This integration of bibliometric research into governance may make some uneasy, challenging the traditional separation of research from policymaking. However, that separation has ostensibly brought us to our current point of multiple crises, so surely trying another approach couldn’t be so much worse? And any substantial change is bound to come with some discomfort. I contend that we need to be much more active in policy discussions.

Second, I see formative assessment as supporting a shift towards indicators as a way to open up discussion rather than to close it down. When we compete against others, the easiest way to adjudicate is to have one clear, unambiguous criterion for assessment—hence the love affair with composite indicators that reduce everything to a one-number solution. As participants in that race, we easily let our goal become just to beat out the competition.

When we compete against ourselves, there’s more rhetorical space to reflect on what it is that we actually want to achieve. What is worth measuring, such that winning on that score would actually be worthwhile? What is worth achieving? This question seldom comes up in research evaluation, or it’s shut down immediately by identifying citations as impact (naturalizing indicators, as I discussed last week). Once this question is back on the table, we realize that the vertical integration of formative assessment does not only reach out on one side from the policymaker to the evaluators and the researchers being evaluated; it also reaches out on the other side to the political sphere. This certainly would be a more open discussion than we presently see.

The indicator community thus throws off tacit complicity in favour of explicit engagement—not simple critique from the safety of the sidelines, but active participation right in the thick of the action. And this engagement serves to open up discussions rather than closing them down.

Making it happen

How can any such shift be realized? The indicators community cannot shift these dynamics by any decision of their own. Policymakers are worried about accountability, and one can appreciate that a one-number solution offers them as clear an answer as they can get, as well as all the rhetorical backing that they would need to justify themselves if ever their judgment were called into question. Criticisms that this practice is untenable will fall on deaf ears without an alternative.

So that is exactly what the indicators community must provide. The increase in ambiguity for policymakers to manage must be offset by benefits elsewhere. One benefit would be to commit ourselves to helping out in this process: there’s more work to do, and we can be the first to line up with shovels in hand. A second, hopefully more important benefit would be to show that a formative approach to assessment will actually lead to better research—that participatory assessment yields better results.

Of course, to increase the value of that benefit, we need to create a situation where policymakers are accountable primarily for the outcomes that they achieve rather than the processes that they undertake. A policymaker defending a decision might say, “I took the most reasonable course of action with the information I had.” We may well support such a statement, but not in a situation where we know that the information they’re working with isn’t as good as it needs to be—especially when we ourselves are the information providers, and know that better is possible.

 

Note: All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.


About the author

Brooke Struck

Brooke Struck works as a policy analyst at Science-Metrix in Montreal, where he puts his background in philosophy of science to good use in helping policy types and technical types to understand each other a little better every day. He also takes gleeful pleasure in unearthing our shared but buried assumptions, and generally gadfly-ing everyone in his proximity. He is interested in policy for science as well as science for policy (i.e., evidence-based decision-making), and is progressively integrating himself into the development of new bibliometric indicators at Science-Metrix to address emerging policy priorities. Before working at Science-Metrix, Brooke worked for the Canadian Federal Government. He holds a PhD in philosophy from the University of Guelph and a BA with honours in philosophy from McGill University.

Related items

The new face of the science–policy interface

Is non-science non-sense?

Metrics: state of the alt
