Interdisciplinarity: the mutual adjustment of concept and indicator
December 8, 2016

In their recently released report, Digital Science describe and assess the relationship between a range of candidate indicators for interdisciplinarity. Their objective is to assess the consistency between the indicators and to discern a front runner in this race. Ultimately, though, they conclude that the indicators produce inconsistent findings, and they finish with some remarks about the responsible use of research metrics, many of which are important and timely. Along the way, however, I think they make an important but tacit assumption about interdisciplinarity, one that’s worth bringing to the surface to flesh out the picture of how to responsibly overcome the difficulty of measuring and adopting interdisciplinary approaches to research.

Before leaping into the fun, I’ll point out that two of the indicators described by Digital Science in their report were produced by Science-Metrix, the research evaluation firm where I work. I don’t think that Adams et al. have mischaracterized the indicators in their presentation, but I do feel that their application and interpretation of these indicators differs importantly from ours. Outlining our own approach seems valuable here, as it addresses the main difficulties they’re pointing to. These two indicators build on foundations laid by a number of scholars; they were not built in-house from the ground up. Furthermore, we deploy these indicators recognizing that they are still evolving; we explore the underlying microdata extensively to check for consistency and reliability, and to help us interpret the meaning of findings. With these disclaimers behind us, let’s dive in, starting with a synthesis of the Digital Science report.

Adams, Loach and Szomszor compare and contrast five indicators, which can be divided into three categories: indicators based on the departmental affiliations of contributing researchers, indicators based on the fields of research cited in an article, and semantic analyses (of abstracts/project summaries, not full text). Indicators based on departmental affiliation are grouped under the heading of multidisciplinarity (MDR), while the citation-based and textual-analysis indicators are grouped under the heading of interdisciplinarity (IDR).
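
To make this three-way split a bit more tangible, here is a minimal, purely illustrative sketch (in Python) of how a department-based MDR proxy and a citation-based IDR proxy might be computed for a single paper. The function names, the share-of-distinct-departments proxy and the entropy-based scoring are my own assumptions for the sake of the example; they are not the formulas used in the Digital Science report or at Science-Metrix.

```python
from collections import Counter
from math import log

def mdr_score(author_departments):
    """Toy MDR proxy: share of distinct departments among a paper's authors."""
    if not author_departments:
        return 0.0
    return len(set(author_departments)) / len(author_departments)

def idr_score(cited_fields):
    """Toy IDR proxy: normalized Shannon entropy of the fields cited in the reference list."""
    counts = Counter(cited_fields)
    if len(counts) < 2:
        return 0.0
    total = sum(counts.values())
    entropy = -sum((c / total) * log(c / total) for c in counts.values())
    return entropy / log(len(counts))

# A paper written entirely within one department can still draw on many
# fields in its references: low MDR, high IDR.
print(mdr_score(["Physics", "Physics", "Physics"]))               # ~0.33
print(idr_score(["Physics", "Chemistry", "Biology", "Physics"]))  # ~0.95
```

Even in this toy version, the two scores answer different questions, so there is no reason to expect them to move in lockstep.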

Looking at data from a defined set of countries, over a defined set of years, and using the ANZSRC disciplinary taxonomy, their analyses show that the MDR and IDR indicators have no statistically significant correlation with each other, although the authors highlight the negative (if admittedly non-significant) correlation for its full rhetorical value, of the nudge-nudge-wink-wink variety. However, the authors note that: “Contradiction between two indicators does not mean either are invalid.” Of course, different indicators may simply be tracking different underlying phenomena, which would explain discrepancies between them. Nonetheless, Adams et al. use the language of “inconsistency” and “contradiction” throughout, and even conclude with the note that these indicators fail a “basic ‘valid and equitable’ requirement.” So much for valid but independent indicators. On this basis, the authors conclude that existing interdisciplinarity indicators should only be used to support, rather than replace, expert judgment, and they make a few recommendations about data quality and availability.
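
As a rough illustration of the kind of consistency check being described, the sketch below (a hypothetical example, not the report’s actual statistical procedure) correlates an invented MDR series and an invented IDR series computed over the same units and tests the association for significance, which is how a weak, non-significant negative association of the sort mentioned above would surface.

```python
# Consistency check between two indicator series over the same units
# (e.g., countries or fields). All values here are invented.
from scipy.stats import spearmanr

mdr = [0.18, 0.21, 0.27, 0.30, 0.35, 0.42]  # hypothetical MDR values
idr = [0.50, 0.62, 0.42, 0.55, 0.38, 0.47]  # hypothetical IDR values

rho, p_value = spearmanr(mdr, idr)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

if p_value > 0.05:
    # A negative rho with p above 0.05 matches the pattern Adams et al.
    # point to: a negative but non-significant association.
    print("No statistically significant correlation at the 5% level.")
```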

I feel that these indicators have not really been given adequate treatment in their assessment. After all, the disciplinary mix of researchers involved in a given project is a tangibly different matter from the disciplinary mix of sources of knowledge they build upon in carrying out that project (some evidence of which we find in citations). Briggle and Frodeman’s notion of “disciplinary capture” illustrates this point nicely: sometimes in an interdisciplinary group, one discipline’s methods, data sources, evidential thresholds, etc., are allowed to sit in judgment over the other contributing disciplines. When one discipline is granted “home-court advantage” in this way, we see disciplinary capture, and the phenomenon shows how gaps can open up between the disciplinary representatives one finds within a mixed team and the actual blend of knowledge on which they ultimately draw. Accordingly, having independent indicators for these two facets of disciplinary makeup is valuable, as they measure two distinct (and sometimes diverging) features.

The authors at Digital Science use this inconsistency between indicators to suggest a disconnect between the metadata used for analysis and the underlying reality, a disconnect that would undermine the usefulness of quantitative indicators for research management. While representative data is surely an important consideration, one not to be taken lightly and one that’s relevant in the disciplinarity discussion at play here, I feel that they’ve smuggled in an important assumption. “[B]ecause the use of research metadata to create informative indices has been successfully applied in other areas of research management… [p]resent policy research often implicitly assumes that IDR can readily be identified and tracked.” Additionally, they note that: “Many researchers know interdisciplinarity when they see it, but not all see the same thing, and that makes life difficult for research funders and policy makers.” Furthermore, they observe that no consistent definition of interdisciplinarity seems to have emerged in the literature.

Adams et al. are concerned about the connection between contradictory indicators and the underlying phenomenon of interdisciplinarity, which policy tools are used to incentivize, but they never seriously consider the possibility that interdisciplinarity itself might not be a cohesive, consistent underlying phenomenon. The policy sphere has identified multi/inter/transdisciplinary research as a range of valuable approaches, and is trying to create incentives to promote them. These incentives require the ability to identify and assess these attributes. However, the value of these multi/inter/transdisciplinary approaches (to accomplish what?) can only be based on intuitions: after all, we’ve just acknowledged that we can’t agree on what interdisciplinarity is, nor track it effectively. We all recognize it when we see it, but there’s no agreement on what it is that we’re seeing, or whether we even identify the same things as interdisciplinary. In such a situation, I’m hard-pressed to see how any kind of evidence could have been gathered and used as the basis for determining that interdisciplinarity is worth promoting, even if it’s something that I personally value.

The authors highlight the problem of identifying and assessing interdisciplinarity, hanging the problem of false simplicity on the purveyors of indicators—always guilty of pushing a one-number solution to a complex and nuanced problem. But surely the buzzword itself is responsible for a fair share of the perceived unity and clarity of this notion. Maybe indicators have a hard time tracking interdisciplinarity because we’ve actually been fooled by the constant repetition of buzzwords into believing that we have a decent grasp of what the underlying phenomenon really is, when in fact our notion remains vague at best. Meanwhile, we continue using this notion as the basis for decision-making about billions of research dollars around the world—without reliable evidence to demonstrate what this approach is good for (or not good for, now that I think about it).

So, where do we go from here? What responsible actions can we take to improve this situation? Two suggestions from Adams et al. seem to be right on the mark: first, we need to be more explicit in considering what our indicators are telling us (and what they aren’t); second, we should take a “framework” approach to measuring disciplinarity, approaching it from many angles to capture its many facets. Given the concern that I’ve raised about the vagueness of our communal underlying notion, I would put forward a further suggestion. The indicators canvassed by Digital Science are explicit in what they are assessing: some measure the collaboration of authors across disciplines, some measure the fields of research integrated in the resulting work (using citations as a proxy, albeit an imperfect one), and some assess the semantic proximity of the text to the technical vernacular of various fields. I feel that I can grasp quite tangibly the phenomena that these indicators are tracking.
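
As a toy illustration of that third family, the sketch below scores an abstract’s proximity to two hand-picked field vocabularies using a simple bag-of-words cosine similarity. The vocabularies, the tokenization and the similarity measure are my own simplifications, not the semantic method used in the report.

```python
from collections import Counter
from math import sqrt

# Hypothetical, hand-picked field vocabularies
FIELD_VOCAB = {
    "economics": "market price labour demand equilibrium policy",
    "ecology": "species habitat population ecosystem diversity policy",
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def field_proximity(abstract: str) -> dict:
    """Score how close an abstract sits to each field's vocabulary."""
    words = Counter(abstract.lower().split())
    return {field: round(cosine(words, Counter(vocab.split())), 2)
            for field, vocab in FIELD_VOCAB.items()}

abstract = "We model how habitat policy shapes species diversity and market demand"
print(field_proximity(abstract))
```

An abstract that sits close to several fields’ vocabularies at once would, on this reading, be a candidate for interdisciplinary content, regardless of where its authors hold their departmental appointments.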

However, while being explicit and forthcoming about the meaning of these indicators is surely an important part of using them responsibly, I think the Digital Science authors don’t go far enough in exploring the potential of these indicators to actually help us to clarify our notions about the underlying phenomena themselves. For example, when confronted with the response that these indicators still fall short of capturing the full breadth of what’s meant by “interdisciplinarity,” having a few indicators in hand (each with an explicit meaning) allows me to push back and ask what specifically is still not being captured. This kind of question brings us to a juncture—a critical intersection of the policymaking and research evaluation spheres—where we can make considerable headway in bringing clarity to a term that gets applied liberally but defined loosely.

This meeting point is of great importance in the evidence-based policymaking process, and it’s one where policy research firms need to remain sharp: not just communicating research findings across The Great Divide (like lobbing a grenade over a wall and plugging one’s ears), but actually working with policymakers to define an assessment approach that maintains the delicate equilibrium of being feasible to carry out, worth carrying out, and packaged in a way that’s politically sensitive enough to actually move forward. Working towards such an assessment approach can contribute importantly to defining the very object that the policy is targeting.

This problem goes beyond communication of findings, reaching towards the co-constitution of knowledge and its collaborative implementation in action—but we cannot recognize the importance of such moments if we don’t acknowledge the possibility that a concept from the policy sphere might actually still be under-defined by the time it finds itself at our doorstep, to be further explored in conversation with policymakers. These are difficult conversations to have, made even more so by a host of institutional, hierarchical and cultural realities. Finding an indicator for interdisciplinarity is just the tip of the iceberg.

 

All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.


About the author

Brooke Struck

Brooke Struck works as a policy analyst at Science-Metrix in Montreal, where he puts his background in philosophy of science to good use in helping policy types and technical types to understand each other a little better every day. He also takes gleeful pleasure in unearthing our shared but buried assumptions, and generally gadfly-ing everyone in his proximity. He is interested in policy for science as well as science for policy (i.e., evidence-based decision-making), and is progressively integrating himself into the development of new bibliometric indicators at Science-Metrix to address emerging policy priorities. Before working at Science-Metrix, Brooke worked for the Canadian Federal Government. He holds a PhD in philosophy from the University of Guelph and a BA with honours in philosophy from McGill University.

Related items

You may check these items as well

Team diversity widget: how do you measure up?

Collaboration and disciplinary diversity are hot t...

Read more

Canadian Science: mandate update from Minister Duncan

Kirsty Duncan (Canadian federal Minister of Scienc...

Read more

The new face of the science–policy interface

The new Chief Science Advisor position is the top ...

Read more
