Bibliometrics, Science policy
Taxonomy, objectivity and expectation
June 6, 2016

Interdisciplinarity is garnering a lot of interest in the science policy community right now—look no further than the legions of researchers desperate to highlight the interdisciplinary potential of their work in an attempt to latch onto the latest key that unlocks brimming coffers. A good deal of that policy interest has been channeled into attempts to develop robust bibliometric indicators to assess and track interdisciplinary work. But before we go charging off into that world, let’s take a brief look first at the philosophical underpinnings of the taxonomies of science on which interdisciplinary metrics are built.

Caroline Wagner & co. give a great overview of the topic in their 2011 review of the literature. There’s even a nice history lesson about disciplinary divisions coalescing in the 19th century and becoming solidified hurdles to collaboration by the middle of the 20th century (while noting a persistence of at least some individuals who refuse to be pigeonholed). The main drivers of this segmentation included the complexity of scientific topics, as well as the practical need to divide organizational structures and funding into manageable chunks.

Remnants of an old, intellectually integrated world have a nostalgic and quaint feeling about them. Take philosopher Ernst Cassirer for instance, who until his death in 1945 seemed to keep up private correspondence with just about everybody who was interesting, running the full gamut from physicist Albert Einstein to art historian Aby Warburg to gestalt psychologist Kurt Goldstein. Each of these personal acquaintances importantly influenced his philosophy of symbolic forms, which is basically a philosophy of everything—the kind of grand, sweeping intellectual scope that is out of fashion these days, and for which some still wistfully long. Was Cassirer the last Renaissance Man?

But coming back to the practical matter of bibliometric measures of disciplinarity and interdisciplinarity, what options are currently out there? There are two main approaches to the classification of science: top-down and bottom-up. (How original, I know.) Top-down approaches are usually journal-based classifications, built from a taxonomy that’s stipulated from the outset for a given research project. Classifications of this kind are often based on the ISI scheme, which has been common currency for decades and has a high degree of intuitive appeal, clearly mapping touchstones of science such as physics, chemistry, and mathematics. Bottom-up classifications are usually constructed from relations between individual articles. Variants include classifications built on citation networks, semantic similarities between content, and others.
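To make the bottom-up idea concrete, here is a minimal sketch of one of its simplest variants: grouping articles by bibliographic coupling, i.e., by how strongly their reference lists overlap. The data, the Jaccard threshold, and the union-find grouping are all illustrative choices, not a description of any particular production classifier.

```python
# Bottom-up, article-level classification sketch: cluster articles
# whose cited-reference sets overlap strongly (bibliographic coupling).
# Toy data and threshold are illustrative only.

def jaccard(a, b):
    """Overlap between two sets of cited references."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_by_coupling(refs, threshold=0.3):
    """Group article ids whose reference lists overlap above threshold,
    using a simple union-find to merge connected pairs."""
    ids = list(refs)
    parent = {i: i for i in ids}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if jaccard(refs[a], refs[b]) >= threshold:
                parent[find(a)] = find(b)

    clusters = {}
    for i in ids:
        clusters.setdefault(find(i), set()).add(i)
    return list(clusters.values())

# Toy corpus: reference lists stand in for full citation data.
refs = {
    "p1": {"r1", "r2", "r3"},
    "p2": {"r2", "r3", "r4"},
    "p3": {"r9", "r10"},
}
print(cluster_by_coupling(refs))  # p1 and p2 couple; p3 stands alone
```

Note that nothing in this procedure refers to our intuitive disciplinary categories: the clusters emerge from the citation data alone, which is exactly why such schemes track drift well but can be hard to label in policy-relevant terms.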

The appeal of top-down classifications is that they map onto our intuitive concepts, and for policy purposes this makes the meaning of findings easier to interpret. They also provide the stability to collect data over time for longitudinal comparisons, including across studies. Their drawback is that they are challenged by journals that don’t fit nicely into a single category, as well as by disciplinary evolution and drift. The appeal of bottom-up classifications is that they track evolution and drift reasonably well. But their drawback is that they often fail to map our intuitive categories, and this makes the extraction of meaningful, policy-relevant conclusions quite difficult, especially over time.

What do we want out of an ideal taxonomy of science? At the very least, in addition to whatever other desiderata we might have, we probably want it to (1) track the realities of an evolving scientific research landscape in a reproducible way, and (2) connect in some fashion to the policy context in question so that it can form the evidential basis for decisions about science policy.

It sometimes seems like desiderata (1) and (2) above amount to wanting to have one’s cake and eat it too. We want our classification to tell us how things “really stand, in themselves,” but we also want the findings to be relevant to us. We want the world to answer on its own terms, but we won’t be happy unless our terms are satisfied too. Why do we want these things? Are we at once acknowledging, with Wagner et al. above, that policy and other practical matters had and continue to have an important role in shaping the course of science, and yet holding out hope that a classification of science will reflect an undisturbed, “objective” world of scientific inquiry, as though our peeking in on that world somehow weren’t having profound effects on it?

My suggestion is this: If we accept that the world of policy has effects on the course of scientific research, contributing to the sculpting of the disciplines, then we shouldn’t be shy about involving that policy context in our mapping of scientific disciplines. Using a seed of categorized documents and expanding that classification scheme to encompass an entire bibliometric database is a much less problem-ridden process than tossing a few million articles into the hopper and boiling off the water until only the “intrinsic” structure remains sedimented at the bottom. Perhaps what we should be looking for is not a consistent taxonomy to apply at all places and in all times (because even if science were the same all over, science policy certainly isn’t), but rather a consistent method to build such taxonomies on the basis of some input from the science side and some from the policy side.
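The seed-expansion idea above can be sketched as a simple label-propagation procedure: start from a small set of documents carrying policy-relevant category labels, and let those labels spread across a citation graph by neighbourhood majority vote. The graph, the labels, and the voting rule are all hypothetical simplifications, meant only to show the shape of such a method.

```python
# Seed-expansion sketch: propagate policy-relevant labels from a small
# seed set across a citation graph, one majority-vote round at a time.
# The toy graph, labels and round count are illustrative assumptions.

from collections import Counter

def expand_labels(graph, seed, rounds=5):
    """Spread seed labels to unlabeled nodes: each round, an unlabeled
    node adopts the majority label among its already-labeled neighbours."""
    labels = dict(seed)
    for _ in range(rounds):
        updates = {}
        for node, neighbours in graph.items():
            if node in labels:
                continue
            votes = Counter(labels[n] for n in neighbours if n in labels)
            if votes:
                updates[node] = votes.most_common(1)[0][0]
        if not updates:  # nothing left to label; stop early
            break
        labels.update(updates)
    return labels

# Toy undirected citation graph and a two-document labeled seed.
graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
seed = {"a": "energy policy", "d": "health policy"}
print(expand_labels(graph, seed))
```

The key design point is that the categories themselves come from the policy side (the seed), while the structure that carries them across the database comes from the science side (the citation graph), which is precisely the division of labour suggested above.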

Such an approach would be flexible enough to track scientific evolution and drift, and to connect to the policy needs of the day. And all we need to do is accept that there is no perspective-free perspective from which we ask these questions.


Note: All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.


About the author

Brooke Struck

Brooke Struck is the Senior Policy Officer at Science-Metrix in Montreal, where he puts his background in philosophy of science to good use in helping policy types and technical types to understand each other a little better every day. He also takes gleeful pleasure in unearthing our shared but buried assumptions, and generally gadfly-ing everyone in his proximity. He is interested in policy for science as well as science for policy (i.e., evidence-based decision-making), and is progressively integrating himself into the development of new bibliometric indicators at Science-Metrix to address emerging policy priorities. Before working at Science-Metrix, Brooke worked for the Canadian Federal Government. He holds a PhD in philosophy from the University of Guelph and a BA with honours in philosophy from McGill University.

Related items

You may want to check these items as well

Rationalizing the extremes: introducing the citation distribution index

The distribution of citations among the scientific...


1findr: discovery for the world of research

As of last week, 1science is offering public acces...


Positional analysis: from boring tables to sweet visuals

At Science-Metrix we are obviously very focused on...

