Mapping science: a guide to our Twitter series
March 14, 2018

Over the course of 2018, we’ll be publishing a series of maps via the Science-Metrix Twitter feed to visually illustrate some dynamics of the global science ecosystem. This blog post is the anchor for that series, explaining why we think these maps are important and what exactly they represent.

Monoliths we aren’t

When talking about research from China, discussants often sort themselves into two groups—the deflectors claiming that “they” steal intellectual property, and the acceptors claiming that at least “they” still believe in science and are taking action on climate change. By implication, one might infer that “we” do not steal intellectual property, that “we” don’t believe in science, that “we” aren’t taking serious climate action, and so forth.

Those claims pave over the very real differences among us. Some people steal intellectual property, some people have faith in science, and some people take real climate action. Some people do all of these things, some do none of them, and some do a few but not all. In short, these claims paint a picture of countries as homogeneous monoliths, even though we know the real world is full of nuance. The growing awareness and measurement of inequality provide us with valuable illustrations of those differences, both within the research sphere and in society more widely. Breaking down these monolithic conceptions can help us to see distinctions that are both real and really important.


Scientific publication output in China, by province (2010–2016): not a picture of a homogeneous monolith after all!

Here be dragons

Our series of visuals aims to bring some nuance back into these discussions, at least so far as science is concerned. But how can maps help us here? Maps are more complicated than they often appear. Old maps often strike us as quaint, what with their grotesque proportions, their religious figures watching over us from the upper margins, and especially their images of monsters around the point where our knowledge starts to blur—visual encodings of our fear of the unknown. Maps drawn from memory show how much we focus on and remember some areas but not others. Even the maps on classroom walls suggest quite incorrect conclusions, like Greenland being roughly as large as Africa, even though Africa is an order of magnitude larger!

These examples illustrate the point: our perceptions shape our maps, just as our maps shape our perceptions. Maps don’t just represent space; they construct it.

Drill here

How does all this relate to our Twitter series? Acknowledging that maps are social constructions of space, we thought it would be interesting to roll out a series of maps that drill down into regional bibliometric data. The internal differences these maps turn up will hopefully add a bit of nuance to our conception of the global research ecosystem.

To that end, our map series will cover the United States (data aggregated by state) and China (data by province), as well as Europe, Africa and Latin America (data by country, in each case). The indicators that we will cover include number of papers published, growth in output, citation impact, share of papers available in open access, share of papers published in international collaboration, share of women among authors, and thematic specialization. The maps will cover research overall as well as broken down by large thematic areas, following our taxonomy of science developed in-house at Science-Metrix.

A space for critique

Power dynamics lurk around every corner here. Maps are not neutral media, and the data that we choose to present and the way that we choose to present them have an effect on the message that these maps convey. A few caveats about the bibliometric data therefore seem important:

  • Bibliometric databases have language biases. In the cases of Web of Science (used for this series) and Scopus (another source we often use, but not this time), papers are only indexed if their title, abstract and journal title are available in English. Even if we accept English as the lingua franca of international research, that would still mean that what’s covered here is international research plus the research of English-speaking countries. That means the data are measuring different phenomena for different places.
  • English also plays a different role in different areas of research. For instance, international journals are more central in areas of the natural sciences than they are in the social sciences, where publishing on local issues in local journals in local languages is an important contribution to scholarship.
  • Bibliometric databases cover peer-reviewed journals, but even the role of journals varies across research areas. Books, for instance, remain a central medium for communicating research in the humanities, but neither the books themselves nor citations to those books are captured in bibliometric analysis.

These are just a few caveats, and they only concern the data. We haven’t even said anything about the selection of indicators, the way that they are computed, the way that they are laid out visually, or a whole host of other decisions that go into making maps. We encourage comments and discussion critiquing this project and suggesting alternative paths we might explore. (Unfortunately, we aren’t in a position at this time to offer online tools for users to generate their own maps.)

Nonetheless, we hope that this series will provide some glimpses of what’s going on in the (research) world in corners that we don’t often see visualized. Such glimpses can help to dissolve the monolithic images many of us have in mind, giving us a chance to engage with nuances about research that we might never otherwise see. And if nothing else, it’s just fun to visualize a bunch of different data and see what cool findings emerge!


Note: All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.


About the author

Brooke Struck

Brooke Struck is the Senior Policy Officer at Science-Metrix in Montreal, where he puts his background in philosophy of science to good use in helping policy types and technical types to understand each other a little better every day. He also takes gleeful pleasure in unearthing our shared but buried assumptions, and generally gadfly-ing everyone in his proximity. He is interested in policy for science as well as science for policy (i.e., evidence-based decision-making), and is progressively integrating himself into the development of new bibliometric indicators at Science-Metrix to address emerging policy priorities. Before working at Science-Metrix, Brooke worked for the Canadian Federal Government. He holds a PhD in philosophy from the University of Guelph and a BA with honours in philosophy from McGill University.

Related items

Positional analysis: from boring tables to sweet visuals

Bibliometric fun facts

2017: the best of
