Discussions of research having impact were for a long time limited to citation tracking, to estimate how much one piece of work influences subsequent explorations. However, with calls for research to have impact on the broader society—breaking out of the closed circle of research feeding yet more research—there’s a lot of interest in seeing how we might trace that impact pathway as it breaks through the membrane insulating the world of research. Altmetrics has held the promise of tracking just such traces. At STI 2017, several leading researchers on the topic gave valuable updates on the state of the art, and their estimation is that we should be seriously cooling it with all the hype. This post sums up the points that stuck out to me from their various presentations, and tries to outline my takeaway of what we should be learning from altmetrics.
What are altmetrics, anyway? The definition—which is still a matter of discussion among experts—is at its broadest any alternative to metrics built on bibliographic databases, and this negative definition of course allows it to sweep in nearly anything under the sun. However, a more positive, constrained definition is that altmetrics covers all research metrics built on web data, or perhaps even on social media data specifically. This definition issue may seem trivial, but when it comes to the social perception of these indicators, grouping them sensibly is important because the indicators grouped together are likely to stand or fall as one. Perceptions of validity will be shaped by the way these indicators are aggregated, so if we group dramatically heterogeneous indicators under this single umbrella, we’re setting ourselves up for a greater challenge later when it comes to collaborating with users to implement these indicators.
In general, Twitter is the single most important source of data used for altmetrics, as it is the source where the signal is the strongest (by a very wide margin) and where meaningful findings have a reasonable chance of emerging. That fact in itself is revealing of how rarely research gets taken up in social media: there’s only one platform on which it gets used enough to even consider implementing quantitative approaches to measurement. That means that platforms such as Facebook and YouTube—which have substantive influence on the views of society—are areas where the presence of research is basically negligible. In terms of assessing impact on society, that finding strikes me as highly relevant.
Who links to research on social media?
On Twitter, research has a larger presence. Let’s start dissecting that: who are the Twitter users that engage most often with research? The principal users are organizational accounts, rather than personal ones. Furthermore, many of these organizations are the journals or publishers responsible for issuing the papers they’re discussing. Measuring this activity quantifies research being pushed out into the world, not the world’s pull on research. Using these data to assess wider societal impact of research would be like measuring customer appetite for a company’s products by counting the number of billboards around town that have the company’s name on them—the customers aren’t the ones who put the billboards there!
One qualitative analysis from STI 2017 identified three predominant types of tweeting behaviour:
- Single-issue advocates: one account, hammering away on a single topic, linking the same research paper over and over again, often using exactly the same tweet text each time
- Social media managers: several organizational accounts, pushing the same paper in a campaign with concerted timing, often using identical tweet texts (or very minor variations)
- Broader engagement: a diversity of user accounts, a diversity of tweet texts, linking to research papers, and often using hashtags to connect the work to broader ongoing debates on the wider platform
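As an illustration only, the three behaviour types above could be roughly separated with simple heuristics on account diversity and tweet-text diversity. This sketch is hypothetical: the function name `classify_mentions` and the thresholds are my own assumptions, not the method used in the STI 2017 analysis.

```python
def classify_mentions(tweets):
    """Heuristically label the tweeting pattern around one paper.

    `tweets` is a list of (account, text) pairs, all mentioning the
    same paper. The thresholds below are illustrative guesses, not
    empirically derived cut-offs.
    """
    accounts = {account for account, _ in tweets}
    texts = {text for _, text in tweets}

    # One account repeating (near-)identical text: the advocate pattern.
    if len(accounts) == 1 and len(texts) <= 2:
        return "single-issue advocate"

    # Many accounts but near-identical text: a coordinated campaign.
    if len(texts) <= max(2, len(tweets) // 5):
        return "social media campaign"

    # Diverse accounts and diverse texts: broader engagement.
    return "broader engagement"
```

Real classification would also need the timing of tweets and the hashtags used, but even this crude split shows that the three patterns are distinguishable from data that altmetrics providers already collect.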
What’s the value of a “like”?
This assessment, along with others, raises serious questions about whether simply counting social media mentions can serve as a reliable indication of uptake. The lesson the speakers drew from this conclusion is that we have so far treated altmetrics too much like bibliometrics, assuming that social media links and bibliographic citations are relatively interchangeable and therefore only need to be counted. They urged us to treat altmetrics differently, exploring the qualitative dimensions of links and mentions as well. My own conclusion is that this revelation should make us much more circumspect about bibliometrics too.
Far from treating altmetrics like bibliometrics, we might consider treating bibliometrics like we treat altmetrics—not taking for granted that all bibliographic citations are equal. These aren’t the early days of bibliometrics anymore; we’re not tracing citations one box of index cards at a time. With the data and computing power now at our disposal, we no longer need to rest satisfied with simple citation counts. We have the tools to start exploring types of citation behaviour, reflecting critically on their meaning and value. Analogous explorations in altmetrics suggest that very few mentions of previous research actually reflect constructive development built upon the work of others.
Some other similarities and differences are worth highlighting. In the peer-reviewed literature, authors often put their ideas out there in a journal, but then are rather passive. They may present work at conferences and discuss it over coffee in the department, but they don’t engage in anything like a coordinated marketing campaign to reach the users of their research. When it comes to engaging the general public, the challenge the indicators community faces right now is not primarily one of measuring uptake that we know is there; the problem is that a lot of research simply never makes its way out into society, and much of what society does is unaffected by research in any direct way. And yet social media strategies for promoting the uptake of research are inspired by behaviour in the academic sphere: simply put it out there and they will come to drink at the waters of knowledge.
Market like a pro!
The competition for people’s attention is enormous. The research community is competing with powerful and well-coordinated economic interests, who rely on the attention of the general public in order to sell their products, fuel their businesses and put food on their tables. And these interests have been putting serious efforts—scientifically, over the course of decades—into retaining that attention. That’s very stiff competition for the research community to tangle with. The research community could learn a lot from professional marketers about how to effectively engage an audience and build up a discussion with them. And the indicators community could learn a lot from professional marketers about how to effectively assess the quality of engagement with an audience.
Altmetrics does not even consider the timing of tweets, even though we know that timing is a crucial dimension in the prioritization algorithms that determine which content actually gets seen by users and what gets buried in the depths of newsfeed oblivion. Altmetrics doesn’t explicitly track responses, questions and answers that reflect a deepening online conversation. These back-and-forth exchanges are key moments for knowledge to be shared, from researchers to the broader community and vice versa. At the very least (taking a very conservative view of what counts as knowledge, and avoiding for a moment the question of co-creation), it seems unproblematic to conceive of researchers learning about the needs of their users from this exchange.
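To make the timing point concrete, here is one minimal, hypothetical way it could be taken into account: measuring how concentrated a paper’s mentions are in time. The function name `burst_share`, the 24-hour window, and the interpretation are all my own illustrative assumptions, not an existing altmetrics indicator.

```python
from datetime import datetime, timedelta

def burst_share(timestamps, window=timedelta(hours=24)):
    """Share of mentions falling within `window` of the first mention.

    A value near 1.0 suggests a single coordinated burst (e.g. a
    publisher campaign on publication day); lower values suggest a
    longer-lived conversation. The 24-hour default is an arbitrary
    illustrative choice.
    """
    if not timestamps:
        return 0.0
    start = min(timestamps)
    inside = sum(1 for t in timestamps if t - start <= window)
    return inside / len(timestamps)
```

Even a crude summary like this would let an indicator distinguish a one-day promotional spike from sustained discussion, which a raw mention count cannot do.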
Finally, it is important for us to acknowledge that bibliographic citations and social media mentions are two different types of currency. We should be reflecting on how each of those currencies is actually relevant to research. For a long time, the meaning of citations was assumed to be an unproblematic indication of intellectual debt, an assumption that we should now revisit. The meaning of tweets has been shown to be highly problematic. If tweets had ended up being all from individual users plugging research papers into ongoing societal discussions, we probably would have concluded that this represents “broader impact”—but we know that this is not the case.
How might altmetrics shape research?
Perhaps moving to a co-creation model of knowledge would help to solve our impact and our marketing problems with a single stroke, and our measurement problems as well. At its most ambitious, altmetrics can thus not only offer us a new plan for measurement, but has embedded within it a new plan for impact and for research itself.
Note: All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.