Team diversity widget: how do you measure up?
December 6, 2017

Collaboration and disciplinary diversity are hot topics in the science policy and research communities. At Science-Metrix, we’ve been working on multi-/inter-/trans-disciplinarity issues for a while now, and we figured that some of you might find it useful to have a tool you can use to take a quick measurement of the multidisciplinarity of your team. As part of our 15th anniversary celebrations, we’ve created a free widget that we’re sharing for just such a purpose. Any team can be measured—your research team in academia, a product team in your company, or even your Wednesday-night hockey team. In this post, we’ll explain what the disciplinarity widget does, how to interpret the measurements, and how you can use it yourself. We hope you enjoy the widget—a little birthday gift from us to you!

How is disciplinary diversity measured?

For several years, Science-Metrix has maintained a classification of research into a three-level taxonomy, arranging research into domains, fields and subfields. We have also developed several approaches to assess the conceptual proximity of these subfields to each other, based on how often material from these subfields is used in combination. With the taxonomy in hand, and a proximity matrix relating the subfields to each other, we can calculate disciplinary mix using a three-dimensional approach. The first dimension is simply the number of different subfields integrated, the second dimension is the balance between the subfields being represented, and the third dimension is the conceptual distance between them.

For example, a team that consists of five biologists and one chemist is considered less diverse than a team of three biologists and three chemists, because the latter team is more balanced between the subfields involved. Similarly, a team with five biologists and one chemist is considered less diverse than a team with five biologists and one performing artist, because biology and chemistry are conceptually more proximate to each other than are biology and the performing arts.
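The post doesn’t give Science-Metrix’s exact formula, but a common way to combine these three dimensions (variety, balance and distance) into a single 0–1 score is the Rao–Stirling diversity index: the sum over all pairs of subfields of p_i × p_j × d_ij, where p_i is subfield i’s share of the team and d_ij is the conceptual distance between subfields i and j. Here’s a minimal sketch of that index applied to the teams above; the distance values are illustrative assumptions, not drawn from Science-Metrix’s actual proximity matrix:

```python
def rao_stirling(counts, distance):
    """Rao-Stirling diversity: sum of p_i * p_j * d_ij over all pairs
    of subfields, where p_i is subfield i's share of the team and
    d_ij is the conceptual distance between subfields i and j."""
    total = sum(counts)
    p = [c / total for c in counts]
    return sum(p[i] * p[j] * distance[i][j]
               for i in range(len(p))
               for j in range(len(p))
               if i != j)

# Illustrative distances (assumed for this sketch): biology and
# chemistry are conceptually close; biology and the performing
# arts are conceptually distant.
d_bio_chem = [[0.0, 0.3], [0.3, 0.0]]
d_bio_arts = [[0.0, 0.9], [0.9, 0.0]]

five_one    = rao_stirling([5, 1], d_bio_chem)  # unbalanced, close fields
three_three = rao_stirling([3, 3], d_bio_chem)  # balanced, close fields
five_artist = rao_stirling([5, 1], d_bio_arts)  # unbalanced, distant fields

assert five_one < three_three  # better balance raises the score
assert five_one < five_artist  # greater distance raises the score
```

Under these assumed distances, the index reproduces both comparisons from the example: the balanced biology–chemistry team scores higher than the unbalanced one, and swapping the chemist for a performing artist also raises the score.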

What do the results mean?

In working with this measure of disciplinary diversity, the pattern that has emerged is that the first dimension to increase is the number of subfields represented. That is to say, starting from a completely disciplinary team, the first increase in score comes from involving some representation from additional areas. The next increase comes from a growing balance between the disciplines represented, as illustrated by the example of the biologists and chemists above. Finally, further increases come from a growing intellectual distance between the subfields being integrated, as illustrated by the example of the biologists and the performing artist above.

Scores on this indicator range from 0 to 1, 0 being completely monodisciplinary and 1 being maximally diverse. The ranges of these scores can be interpreted as follows:

  • 0 means totally disciplinary, everyone from the same background.
  • 0.1–0.2 is a low score, meaning that there is one “home” discipline with a few “secondary” areas also included.
  • 0.3–0.5 is a mid-range score, meaning that there is a balance amongst the disciplines represented but that they’re all still quite clustered in one intellectual area.
  • 0.6 and above is a high score, meaning that several different disciplines are involved, they’re relatively balanced (rather than a “home and guest” model), and they’re drawn from a broad range of intellectual areas.

How did Science-Metrix measure up?

In line with the long and colourful tradition of researchers experimenting on themselves, the first guinea pig for this widget was none other than the Science-Metrix team. We inputted information for each staff member, including the bibliometrics and evaluation services teams as well as the shared staff that support us all, about 20 people all told. Our score was 0.8, a pleasantly surprising finding. Here’s a list of all the subfields represented within our team:

  • Biochemistry & Molecular Biology
  • Classics
  • Electrical & Electronic Engineering
  • Evolutionary Biology
  • Experimental Psychology
  • Fluids & Plasmas
  • Languages & Linguistics
  • Literary Studies
  • Meteorology & Atmospheric Sciences
  • Nutrition & Dietetics
  • Optics
  • Philosophy
  • Public Health
  • Social Sciences Methods
  • Sociology
  • Strategic, Defence & Security Studies
  • Urban & Regional Planning

How do you use the widget?

Using the widget is (intended to be) very simple. To generate a measurement, all we need is for each of your team members to be tagged with the relevant subfield that they represent. To collect this information, the widget asks you to name your team and identify the sector in which you’re working, and it then presents you with a menu to navigate through our three-level taxonomy and identify your subfield. Once you’ve inputted your own information, you’re given a link for your teammates’ information as well: you can send it to your teammates, or click through it yourself to enter information on their behalf.

How do you know what subfield to associate yourself with? At Science-Metrix, we generally used the highest level of education (or the degree most recently completed, as a tiebreaker). However, in some cases, people had ventured into completely new intellectual areas since finishing their studies, so it was perhaps more relevant for them to identify their new area of self-appointed expertise than to remain tied to credentials that no longer fit their areas of focus. For now, each person can choose only one subfield to represent themselves in the measurement. If you find this particularly constraining, let us know, as allowing multiple subfields per person is a feature we can consider building if this challenge is widespread.

Feel free to share your results on social media, with your colleagues, and so forth. Healthy competition is of course always encouraged! Also feel free to send us questions or comments through our social media accounts, as we’re hoping to refine this widget and add any new features that would make it a tool that provides enduring value for the community.

Collecting widget data

Also, please note that we’ll be collecting anonymized data through the widget. Other than the urgent social imperative to always create more data (even—and sometimes especially—in cases where the value of that data is unclear), we’re collecting these data for a few reasons. First, we’re generally just very curious to see how diverse the teams are out there, what kinds of disciplinary combinations might crop up, and so forth. Second, we’re hoping that—with enough users inputting information for broader patterns to emerge—we’ll have some neat findings that we can share on social media; these findings will hopefully increase value for users by adding nuance to the understanding that they get about their own team by using this tool.

What data will we collect, and how do we intend to analyze them? The main data are the disciplinary diversity scores, which we’ll be able to slice by sector (inputted manually) and by geographic area (collected via the IP address stamped on the submission). We’ve left a space for you to input your email if you’d like to receive updates about new features built into the widget, and findings that we’ve uncovered looking at patterns in the inputted data. We designed the widget to log your email independently from your team’s data.

Click here to access the disciplinarity widget.

 

Note: All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.


About the author

Brooke Struck

Brooke Struck works as a policy analyst at Science-Metrix in Montreal, where he puts his background in philosophy of science to good use in helping policy types and technical types to understand each other a little better every day. He also takes gleeful pleasure in unearthing our shared but buried assumptions, and generally gadfly-ing everyone in his proximity. He is interested in policy for science as well as science for policy (i.e., evidence-based decision-making), and is progressively integrating himself into the development of new bibliometric indicators at Science-Metrix to address emerging policy priorities. Before working at Science-Metrix, Brooke worked for the Canadian Federal Government. He holds a PhD in philosophy from the University of Guelph and a BA with honours in philosophy from McGill University.

