Bibliometrics
Rationalizing the extremes: introducing the citation distribution index
May 10, 2018

The distribution of citations among the scientific literature is in many respects similar to the distribution of wealth in the Western World: a handful of articles receive most of the citations, while most of the articles receive few or no citations at all. The distribution of citations is indeed highly skewed and not well represented by its average (the so-called “Bill Gates effect” in the wealth distribution analogy). In fact, when the average is computed for entities that publish a small number of articles, a few highly cited articles could be enough to propel these entities into research superstardom. In this post, we’ll look at an alternative metric Science-Metrix has developed to address the limitations of averages, as well as some of the other metrics we explored to get to that point.
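To see why the average can mislead for a small, highly skewed set of citation counts, here is a minimal sketch with hypothetical numbers (a toy illustration only; the citation distribution index itself is described in the full post):

```python
# Toy illustration of how one highly cited paper can dominate the average
# for an entity with a small publication count (hypothetical numbers).
import statistics

# Citation counts for a small department's ten papers: nine modestly cited,
# one "superstar" paper.
citations = [0, 1, 1, 2, 2, 3, 3, 4, 5, 250]

mean = statistics.mean(citations)      # 27.1 -- pulled up by the single outlier
median = statistics.median(citations)  # 2.5  -- closer to the typical paper

print(f"mean citations:   {mean:.1f}")
print(f"median citations: {median:.1f}")
```

One outlier is enough to make the average roughly ten times larger than what a typical paper in this set actually receives.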

Bibliometrics, Open access
1findr: discovery for the world of research
May 2, 2018

As of last week, 1science is offering public access to its 1findr service. 1findr is a discovery and analytics platform for scholarly research, indexing an exceptionally broad range of peer-reviewed journals. But just how broad is its coverage, and how does 1findr compare to alternative systems? In this post, we’ll measure 1findr against the (also quite new) Dimensions platform from Digital Science. These two platforms represent new approaches to bibliographic data: 1findr is fed by harvesters that automatically collect, parse, complete and validate metadata from online sources, whereas Dimensions aggregates and cross-links data from a variety of sources accessed through institutional partnerships.


Bibliometrics
Positional analysis: from boring tables to sweet visuals
March 28, 2018

At Science-Metrix we are obviously very focused on data—sweet, sweet data! We are also well aware that bibliometric data, or pages and pages of analysis, can be overwhelming, and that more intuitive data presentation helps our clients better understand their study results and act on the findings we return to them. One graphic presentation we find particularly helpful is the positional analysis chart. Positional analysis is a way to visually depict two, three or even more indicators for a given set of entities, instead of using a standard (and boring) table. Here’s how it works.
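By way of illustration, here is a minimal sketch of the general idea in Python with matplotlib: two indicators on the axes, a third encoded as bubble size. The entities, indicator names and values are hypothetical, and the quadrant lines and styling are assumptions for illustration rather than the exact charts discussed in the post.

```python
# Minimal sketch of a positional analysis chart: two indicators on the axes,
# a third encoded as bubble size, for a handful of entities.
# Entity names and indicator values here are hypothetical.
import matplotlib.pyplot as plt

entities = ["Entity A", "Entity B", "Entity C", "Entity D"]
specialization = [0.8, 1.2, 1.5, 0.6]   # x-axis: relative specialization index
impact = [1.3, 0.9, 1.1, 1.6]           # y-axis: relative citation impact
output = [120, 450, 80, 300]            # bubble size: number of publications

fig, ax = plt.subplots()
ax.scatter(specialization, impact, s=output, alpha=0.5)
for name, x, y in zip(entities, specialization, impact):
    ax.annotate(name, (x, y))

# Reference lines at 1.0 split the chart into quadrants (e.g., above/below a world-level benchmark).
ax.axhline(1.0, linestyle="--", linewidth=0.8)
ax.axvline(1.0, linestyle="--", linewidth=0.8)
ax.set_xlabel("Specialization index")
ax.set_ylabel("Citation impact (relative to world)")
plt.show()
```

A single chart like this replaces several columns of a table and makes it immediately clear which entities sit in which quadrant.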


Science policy
Impact assessment stories: decisions, decisions
March 21, 2018

It appears that the research & innovation policy community is not the only one struggling with demonstrations of societal benefit. In recent months, the federal Liberal government unveiled two online initiatives to increase government transparency by sharing information about government activities and outcomes. The challenges these two platforms face amply demonstrate the difficulty of impact assessment, and they are the same challenges facing the science policy community. This post explains how the shortcomings of these online platforms might help to elucidate some potential solutions.

Bibliometrics
Mapping science: a guide to our Twitter series
March 14, 2018

Over the course of 2018, we’ll be publishing a series of maps via the Science-Metrix Twitter feed to visually illustrate some dynamics of the global science ecosystem. This blog post is the anchor for that series, explaining why we think these maps are important and what exactly they represent.


Science policy
Budget 2018: The Evidence Budget
March 8, 2018

In our post last week on the 2018–19 Canadian federal budget, we looked at how the new spending on fundamental research addresses the calls for support from the Naylor report. But there were many more science stories in the budget as well. Beyond the dollar figures, there are important—if tacit—signals in the budget document about another key item from the science file in Canada: using evidence to build policy. Today’s post attempts to decipher those tacit signals.

Evaluation
Contribution analysis: How did the program make a difference – or did it?
March 6, 2018

Contribution analysis is an evaluation method developed by John Mayne in the early 2000s to enable evaluators to produce rigorous impact analyses for programs that cannot be evaluated using an experimental or quasi-experimental design. While the Science-Metrix Evaluation team has been conducting contribution analyses informally for a while now, the workshop conducted by Thomas Delahais (of Quadrant Conseil) at the SQEP annual conference in October 2017 inspired us to apply this technique in a more rigorous, systematic and comprehensive way. The method struck us as perfectly suited to the kinds of evaluations we, and probably many of you, carry out—that is, evaluations of complex, multifaceted programs for which counterfactuals cannot easily be used. In this post, I’ll give a brief synopsis of the method and provide suggestions for further reading.

Science policy
Budget 2018: the fundamental question of research funding
February 28, 2018

Science has been quite prominent on the Canadian political radar in recent years, and even became a regular talking point during the last federal election in 2015. During that campaign, the current Liberal government made four headline promises, and with the release yesterday of the 2018–19 federal budget, one of the key puzzle pieces fell into place: increased funding for fundamental research. In today’s post, we’ll assess the budget’s meaning for science in Canada.


Evaluation
Maximizing the use of evaluation findings
February 21, 2018

In a 2006 survey of 1,140 American Evaluation Association members, 68% reported that their evaluation results were not used. This suggests a serious need for evaluation results to make it off the bookshelf and into the hands of intended audiences. This week’s blog post looks at how we, as evaluators, can help maximize the use of the evaluations we produce.

Evaluation
Program evaluation basics
February 14, 2018

The ScienceMetrics blog has so far focused on our scientometric, data mining and science policy activities, but we also have a long history of conducting evaluations of S&T-related programs and initiatives. In my opinion, the most fun to be had on the job is when we team up to combine our quantitative and qualitative analysis skills on a project. To kick off a series of posts from the evaluation side of Science-Metrix, in this post I’ll present an introductory Q&A on program evaluation, and next week I’ll discuss how to maximize the use of evaluation findings. Read on for the what, why, when and how of program evaluation.