Bibliometrics
Positional analysis: from boring tables to sweet visuals
March 28, 2018

At Science-Metrix we are obviously very focused on data—sweet, sweet data! We are also very aware that bibliometric data, or pages and pages of analysis, can be overwhelming, and that more intuitive data presentation helps our clients better understand their study results and, in turn, act on the findings we deliver. One graphic presentation we find particularly helpful is the positional analysis chart. Positional analysis is a way to visually depict two, three or even more indicators for a given set of entities, instead of using a standard (and boring) table. Here’s how it works.
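To make the idea concrete, here is a minimal sketch, in Python with matplotlib (not the charting tooling discussed in the full post), of how three indicators for a set of entities might be shown at once: one on each axis and a third encoded as bubble size. The entity names, indicator names and values below are invented purely for illustration.

```python
# Minimal sketch of a positional analysis chart: three indicators at once.
# The entities, indicator names and values are invented for illustration only.
import matplotlib.pyplot as plt

entities = ["Canada", "France", "Germany", "Japan"]
growth_index = [1.2, 0.9, 1.1, 0.8]            # indicator 1: x-axis (e.g., growth index)
impact_score = [1.4, 1.1, 1.3, 1.0]            # indicator 2: y-axis (e.g., citation impact)
output_volume = [55000, 70000, 100000, 80000]  # indicator 3: bubble size (e.g., publication count)

fig, ax = plt.subplots()
ax.scatter(growth_index, impact_score,
           s=[v / 500 for v in output_volume],  # scale raw counts down to point sizes
           alpha=0.6)
for name, x, y in zip(entities, growth_index, impact_score):
    ax.annotate(name, (x, y))

# Reference lines (e.g., world average = 1.0) split the chart into quadrants,
# so each entity's "position" can be read at a glance.
ax.axhline(1.0, linestyle="--", linewidth=1)
ax.axvline(1.0, linestyle="--", linewidth=1)
ax.set_xlabel("Growth index")
ax.set_ylabel("Citation impact")
plt.show()
```

Reading three indicators from one picture like this is the basic appeal of the technique over a table: relative positions, clusters and outliers become visible immediately.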

(more…)

Science policy
Impact assessment stories: decisions, decisions
March 21, 2018

It appears that the research & innovation policy community is not the only one struggling to demonstrate societal benefit. In recent months, the federal Liberal government has unveiled two online initiatives to increase government transparency by sharing information about government activities and outcomes. The challenges these two platforms face amply demonstrate the difficulty of impact assessment. They are the same challenges facing the science policy community, and this post explains how the shortcomings of these online platforms might help to elucidate some potential solutions. (more…)

Bibliometrics
Mapping science: a guide to our Twitter series
March 14, 2018

Over the course of 2018, we’ll be publishing a series of maps via the Science-Metrix Twitter feed to visually illustrate some dynamics of the global science ecosystem. This blog post is the anchor for that series, explaining why we think these maps are important and what exactly they represent.

(more…)

Science policy
Budget 2018: The Evidence Budget
March 8, 2018

In our post last week on the 2018–19 Canadian federal budget, we looked at how the new spending on fundamental research addresses the calls for support from the Naylor report. But there were many more science stories in the budget as well. Beyond the dollar figures, there are important—if tacit—signals in the budget document about another key item from the science file in Canada: using evidence to build policy. Today’s post attempts to decipher those tacit signals. (more…)

Evaluation
Contribution analysis: How did the program make a difference – or did it?
March 6, 2018

Contribution analysis is an evaluation method developed by John Mayne in the early 2000s to enable evaluators to produce rigorous impact analyses for programs that cannot be evaluated using an experimental or quasi-experimental design. While the Science-Metrix Evaluation team has been conducting contribution analyses informally for a while now, the workshop given by Thomas Delahais (of Quadrant Conseil) at the SQEP annual conference in October 2017 inspired us to use this technique in a more rigorous, systematic and comprehensive way. This method struck us as perfectly suited to the kinds of evaluations we, and probably many of you, carry out—that is, evaluations of complex, multifaceted programs for which counterfactuals cannot easily be used. In this post, I’ll give a brief synopsis of the method and provide suggestions for further reading. (more…)

Science policy
Budget 2018: the fundamental question of research funding
February 28, 2018

Science has been quite prominent on the Canadian political radar in recent years, and even became a regular talking point during the last federal election in 2015. During that campaign, the current Liberal government made four headline promises, and with the release yesterday of the 2018–19 federal budget, one of the key puzzle pieces fell into place: increased funding for fundamental research. In today’s post, we’ll assess the budget’s meaning for science in Canada.

(more…)

Evaluation
Maximizing the use of evaluation findings
February 21, 2018

In a 2006 survey of 1,140 American Evaluation Association members, 68% of respondents reported that their evaluation results were not used. That figure points to a serious need for evaluation findings to make it off the bookshelf and into the hands of their intended audiences. This week’s blog post looks at how we, as evaluators, can help maximize the use of the evaluations we produce. (more…)

Evaluation
Program evaluation basics
February 14, 2018

The ScienceMetrics blog has so far focused on our scientometric, data mining and science policy activities, but we also have a long history of conducting evaluations of S&T-related programs and initiatives. In my opinion, the most fun to be had on the job is when we team up to combine our quantitative and qualitative analysis skills on a project. To kick off a series of posts from the evaluation side of Science-Metrix, in this post I’ll present an introductory Q&A on program evaluation, and next week I’ll discuss how to maximize the use of evaluation findings. Read on for the what, why, when and how of program evaluation. (more…)

Science policy
Science advice in Canada: reborn again?
February 7, 2018

Science advice is apparently having a “moment” right now in Canada. Quebec has had a chief scientist since 2011, but both the federal government and the Ontario provincial government named chief science advisors in the second half of 2017. For the first time, at the end of January 2018, the three chief scientists appeared publicly together, on a panel organized by the Institute for Science, Society and Policy (ISSP) at the University of Ottawa. This attracted no small number of science policy nerds. While the event was billed as a new renaissance for science advice, that does raise the question of when exactly its prior incarnations occurred. In today’s post, we’ll present a summary of the discussions, along with some critical reflections. (more…)

Bibliometrics
Bibliometric fun facts
January 24, 2018

Bibliometrics is a complex and nuanced field, and at times, we’ll admit, it’s even a little arcane. At Science-Metrix we take great pride in explaining indicators to clients: what they mean, how they are calculated, their strengths and weaknesses, and what they can and cannot do. As with so many things, examples provide a good entry point. Inspired by this, today I begin a new occasional series that heads back to basics to explain some common indicators and how we use them in house. In this first post, I’ll explore some “fun facts” that can be derived from a bibliometric database.
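As a taste of what deriving such facts can look like in practice, here is a minimal sketch in Python with pandas, using an invented toy publication table rather than an actual bibliometric database, of the kind of simple aggregation these fun facts typically come from: papers per year, distinct journals per year, and average team size.

```python
# Toy sketch of pulling simple "fun facts" out of a publication table.
# The records below are invented; a real bibliometric database holds millions of rows.
import pandas as pd

publications = pd.DataFrame([
    {"year": 2015, "journal": "Scientometrics", "n_authors": 3},
    {"year": 2015, "journal": "PLOS ONE",       "n_authors": 12},
    {"year": 2016, "journal": "Nature",         "n_authors": 58},
    {"year": 2016, "journal": "Scientometrics", "n_authors": 2},
    {"year": 2016, "journal": "PLOS ONE",       "n_authors": 4},
])

# Papers per year, distinct journals per year, and average team size per year.
facts = publications.groupby("year").agg(
    papers=("journal", "size"),
    journals=("journal", "nunique"),
    mean_team_size=("n_authors", "mean"),
)
print(facts)
```

The real indicators discussed in the series are of course more involved than this, but most of them ultimately rest on counting and averaging operations of exactly this kind.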

(more…)