Program evaluation basics
February 14, 2018

The ScienceMetrics blog has so far focused on our scientometric, data mining and science policy activities, but we also have a long history of conducting evaluations of S&T-related programs and initiatives. In my opinion, the most fun to be had on the job is when we team up to combine our quantitative and qualitative analysis skills on a project. To kick off a series of posts from the evaluation side of Science-Metrix, in this post I’ll present an introductory Q&A on program evaluation, and next week I’ll discuss how to maximize the use of evaluation findings. Read on for the what, why, when and how of program evaluation.

I’ll begin by noting that all answers presented here are rooted in the OECD Glossary of Key Terms in Evaluation. More comprehensive explanations, especially for a Canadian audience, can also be found on the Government of Canada’s website for the Centre of Excellence for Evaluation.

So, what is program evaluation?

Evaluation is a process of collecting data to assess the value of a funded initiative. According to the OECD, evaluation is the “systematic and objective assessment of an on-going or completed project, programme or policy, its design, implementation and results.” You can evaluate a project, activity, program or policy, and this intervention can be either publicly or privately funded.

Why do you do it?

A well-planned evaluation should provide credible information to funders, program managers and program beneficiaries alike, helping them make decisions about the intervention. Evaluation is useful to

  • identify areas for improvement,
  • examine or benchmark performance against standards,
  • measure expected against actual results, and
  • support accountability through public reporting of results.

When do you conduct an evaluation?

That depends on your goal. An ex-post evaluation looks at a completed intervention, often with the goal of identifying factors of success and failure, assessing the sustainability of results and impacts, and drawing conclusions that may inform other interventions.

An ex-ante evaluation is performed while an intervention is being developed, often to define intervention objectives and to check the potential of a project or program to deliver proposed benefits.

A formative evaluation is often conducted during the implementation phase of an ongoing intervention, usually with the intent to improve performance.


How do you conduct an evaluation?

There are many approaches to conducting an evaluation, and a good list can be found here. At Science-Metrix we often break down the process into three phases as shown in the figure below.

To start, the evaluation scope must be defined. In the most basic terms, this means setting out the questions to be answered by the evaluation, the time period the evaluation will cover and, sometimes, the proposed data collection methods. Next, a theory is developed to describe a causal chain linking the activities and outputs of the intervention to its intended outcomes. This sequence is underpinned by assumptions, risks and context that together explain how the intended results are expected to occur.

To visualize this, a logic model is created to depict the intervention’s main activities, outputs and outcomes, and the logical relationships between them. The last step in the design is to explicitly link the evaluation questions to methods or lines of evidence that will enable you to answer those questions. A data collection matrix is a tool used for this purpose, to ensure the data you collect is not superfluous but rather addresses specific evaluation questions or indicators.
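To make the matrix concrete, consider a purely hypothetical row (the question, indicator and methods here are illustrative, not drawn from an actual evaluation): the evaluation question "To what extent did the program strengthen research collaboration?" might be paired with an indicator such as the share of funded projects involving more than one institution, and with two lines of evidence to inform it, a bibliometric analysis of co-authorship and interviews with grant recipients.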

Data collection occurs during the implementation phase. Often a mix of quantitative and qualitative methods is used, such as interviews, surveys, case studies, focus groups, literature searches, cost-benefit analyses, counterfactual analyses and bibliometric analyses.

Data is analyzed separately for each line of evidence and is then triangulated, leading to the key findings. Findings are validated (sometimes by going back to the source material) and should ideally be supported by more than one line of evidence. Building on the key findings, a final set of recommendations or lessons is produced for the organization being evaluated.
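As a hypothetical illustration of triangulation, a survey result suggesting that grantees broadened their professional networks would only be treated as a key finding if another line of evidence, such as interviews or co-authorship data, pointed in the same direction; where the lines of evidence conflict, the evaluator goes back to the source material before drawing conclusions.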

And then what?

An evaluator’s work often ends once the final evaluation report is handed over to the client. But the actual use of the evaluation results, and the role of the evaluator in that process, is a topic of contention within the literature. Stay tuned for our next blog post where we’ll take a closer look at what it takes for evaluation results to find their way off the page and into practice.

 

Note: All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.


About the author

Andréa Ventimiglia

Andréa Ventimiglia has been with Science-Metrix as an Evaluator for seven years and counting. With around 50 evaluations, performance measurement analyses and bibliometric studies under her belt, she is happiest working on projects where her analytical dexterity acts as a complement to the greater Science-Metrix skill set. Andréa took a winding path towards evaluation, with a Master's in Journalism from Carleton University and a B.Sc. in Biology from the University of Waterloo.

Related items


Contribution analysis: How did the program make a difference – or did it?

Contribution analysis is an evaluation method that...


Maximizing the use of evaluation findings

In a 2006 survey of 1,140 American Evaluation Asso...

