Maximizing the use of evaluation findings
February 21, 2018

In a 2006 survey of 1,140 American Evaluation Association members, 68% reported that their evaluation results were not used. This suggests a serious need for evaluation results to make it off the bookshelf and into the hands of intended audiences. This week’s blog post looks at how we, as evaluators, can help maximize the use of the evaluations we produce.

Noted evaluation scholar Michael Quinn Patton suggests that evaluators should not only design evaluations with careful consideration of end use but also be involved in the real-world application of evaluation findings. In his view, this includes a follow-up plan (and budget) developed with the primary users of the report for the proactive pursuit of utilization. While not every project includes the resources to go that far, certain actions taken during the evaluation can facilitate use once the evaluation contract has ended.

First, though, let’s recognize that “use” can encompass many types of actions; according to the literature, there are four broad categories of evaluation utilization.

  1. Instrumental use refers to situations where an evaluation directly affects decision-making and influences changes in the program under review.
  2. Conceptual use occurs when knowledge or insights from an evaluation influence the way in which stakeholders think about a program, without any immediate new decisions being made. Over time and given changes to the contextual and political circumstances surrounding the program, conceptual impacts may lead to concrete changes.
  3. Symbolic use involves the justification of decisions already made about a program. For example, the evaluation is commissioned after decision-making and provides a mechanism for retrospectively justifying decisions made on other grounds.
  4. Process use concerns the impact of the evaluation on those who participated in it. Being involved in an evaluation (directly or indirectly) may lead to changes in the thoughts and behaviours of those individuals, which then results in cultural or organizational change.

While we’re aiming for instrumental or conceptual use, arguably any uptake of evaluation findings is better than none. Some easy-to-execute steps before, during and after the evaluation can help make that happen.

Before: Most sources agree that the primary audience for the evaluation and their intentions for use must be considered in the evaluation design. This will help guide both the format of the final product (e.g., summary PowerPoint for executives vs. detailed report for program managers) as well as its dissemination. Furthermore, consider your evaluand’s prior experience with evaluation. If they are new to the process, take the time to ensure they understand the basics of evaluation, the data collection methods you propose and how the evaluation is intended to benefit the program stakeholders.

During: Engaging users throughout the evaluation is also valuable: gather their feedback on the interpretation of the data, have them review interim findings, and seek their input when developing potential recommendations. These actions strengthen buy-in from the evaluand, who may then be more likely to act on the recommendations.

After: Once your evaluation results are finalized, consider how the information can be presented and what kind of reporting format will best suit your user and their intended purposes. If your timeline and budget allow, strategize with intended users about creative ways to report findings that may enhance their utility. For example, the main report could be complemented by a one-page handout focusing only on the user’s priority themes, to help disseminate these key findings. Here are some alternatives for evaluation reporting.

Sometimes evaluation findings can also be helpful to other evaluators or project staff working in the same field. With the consent of your users, take some time once your evaluation ends to remodel some of the findings into articles or stories to share in journals, conference presentations, blogs and the like.

A last item that we don’t often think about as evaluators, but that could make the difference between neglect and use, is being prepared to help users navigate findings that don’t jibe with their perspectives. For example, a program may be reluctant to report and act on “negative” evaluation findings. In these cases, draw on your data to clarify how you reached your conclusions, frame the results as lessons learned, and highlight what did work and why, so that similar errors can be avoided in the future.


Note: All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.


About the author

Andréa Ventimiglia

Andréa Ventimiglia has been with Science-Metrix as an Evaluator for seven years and counting. With around 50 evaluations, performance measurement analyses and bibliometric studies under her belt, she is happiest working on projects where her analytical dexterity acts as a complement to the greater Science-Metrix skill set. Andréa took a winding path towards evaluation, with a Master’s in Journalism from Carleton University and a B.Sc. in Biology from the University of Waterloo.

Related items


Contribution analysis: How did the program make a difference – or did it?

Contribution analysis is an evaluation method that...


Program evaluation basics

The ScienceMetrics blog has so far focused on our ...


There are 2 comments

  • Hi Andréa:

    My name is Jaqueline Meza. I work as a researcher on government and public issues at an NGO. I would like to ask you two things:

    1) Do you have any research or articles related to measuring “conceptual use”?
    2) Could you share with me a bibliography on how to design evaluations?

    Best regards,