Contribution analysis: How did the program make a difference – or did it?
March 6, 2018

Contribution analysis is an evaluation method that was developed by John Mayne in the early 2000s to enable evaluators to produce rigorous impact analyses for programs that cannot be evaluated using an experimental or quasi-experimental design. While the Science-Metrix Evaluation team has been conducting contribution analyses informally for a while now, the workshop given by Thomas Delahais (of Quadrant Conseil) at the SQEP annual conference in October 2017 inspired us to use this technique in a more rigorous, systematic and comprehensive way. The method struck us as perfectly suited to the kinds of evaluations we, and probably many of you, carry out—that is, evaluations of complex, multifaceted programs for which counterfactuals cannot easily be used. In this post, I’ll give a brief synopsis of the method and provide suggestions for further reading.

The main purpose of the contribution analysis method is to understand how and why a program has contributed—or not—to the observed outcomes. It requires an in-depth understanding of the theory of change, or logic, of the program. In other words, it requires an understanding of the successive causal relationships between the program inputs, activities, outputs, and immediate, intermediate and long-term outcomes—often presented graphically in a logic model.

Questions, theories and logic models

The whole contribution analysis process is guided by a clear causal question, which the evaluator formulates while building the theory of change. For instance, an evaluator assessing a research funding program might ask, “How has the funding provided by the program contributed to increasing the scientific impact of the funded researchers?” Building the theory of change also involves identifying the underlying assumptions—that is, the conditions under which the logic of the program holds. For instance, a critical assumption could be that researchers are not overburdened by program-related administrative duties, so that they can spend most of their time conducting their research. The logic of the program could also rest on theoretical assumptions that have not been validated in the literature, such as a positive correlation between interdisciplinarity and innovative research. External factors and alternative explanations for the observed changes, such as similar competing programs, should also be identified at this stage, as these could reduce or even replace the contribution of the evaluated program to the outcomes.
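
For readers who like to keep the theory of change in a machine-readable form alongside the narrative and the logic model, here is a minimal sketch of how the causal chain, assumptions and alternative explanations could be captured as plain data. The causal question is the one quoted above; every other detail (the causal links, assumption wordings and competing influences) is invented for illustration and does not describe any real evaluation.

```python
# Minimal, hypothetical sketch: a theory of change captured as plain data,
# so that each causal link and assumption can later be matched to evidence.
# All links, assumptions and influences below are invented examples.

theory_of_change = {
    "causal_question": (
        "How has the funding provided by the program contributed to "
        "increasing the scientific impact of the funded researchers?"
    ),
    # Successive causal links: inputs -> activities -> outputs -> outcomes
    "causal_chain": [
        ("funding awarded", "researchers hire staff and acquire equipment"),
        ("researchers hire staff and acquire equipment", "more research is conducted"),
        ("more research is conducted", "more peer-reviewed publications"),
        ("more peer-reviewed publications", "greater scientific impact"),
    ],
    # Conditions under which the chain is expected to hold
    "assumptions": [
        "administrative burden leaves researchers time to do research",
        "interdisciplinarity is positively associated with innovative research",
    ],
    # Influences that could reduce or replace the program's contribution
    "alternative_explanations": [
        "similar competing funding programs",
        "institutional hiring trends unrelated to the program",
    ],
}

# Each causal link becomes a line of inquiry for the data collection stage:
for cause, effect in theory_of_change["causal_chain"]:
    print(f"Evidence needed: does '{cause}' lead to '{effect}'?")
```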

The evidence

The contribution analysis process will then involve collecting qualitative and quantitative evidence to validate or invalidate the various hypotheses or potential contributions identified in the theory of change. A variety of data collection tools can be used, including surveys, interviews and document reviews. Case studies are often a very powerful method to gain the kind of in-depth understanding of how the program works “in the real world” that is required for a contribution analysis. For a comprehensive and rigorous analysis, data should be collected to demonstrate the linkages between the program and the outcomes, and to test and qualify other identified influences, including alternative explanations, contextual factors and critical assumptions. For instance, in our practice, we have often sought descriptive information about similar programs and the extent to which researchers have applied for and/or obtained funding from these other programs. To do so, we have used survey and administrative data to first identify which programs were most likely competitors to the program being evaluated, and then conducted case studies or interviews to get more details regarding the relative importance of their contribution to the outcomes. Most often, this analysis has led us to identify areas of complementarity and duplication between programs, enabling us to delineate the specific impact of the program being evaluated. In rare instances, we have found that the program was actually duplicating without adding much value to other existing initiatives.
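
As a small illustration of the triangulation step described above, the sketch below uses an entirely fictitious survey extract to flag which other funders the funded researchers report relying on, as a way of shortlisting likely competing programs for follow-up interviews or case studies. The data frame, column names and agency labels are assumptions made for the example, not data from any actual evaluation.

```python
# Hypothetical sketch: using survey data to spot likely competing programs.
# The data frame and its columns are invented for illustration only.
import pandas as pd

survey = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5, 6],
    # Other funding source each respondent reported applying to (None = none reported)
    "other_funder": ["Agency A", "Agency B", "Agency A", None, "Agency A", "Agency C"],
    # Whether the respondent obtained funding from that other source
    "obtained_other_funding": [True, False, True, False, True, False],
})

# Count how often each other funder appears among respondents who obtained
# funding from it; frequently cited funders are candidates for follow-up
# interviews or case studies on complementarity and duplication.
competitors = (
    survey.loc[survey["obtained_other_funding"], "other_funder"]
    .value_counts()
)
print(competitors)
```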

Telling the story

The next step in the contribution analysis is drafting the contribution story, a short narrative identifying all the observed changes and the corresponding contributions of the program and of other significant influences. These contributions should be clearly explained and contextualized, to help the reader grasp the nature and limits of the program’s impact. Ideally, this contribution story should be subject to a critical peer review by independent experts, or at least by colleagues who have not been involved in the evaluation project. The narrative should then be revised and finalized based on the reviewers’ comments, as appropriate.

While the principles of contribution analysis were established 20 years ago, it is only recently that the practicalities of this method have been explored and documented in the literature. Here are a few links to articles that describe in more concrete terms how this method has been or could be used in real evaluations.

Delahais, T., & Toulemonde, J. (2017). Making rigorous causal claims in a real-life context: Has research contributed to sustainable forest management? Evaluation, 23(4), 370–388. doi:10.1177/1356389017733211

Befani, B., & Mayne, J. (2014). Process tracing and contribution analysis: A combined approach to generative causal inference for impact evaluation. IDS Bulletin, 45(6), 17–36. doi:10.1111/1759-5436.12110

Budhwani, S., & McDavid, J. C. (2017). Contribution analysis: Theoretical and practical challenges and prospects for evaluators. Canadian Journal of Program Evaluation, 32(1). doi:10.3138/cjpe.31121


Note: All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.


About the author

Emmanuelle Bailly


There are 2 comments

  • Jean-Pierre Nioche says:

    I am always surprised by the mention of John Mayne as the founder of contribution analysis. He has indeed popularized this approach, in particular by playing on the words “attribution” and “contribution”. But the founding work for this type of evaluation seems to be:
    Chen, Huey-Tsyh (1990). Theory-Driven Evaluations. Newbury Park, CA: Sage Publications.

    • Emmanuelle Bailly says:

      Thank you for your input on this post, Jean-Pierre! Are there specific sections of Chen’s work that lay the foundations of the contribution analysis method? If you could point to them in a follow-up comment, perhaps with a short synthesis of Chen’s main points, I’m sure that interested readers would find it very valuable, as would I.
