Subject-matter experts are often called upon (or are at least in a position) to provide their considered opinion on matters of public policy. While the lab, the classroom, and the conference table are all terra cognita for a researcher, the halls of legislative and executive buildings are usually less familiar haunts. Accordingly, some time can be valuably spent figuring out how science types can interact effectively with government types.
At the end of May 2016, Francois Claveau gave a very interesting presentation at the annual meeting of the Canadian Society for the History and Philosophy of Science (CSHPS). His talk was about the role of experts in society, and especially their decision-making around reporting, as a way of feeding their knowledge and experience into the policymaking process. In order to explore a few different proposals, he presented a simplified, fictional example—basically, a highly idealized model of how to decide on reporting. The example goes as follows: a vaccine is being considered to counter an impending outbreak, and a relevant expert must decide either to report that it works or to report that it doesn’t; in reality, it either does or does not work.
Here are the parameters of the idealized model:
- In reality, the vaccine actually does work or it does not. (Works: yes/no)
- We know the probability that the vaccine works, as well as the risk involved in using it.
- We must report either that the vaccine works or that it does not. (Report: yes/no)
- It is assumed that the report will be effective in prompting action.
The decision can be reduced to a 2×2 square for the yes/no possibilities listed above: four possible outcomes. Claveau went on to use this simplified model to help explain (and criticize) a few existing options to guide decision-making in this context, as well as to motivate the view he himself was putting forward.
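For readers who like to see such models made concrete, the 2×2 decision square can be sketched as a small expected-utility calculation. Everything numeric below is a purely illustrative assumption of mine (the probability, the payoff values, and the assumption that a "yes" report prompts vaccination), not figures from Claveau's talk:

```python
# A minimal sketch of the idealized 2x2 reporting decision.
# All numbers are invented for illustration, not taken from the talk.

P_WORKS = 0.8  # assumed probability that the vaccine works

# Utility of each (report, reality) outcome; a "yes" report is assumed
# to prompt vaccination, a "no" report to prompt inaction.
UTILITY = {
    ("yes", "yes"): 10,   # report works, it works: outbreak averted
    ("yes", "no"): -8,    # report works, it doesn't: wasted effort, new risk
    ("no", "yes"): -10,   # report doesn't, it does: outbreak unchecked
    ("no", "no"): 2,      # report doesn't, it doesn't: correctly cautious
}

def expected_utility(report: str, p_works: float) -> float:
    """Expected utility of a report, averaging over the two possible realities."""
    return (p_works * UTILITY[(report, "yes")]
            + (1 - p_works) * UTILITY[(report, "no")])

best = max(("yes", "no"), key=lambda r: expected_utility(r, P_WORKS))
print(best, expected_utility("yes", P_WORKS), expected_utility("no", P_WORKS))
```

The point of the sketch is only that, once the four cells are assigned values and a single probability is fixed, the reporting decision collapses into a mechanical comparison—exactly the tidiness the de-idealization below pushes back against.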
But what I want to do here is de-idealize the model, to help bring back in some of the nitty-gritty details that make the science–policy interface at once so apparently intractable and so intriguingly textured. Here are the expanded parameters that I have in mind as candidates to re-introduce to our decision-making model:
- In reality, the effectiveness of and the risks associated with the vaccine will probably vary across different segments of the population—age, gender, preceding level of health, etc.
- Epistemically, we more likely than not have a probability distribution over a number of different possible outcomes. There may be one potential outcome that’s considerably more likely than the others, and there may also be some less likely candidate outcomes that still have a non-negligible probability of coming about. Our state of knowledge was probably also assembled by piecing together lines of evidence that are not immediately comparable, each carrying its own level of confidence.
- As far as prompting action goes, a report may incite action or it may not. And even if it does, what action it prompts is a totally open question. (This is a point that I feel academics do not fully appreciate—having the requisite knowledge is not on its own enough to bring about the right outcome. We philosophers may be the guiltiest of all of making this assumption.)
- The choice of communication is never limited to simply reporting or not reporting. We can report formally, we can make an informal remark, we can ask whether anyone has been working on this, we can do any number of things! And we can choose the dance partner with whom we will communicate. And we can choose the tone of our communication, expressing urgency, anger, concern, curiosity, and so forth. (Again, probably an area of weakness for the academic crowd when it comes to interacting with the policy/political sphere.)
- The relationship between knowledge and action in these circumstances is often complicated by the fact that policy questions and research methodologies move in different conceptual spaces. Translating between these spaces to determine what evidence/methods would be relevant to the policy question, and in turn what action is actually justified based on the research findings, is no straightforward task, and one further complicated by the cultural and organizational isolation of the science and policy communities.
- And none of this even begins to touch on our attitude about the position that we stake. Do we feel that our proposed course of action (if indeed we are proposing one) is the right thing to do, or that it’s simply the best option we have? The sharpened skepticism inherent in science probably attunes scientists to the fallibility of our answers—the fact that they are inherently flawed and yet still the best we have to go on. By contrast, political discourse (and I would say policy discourse as well) seems to lean towards branding the selected course of action as the right thing to do, full stop.
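To make the first of these de-idealizations concrete: once effectiveness varies across population segments, there is no single yes/no fact for a report to track. The segments, shares, and effectiveness figures below are entirely invented for illustration:

```python
# De-idealized sketch: effectiveness varies by population segment, so a
# single "works: yes/no" answer loses information. All figures are
# illustrative assumptions, not data from the talk.

# Assumed population share and vaccine effectiveness for each segment.
SEGMENTS = {
    "children": {"share": 0.2, "p_effective": 0.9},
    "adults":   {"share": 0.6, "p_effective": 0.8},
    "elderly":  {"share": 0.2, "p_effective": 0.5},
}

def overall_effectiveness(segments: dict) -> float:
    """Population-weighted effectiveness across all segments."""
    return sum(s["share"] * s["p_effective"] for s in segments.values())

# 0.2*0.9 + 0.6*0.8 + 0.2*0.5 = 0.18 + 0.48 + 0.10 = 0.76
print(f"overall: {overall_effectiveness(SEGMENTS):.2f}")
```

The weighted average of 0.76 looks like a tidy single number, but it papers over the fact that the vaccine performs quite differently for the elderly than for children—precisely the kind of detail a bare yes/no report discards.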
On that last point, I would advocate strongly that we need to acknowledge openly that in policy contexts we’re usually grappling as best we can with a problem that we probably wish we understood better before being forced to make choices about how to handle it. This sort of attitude reinforces the idea that we need to be open to new evidence and new ideas when they arise, rather than becoming too wedded to our beliefs. Furthermore, it reinforces the idea that we should be documenting the rationale for our choices (the evidence, and the considerations involved in weighting competing evidence). This documentation need not show why the path taken was the right one, but only why it was the best one we had available at that particular fork in the road. Additionally, this approach shifts the burden of proof onto future critics (and rightfully so), to demonstrate which option was better at the time rather than simply to criticize the imperfect outcome of working under non-ideal conditions—as though that were somehow a novel insight.
Bringing these remarks back to focus on Claveau’s presentation: according to his proposed model, experts should rely on established professional norms within the research community (where possible) and document the rationale for their decisions (as I’ve noted above). I think that his idealized model was very helpful in elucidating his suggested approach, and I think that the recommendations he’s advocating are sensible, practicable, and compelling. My point here has been to start de-idealizing his model, to see just how far the proposed approach can go, and I think there’s a lot there to unpack. My additions to what he’s started, then, are about maintaining a humble attitude regarding our state of knowledge, and about needing to better understand the mechanisms of communication with the policy/political spheres and the different audiences within those spheres with whom to communicate.
All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.