In January, Sir Peter Gluckman—Chief Science Advisor to the Prime Minister of New Zealand, and global point man for science advice to government—gave the inaugural address in the Canadian Science Policy Centre lecture series. The discussion covered a number of difficulties for science and governance—and science in governance—that are emerging in the 21st century as a result of the rapid development of information and communication technologies (ICTs). In this post, I’ll recap a few of his main points (with my usual editorial gusto), and add further detail to one point that I felt was ambiguous and worth getting clear on.
Sir Peter opens with a nice statement of the paradoxical situation in which we find ourselves these days: information has never been easier to share and access than it is now. Yet just as the bottlenecks to transmitting information have been opened up, the reliability of the information itself—the content flowing through these ICT conduits—is being undermined by the ease with which misleading information is propagated, and with which downright false information can be created and spread.
Misinformation, whether intentional or not, is very hard to tamp down once it picks up speed. It turns out that misinformation is usually much more interesting than boring old reality, and popularity sits at the heart of most algorithms that prioritize information. (Gotta make sure we’re as cheery as can be when we get smacked with all that precious advertising!) Furthermore, we run into strong confirmation biases, scrutinizing information that conforms with our beliefs much less than information that contradicts them. We also make up our minds quickly: a given piece of information often carries more sway by arriving first on the scene than by being true. Finally, we create personalized echo chambers in our social media worlds, bumping up the information we already agree with—giving the old confidence a little boost—rather than forcing ourselves to encounter the people and ideas we find altogether disagreeable.
Whither the experts in all this, to set us all straight? The current anti-elitist sentiment, coaxed into flame by so many years of decision-making in smoke-filled rooms that went well for some but not most of us, is very easily brushed onto experts as well. After all, if there’s anything that our society has taken from Derrida and Foucault, it’s that belief is all there is, and anyone purporting to sell “truth” is actually just selling you beliefs with a healthy side of coercion—“knowledge” is just a word that powerful people use to manipulate the weaker-thans, who only get to have beliefs or “perspectives.” In such a situation, an expert is just someone with an opinion and the power to make other people hold it too. Little wonder, then, that experts have trouble setting the record straight on manifest untruths. Littler wonder still that some pretty powerful people manipulate these sentiments to ensure their continued power.
In this situation, how do we ensure that evidence finds a seat at the policymaking table? As Gluckman points out, the term “evidence” is usually understood more broadly in the policy world than in the scientific sphere: in the policy context, it often covers knowledge gained through personal experience, cultural heritage, and other sources. But even when lumped in with these other kinds of knowledge, science retains its distinct character in virtue of its methods, which are designed to overcome a number of known cognitive biases (even though some of the structures of science tend to push us right back into those biases).
However, scientists should not be so presumptuous as to think that science is all and only what’s needed to settle policy debates, a presumption that Sir Peter calls out under its wonderfully Hellenic name—hubris. This kind of view ultimately impedes the integration of science into decision-making.
Gluckman echoes Paul Cairney, who claims that scientists are good at defining problems, “but not so good at finding policy-acceptable, scalable and meaningful solutions.” Accordingly, he goes on to argue that scientists need to be good team players in the policy-making game, and I’ll break off here from the rest of his argument, with the recommendation that folks interested in the exciting conclusion read or watch the rest, and also potentially check out this paper for ideas about how to engage policymakers.
I break off here because I’d like to explore the idea that scientists are good at formulating problems but bad at finding solutions, which strikes me as a bit ambiguous, and potentially very problematic if interpreted the wrong way. If we take Gluckman’s statement to mean that scientists can define problems well on their own, and that only the solutions elude them, then we seem to be already heading down the path towards hubris that Gluckman worries about. After all, if someone struggles that much to find workable solutions, one can rightly wonder whether they’re asking the right questions at the outset. If we start out with the idea that researchers on their own are going to formulate the problem, we shouldn’t be surprised when the problem comes out looking like a research problem first and a policy problem second.
To avoid this road towards hubris, I contend that we need to interpret Gluckman’s statement as meaning that scientists are really valuable people to have at the table as we formulate our policy problems, and as we outline ways to gather information relevant to finding a solution. The solutions we explore will be importantly shaped by the way we frame the initial problem. But scientists don’t have all the tools for and answers to policy problems, and need to stop believing that they do—this is exactly Sir Peter’s point. Keeping this in mind, along with the idea that formulating the problem will have an important impact on the subsequent investigations we undertake and the solutions we devise, we should not say that scientists on their own are good at formulating problems, lacking only the ability to find viable solutions.
This strikes me as an appropriately charitable interpretation of what Gluckman urges—but I think that this specification is important nonetheless, as misinterpretation here would have substantial implications. What are the practical issues at stake? How we think about this issue will have a strong impact on who’s around the table at which points in the process—and we know that process is an important ingredient in getting the right evidence, in the right form, to the right people, at the right time. Integrating science into policy requires the co-production of knowledge, and that means integrating more people into the knowledge-making process, not just packaging it differently once it’s been produced. And this is the heart of my worry here: in saying that scientists are good at formulating problems, we risk excluding people from the knowledge-production process at a crucial moment. We need the right people at the table when it comes time to operationalize a policy question into a study design; scientists are only some of those people.
This more open conception of knowledge-production can also act as a salve on the open sore of irritation with elites, who are seen as controlling the process and the people in it for personal rather than broad-based benefit. If scientists don’t want to be painted with the same brush, they need to take extra care to ensure that their actions make a clear statement of inclusion, rather than dismissing and speaking over those who don’t immediately share their point of view. We could all use a little humility in the policy and political processes, scientists included.
All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.