“Saving Science,” the (still relatively) new article from Daniel Sarewitz published in The New Atlantis, has been getting a lot of attention in the science and science policy communities, and for good reason: the point that he’s making is one that’s very important right now in a context of government budgetary constraints and heightened scrutiny of research. He argues that science advances most when it is directed toward specific problems, especially those of technological innovation, and ties itself in knots when left to be led by the curiosity of researchers. Consequently, research should be directed to solve concrete, mostly technological problems, and it should be held accountable on this basis. Of course, he’s not the only author to be discussing the shortcomings of present evaluation and reward systems; others have also reported on the perverse effects within the scientific community that can result from diminishing research funding and hyper-competition. So what’s all this fuss about, anyway?
Sarewitz’s discussion aims to take down a “beautiful lie,” advanced by Vannevar Bush, that science must be allowed to advance under its own internal logic, driven by the free-spirited curiosity of researchers, unaccountable to the accountants and their economic pressure, and provided a windbreaker from political blowhards. In fact, beautiful as this idea is, Sarewitz concludes that it is patently false, and that heading down this rabbit hole has provided us only “supposed knowledge,” which all too often turns out to be “contestable, unreliable, unusable, or flat-out wrong.”
But surely science has made at least some advances in the last couple of millennia, no? According to Sarewitz, the US Department of Defense (DOD) has been the major driver of that advance, and he identifies three key points for his position:
- Science advances most rapidly when it is directed toward specific problems, especially those of technological innovation.
- When science isn’t directed in this way, it ties itself in knots and produces “supposed knowledge.”
- Science itself would benefit from being exposed to societal end-user pressures, rather than being insulated from them—“carefully and appropriately” exposed, of course.
The DOD was apparently so successful because it was able to identify technological goals deemed crucial to national security, and thereby outwit both “the logic of the marketplace” and “the capriciousness of politics.” Examples include the development of computers, jet engines, and transistors. These days things are different, though; without a good, old-fashioned Cold War, the bureaucrats and politicos are choking DOD’s innovation. Cramping its style, even. Furthermore, the projects that DOD is developing these days also have less potential for spillover—ostensibly—into non-military applications than its projects in the past. (It’s almost like Sarewitz has never seen Domino’s deliver a pizza with a drone.) Nonetheless, DOD did three things right, according to Sarewitz:
- They brought the right people together, bright minds and all that.
- They disciplined this group by giving them a very tangible, technological goal to accomplish.
- They shielded the group from market rationality by selecting a goal that seemingly no venture capitalist in their right mind would bet on.
The article isn’t all speculative, as Sarewitz does explore some specific examples to illustrate and back up his points. However, the claim that “for much of human history, technology advanced through craftsmanship and trial-and-error tinkering, with little theoretical understanding” rings a bit hollow when one thinks back on individuals such as Archimedes, Galileo and da Vinci. Surely these three individuals had pretty good street cred with both the theoretical and technological posses; they rolled with two crews.
In fact, Sarewitz goes so far as to claim that technology has always led the way, with theoretical research following behind in its wake. I won’t offer an in-depth discussion here of the relationship between theoretical research, measurements, and the technological tools that we use to take measurements (though I’ve done some of that work elsewhere, if you’re interested and feeling masochistic), but I will state that I think a better view of the relationship between theory and practice sees them as quite entangled. They’re dance partners, but neither leads; it’s wonderfully chaotic.
And in fact the idea that pure science and applied science are two separate and separable things (or that theory and practice are, or that science and technology are) is a problematic view altogether. Sarewitz may argue that technology has cleared the path for science “for much of human history,” but would Archimedes, Galileo and da Vinci even understand our current distinction, or has “much of human history” taken place since the 19th century began? Perhaps Benoit Godin was right when he argued that basic research has unresolvable social, political and maybe even moral dimensions to it, and that basic and applied are two ends of a spectrum, not two distinct categories. This kind of position presents no problem to my preferred view of a mish-mash of technological and scientific progress, hurtling chaotically towards the future, but if indeed no surgery can successfully separate these Siamese twins, then Sarewitz’s position is in trouble.
So his suggestion is that technological application needs to lead the way, because pure science, left to itself, can apparently only set standards that lead to “supposed knowledge.” But this underestimates the role that external influences can play in perverting science, and consequently their role in contributing to the production of “supposed knowledge.” It also leaves in a bind all those sciences that lack a clear technological application, since they would no longer have any way to set goals at all. And it discounts the fact that scientists themselves are often able to draw a practical line between good and bad science: after all, there are indeed scientists who, even from within this too-closed circle, are decrying the proliferation of junk research.
Alright, I’ve had enough fun expounding in hyperbole: time to put my own view on the line. Sarewitz is indeed correct that breaking open the closed circle of science has a lot of value to bring. Insulated research systems really can spiral into navel gazing, and steering out of that tailspin is important. (Adam Briggle and Bob Frodeman draw similar conclusions about philosophy, and very compellingly I might add!) How can we best do so? By establishing a healthy balance between internal and external measures of value and quality. Being totally beholden to external measures can be just as vicious as being totally beholden to internal ones; after all, there’s a long and colorful history of “accountability” being used as a weapon. In the context of research, academic departments sometimes receive lower evaluation scores than parking complexes in university reviews for “accountability” purposes, and government departments conducting environmental research sometimes get the ax (or, on a smaller scale, the hatchet, or perhaps the pen knife) for “not offering value” when their research has the potential to derail the best-laid economic plans.
We need to balance internal and external pressures on the research system. As Sarewitz himself argues, it is internal pressures overpowering external ones that have apparently got us into this mess, yet it was the DOD’s ability to keep certain external powers at bay (namely, the marketplace and the legislature) that ostensibly made it so successful. Sarewitz praises a cancer stakeholder group for getting a bunch of researchers interested in “revisiting dogma.” Let’s not replace one dogma with another, swinging the pendulum from overpowering internal pressures to overpowering external ones. The tension between these pressures must be sustained, not resolved. “Everything should be made as simple as possible, but not simpler.” Somebody smart said that once.
**Addendum, based on further discussion**
There are good pressures and bad pressures, and there are internal pressures and external pressures. These don’t line up in any obvious way, as there are good and bad inside just as there are good and bad outside. Inside and outside pressures certainly need to be balanced against each other, but good pressures also need to overpower bad ones. Perhaps a point that I haven’t stressed enough is that I think a mix of pressures actually helps us to identify the good ones and the bad ones. Bob and Adam’s new book makes an interesting case in this respect, as they show nicely how a completely insulated professional circle can talk itself into the most peculiar places and establish the most convoluted and perverting measures of quality. The whole thing becomes an echo chamber, where strange pronouncements can become accepted truths—totally banal, and absolutely beyond interrogation.
How, then, does the inside/outside balance help to delineate good things from bad? People outside the echo chamber are less likely to hesitate in telling us that something is totally nuts. And they’re also less likely to hold back from telling us something that might seem (to us) to itself be totally nuts. Adam and Bob show nicely that there are virtues to disciplinarity, but that these virtues miscarry entirely if we lock ourselves inside an ivory tower, forget that an outside world exists, and allow the critical distance from our own practice to collapse—the very distance that allows us to distinguish the good pressures from the bad.
All views expressed are those of the individual author and are not necessarily those of Science-Metrix or 1science.