Thoughts on Recent Bio-Secrecy

by Alex Wellerstein, published December 21st, 2011

The New York Times ran a cover story today about a request by the National Science Advisory Board for Biosecurity to the journals Science and Nature to avoid printing certain details about new research on making a specific influenza virus strain more virulent (“Seeing Terror Risk, U.S. Asks Journals to Cut Flu Study Facts,” December 20, 2011). It’s an interesting case. Here are some quick, historically tinted thoughts:

  • There is a lot of hemming and hawing in the article over whether this is “censorship” or not, and whether scientists will abide by it. The obvious historical parallel is the “self-censorship” campaign by the physicists in 1939 to avoid publishing on nuclear fission chain reactions. This campaign, led by Leo Szilard, is generally remembered as a failure: Frédéric Joliot-Curie completely blew off Szilard’s concerns and published a very optimistic assessment of the possibility of chain reactions, which did lead scientists in half a dozen countries to urge their governments to start nuclear weapons programs. However, in my work I argue that we should really be more surprised at how successful Szilard was. The idea of chain reactions actually being a short-term (that is, World War II-sized) problem was pretty much science fiction at that point, and Szilard did manage to get real checks on publication put in place at physics journals in the US, UK, and Denmark. This system continued even after Joliot-Curie published, and became a foundation for the monitoring of scientific work in the area. If we measure the success of the system by one result (did people know that more than one neutron was released per fission of U-235?), then the censorship was a failure; if we measure success by the creation of a new organization and new interest in the problem, despite it appearing to be a very long-term threat, then it was something of a success.[1]
  • How successful can a system of self-censorship be? Over a long time scale, probably not that successful. The sorts of “secrets” discussed in this article are going to be spread very quickly. Perhaps not by publication, which is a clunky way to spread biological techniques anyway. They will be spread by the people who do the research, who are going to be, in the course of their academic careers, training students, visiting and working with other labs, working with colleagues, adapting these techniques to new problems, getting very used to them (and, with that familiarity, probably losing some of their fear of them), and so on. Here’s the rub: the more interesting these techniques are, the more important these articles are, the more important these particular scientists are, the more these things are going to spread around, unless they (foolishly) renounce the work and the techniques and go study something else. Which won’t happen, unless there really were no way for them to publish on the subject at all (which has a way of killing the interest of younger students), and it’s hard to see how you would enforce that (maybe for one country or two, but it’s a little late in the game to contemplate worldwide controls on articles relevant to bio-security, and there would be quite a lot of resistance to the idea, for good reasons). That is, this is being treated (in the article) as an issue that can really be affected by leaving out a few key details in published articles. But the modes of transmission of scientific knowledge are much broader than that, and my understanding is that in biology in particular, the level of tacit knowledge (the technique of it) is quite high.
  • As one scientist puts it, it’s going to be hard to give it only to the right people and not to the wrong people. That’s the exact difficulty of a system of secrecy: secrecy is about dividing knowledge into classes or categories and trying to make sure that the right people have access to some of them and the wrong people only to others. That’s what security clearances are meant to facilitate. It’s not something you can do in an offhand way — who is going to decide who is “right” and “wrong”? Traditionally this has meant background checks, FBI investigations, things of that nature.
  • Who is the enemy to be feared? “Amateurs and terrorists.” This is a very interesting set of targets. It calls to mind people who don’t know much about what they are doing. I am personally not too worried about those people — they are probably not going to know how to set up a facility to produce this sort of thing without also killing themselves in the process. The track record on amateur bio-terror is not that great, and, even more than with nuclear weapons, there is the possibility of extreme “blowback” (you’ll kill your allies as well as your enemies if a flu sweeps the globe). That basically leaves only the apocalyptic end-the-world types, and I’m not sure how many of those really have the means or the wherewithal to pull something like this off. There are easier ways to be an amateur terrorist.
    Here’s the enemy I’m worried about: grad students, postdocs, and professors. People who are not amateurs but maybe are interested in causing mayhem. People who know what they are doing. These are the people who historically have caused the most trouble in these sorts of realms. We spend a lot of time worrying about guys in caves using high-tech weaponry to kill us, but the track record of non-cave-dwelling scientists going rogue is pretty bad as well. The fact is, there is nothing specific about being a scientist which keeps one from doing awful things. This is where the question of how you divide up the world — who goes into what category — matters. If you’re doing it just based on university affiliations, then you might as well have no system at all.
  • Can scientific communities be voluntarily discreet? I think they can — but only for a limited amount of time. There have been moments in the history of nuclear technology when knowledge was “known” amongst individuals in the unclassified community but not published. Many ideas about laser fusion, for example, were known by scientists in Germany in the 1970s even while they were still classified in the United States. Said scientists didn’t publish on them, but discussed them in their own work and at conferences. After about 10 years of that, though, people were itching to publish. Why? Because it’s easy to find these sorts of things “obvious” after you’ve known them for a long time, it’s easy to feel familiar with them, and it’s also easy to see that knowledge of these little “details” alone does not a weapon make. I’m not criticizing this; it’s what one would expect. All of this assumes there aren’t people who deliberately want to publish this information — for whatever reason — by finding ways to put it up on the Internet or something like that.
  • Lastly, are the prospects for being secretive about biological threats worse or better than nuclear ones? One lesson I like to hammer home when talking to people about this sort of thing is that, in theory, nuclear weapons should be really easy to control. Their production is entirely dependent on large stocks of a single element that is not used for too much else (uranium, which is at the beginning of any fuel or enrichment cycle); they require (even in this age of centrifuges) reasonably large facilities with detectable profiles; they require the participation of lots of different kinds of experts to pull off (you need people with a lot of different skill sets); and they are the sort of thing that generates enough of a known threat that people are willing to go to fairly extreme measures to prevent others from getting them. And yet, we have not the greatest track record (but also not the worst) at controlling them, because they are very attractive to states. I think the prospects of controlling biological threats — or nanotechnological threats, once those get going — are very slim. I find the progress in synthetic biology very troubling for this reason. Within the next four decades or so, every university bio lab in the country (and soon after, the world) is probably going to have the capability to make customized flus, plagues, and other nasties. Yes, they might also be able to make cures for them (so the proponents say), but I see no reason to assume that the balance between offense and defense is going to be any closer to parity in this field than it is in every other one (where offense usually has a steep advantage). Bottom line: We’re used to thinking of the bomb as the hard case. It’s actually the easy case. Everything gets a lot harder from here on out.

Now we return to your normal programming…

  1. For more on the Szilard self-censorship attempt, I highly recommend Spencer Weart, “Scientists with a Secret,” Physics Today (February 1976), 23-30.
