The blue flash

by Alex Wellerstein, published May 23rd, 2016

This last weekend was the 70th anniversary of Louis Slotin’s criticality accident. One slip of a screwdriver; a blue flash and wave of heat; and Slotin had a little over a week to live. It’s a dramatic story, one that has been told before. I tried to give it a little bit of a fresh look in my latest piece for the New Yorker’s Elements Blog: “The Demon Core and the Strange Death of Louis Slotin.”

In researching the piece, I looked over a lot of technical literature on the accident, as well as numerous accounts from others who were in the room at the time. A few things stuck out to me that didn’t make it into the piece. One was that it was remarkably non-secret for the time. Los Alamos put out a press release almost immediately after it happened (by May 25th, five days before Slotin’s death, it was in national newspapers), and followed it up with more after Slotin’s death. For mid-1946, when the Atomic Energy Act had not yet been signed and the future of the American nuclear infrastructure was still very much in question, it was remarkably transparent. The press release was where I saw the phrase “three-dimensional sunburn” for the first time.

I also went over the account of Slotin’s case that was published in The Annals of Internal Medicine in 1952. Slotin isn’t named, but he’s clearly “Case 3.” Harry Daghlian, who also died from an accident with the same core, is “Case 1,” and Alvin Graves, who was the nearest person to Slotin during his accident, and later became a director of US nuclear weapons testing, is “Case 2.” The article is long and technical, and ends with some of the most disturbing photographs I have ever seen of the Daghlian and Slotin accidents. There is a photo of Daghlian’s hand that has been reproduced in many places (including in Rachel Fermi’s Picturing the Bomb), but I’d only previously seen it in black and white. It is much worse in color — the contrast between the white blistered skin and the pink-red stuff under the cut-away area is dramatic and disturbing. There are others in the same series that are just as bad if not worse: blackened, gangrenous fingers. Slotin’s photos in that article are comparatively tame but still pretty unsettling. Blisters. Cyanotic tissue. A photograph of his left hand — the one that was closest to the reacting core — on the ninth day of treatment (his last day alive) looks almost corpselike, or even claw-like. It is unsettling. I will not post it here.

An anonymous e-mail tipped me off that there were more photographs, and more documents, at a collection at the New York Public Library. These were part of a collection deposited by Paul Mullin, who authored the Louis Slotin Sonata, a very interesting, very curious play about Slotin from the late 1990s. I haven’t seen the play, though I had seen mentions of it for a while. Mullin’s materials were fascinating and very useful. There were two boxes. The first was mostly notes relating to the creation of the play. It is always interesting to see how another researcher takes notes, much less one whose end-product (a play) is very different from the sort of thing I do. It does not take much glancing at his notes to see that Mullin got as deep into this topic as anyone has. The second box contained research materials: four folders of documents obtained from Los Alamos under the Freedom of Information Act, and a folder of photographs.

The hands of Louis Slotin, shortly after admission to the Los Alamos hospital. Source: Los Alamos National Laboratory, via the New York Public Library (Paul Mullin papers on the Louis Slotin Sonata).

The photographs were, well, terrible. They included the ones from the Annals of Internal Medicine article, but also many more. Some showed Slotin naked, posing with his injuries. The look on his face was tolerant. There were a few more of his hand injuries, and then the time skips: internal organs, removed for autopsy. Heart, lungs, intestines, each arranged cleanly and clinically. But it’s jarring to see photographs of him on the bed, unwell but alive, and then in the next frame, his heart, neatly prepared. The photo above, of just his hands, is one of the tamest of the bunch, though in some sense, one of the saddest (there is a helplessness, almost like begging, in the position). I didn’t make copies of the really awful ones. History is often very voyeuristic — I joke with students that I read dead people’s mail for a living — but, as I commiserated with Mullin over Twitter, at some point you start to almost feel complicit, as silly as that notion is.

The documents were invaluable. They mostly covered the period immediately after the accident — people checking in on Slotin’s health, the complicated legal aspects of dealing with the death of a scientist (and with his distraught family), the questions of what to do next. An inordinate amount of paperwork was generated in dealing with the disposition of Slotin’s automobile (a 1942 Dodge Custom Convertible Coupe). The Army’s interactions with Slotin’s family appeared sympathetic and generous. There appears to have been no cloak-and-dagger regarding the entire affair. Slotin was, after all, a friend to many of those at Los Alamos, and a key member of their “pit crew.”

One of the accounts that I found most fascinating was that of the security guard, Patrick Cleary, who was in the room when the accident happened. Cleary was there because you don’t just keep a significant proportion of the nation’s fissile material stockpile unguarded. He seems to have understood little about what risks his job entailed, though:

When the accident occurred, I saw the blue glow and felt a heat wave. I knew something was wrong, but didn’t know exactly what it was, when I saw the blue glow and somebody yelled. … Our instructions are also to keep in sight of all active material that is around, except in the case of a critical assembly, but [I] am not sure about that. I did not actually know what the material or sphere was at the time, or anything about it.

When Cleary saw the flash and heard yelling, he literally took off for the hills, running. He was called back, as the scientists tried to reconstruct where people were standing for the purposes of dosage calculation. Cleary, in fact, was the last person to leave, because security guards can’t walk off the job — he had to wait until a replacement came.

Close-in shot on the Slotin accident re-creation. The beryllium tamper/reflector (they called it a tamper) is on top; the plutonium core is the smaller sphere in the center. Notice in this particular shot, they have a “shim” on the right. Slotin removed the shim right before his fatal slip. The scientist re-creating the photograph is physicist Chris Wright. I wonder if they took extra precautions in making this particular set of photos?

For a long time I had been wondering what happened to the so-called “demon core,” which was also known as “Rufus,” something that strikes me as just too strange to be anything but true. It has been reported many times that it was used at Operation Crossroads, at the Able shot. I found some documentation that suggested this was very unlikely. For example, shortly after the accident (Slotin was still alive), lab director Norris Bradbury wrote to a few other scientists at Los Alamos about how the accident had affected the forthcoming Crossroads tests. He notes that the sphere in question was getting “its final check” during the accident — so it was definitely slated for Crossroads. But he continues:

Obviously Slotin will not come to Bikini. [Raemer] Schreiber will come although the date of special shipment was postponed one week to allow us to pull ourselves together. Only two shipments will be made at this time as I see no courier for the third. The sphere in question is OK although still a little hot but not too hot to handle. We will save it for the last in any event if it is needed at all.

Which seemed pretty suggestive to me that they weren’t going to use it: only two shipments were going to be made early on, and “the sphere in question” was not one of them. It would be saved for last, if it was needed at all. And “the last” in this case was the “Charlie” shot, which was cancelled.

I wanted some more confirmation, though, because a plan isn’t always a reality. I e-mailed John Coster-Mullen, who I knew had done a lot of research into the Slotin and Daghlian accidents. (John is the one who provided me with these wonderful high-resolution photographs of the Slotin re-enactment, and some of the documents in his appendices to Atom Bombs were very useful for this research.) John suggested I get in touch with Glenn McDuff, a retired scientist at Los Alamos who was also one of the consultants on Manhattan (he drew the equations on the chalkboards, among other things). This turned out to be a great tip: Glenn has been working on an article about the fate of the first eight cores. There is much still to be declassified, but he was able to share with me the fate of the core in question: it had not been used at Crossroads; it had been melted down and the material re-used in another core. Glenn says there was no particular reason it was melted down. It was old, as far as cores went, and they were constantly fiddling with them in those days — the days in which they still gave bomb cores individual nicknames, because there were so few of them.

For nuke nerds, this is the big “reveal” of my New Yorker piece, the one thing that even someone very steeped in Los Alamos history probably doesn’t know. (For non-nuke nerds, I doubt it registers as much!) And even though it is a bit anticlimactic, I actually prefer it to the version that the core was detonated shortly after the accident. The part about them immediately re-using the core in a weapon just always seemed a little suspicious to me — it almost implied that they had done it due to superstition, and that didn’t really jibe with my sense of how these scientists viewed the accident or these weapons. And even the anticlimax has a bit of a literary touch to it: the “demon core” wasn’t expended in a flash, it was melted down and reintegrated with the stockpile. Who knows whether bits of its plutonium ended up in other weapons over the years, whether any of that core is still with us in the current arsenal? There’s perhaps something even a bit more “demonic” about this version of the story.


Mapping the US nuclear war plan for 1956

by Alex Wellerstein, published May 9th, 2016

A few months back, the National Security Archive made national headlines when they released a 1956 US target list they had obtained under the Freedom of Information Act. The target list outlined over a thousand Strategic Air Command nuclear targets in the Soviet Union, Eastern Bloc, the People’s Republic of China, and North Korea. The Archive had posted a small graphic of the ones in Eastern Europe, but hadn’t digitized the full list. Several weeks ago, the people at the Future of Life Institute did just this, digitizing the complete dataset — no small task, given that these were spread over several hundred non-OCR-able pages of smudgy, 60-year-old government documents.

A sampling of the 1956 target list obtained by the National Security Archive. The digits encode latitude and longitude points, among other bits of information.

I recently attended a conference that the FLI put on regarding nuclear war. FLI was co-founded by the MIT physicist Max Tegmark and his wife Meia (among a few others), both of whom I was glad I got to spend some time with, as they are interesting, intelligent people with interesting histories. They are interested in promoting work that decreases existential threats to the human race, which they see as possibly including things like nuclear war and nuclear winter, but also unhampered artificial intelligence, climate change, and the possible negative futures of biotechnology. These are all, of course, controversial topics (not always controversial among the same groups of people, to be sure). They’re an interesting group, and they are stirring up some interesting discussions, which I think is an unambiguously positive thing even if you don’t agree that all of these things are equally realistic threats, or threats on the same level.

The FLI's digitized version of the target list. Click the image to view their interactive version.

The target list, mapped out as the FLI did above, is already pretty impressive. While I was at the conference, I got the idea that it wouldn’t be that hard to reconfigure a few parts of the NUKEMAP code to allow me to import huge numbers of target lists in the right format. NUKEMAP already supports the targeting of multiple nukes (the feature is a little cryptic — you create a detonation, then click “launch multiple,” then move the cursor and can then create another one, and repeat as necessary), but it didn’t have any automatic way of importing a large number of settings. Once I had done that, I then thought, what would it look like if I used realistic weather data to determine the fallout patterns from surface bursts? It only took a little bit of further work to write a script that can poll OpenWeatherMap’s public API and grab real-time wind speed and direction information for any given set of coordinates. This renders quite an impressive image, though to do this for some 1,154 targets requires a lot of RAM (about 1.5 GB) and a fast computer. So it’s not something one wants to necessarily do all the time.
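For the curious, here is a minimal sketch of what that kind of weather polling can look like, assuming OpenWeatherMap’s current-weather endpoint and parameter names (this is my own illustration, not the actual NUKEMAP script). The one subtle step is that the API reports the direction the wind blows from, while a fallout plume drifts the opposite way:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://api.openweathermap.org/data/2.5/weather"

def build_weather_url(lat, lon, api_key):
    """Build a current-weather query for one target coordinate."""
    return API_BASE + "?" + urlencode({"lat": lat, "lon": lon, "appid": api_key})

def downwind_bearing(wind_deg):
    """OpenWeatherMap's 'deg' is the direction the wind blows FROM
    (meteorological convention); the plume drifts the opposite way."""
    return (wind_deg + 180) % 360

def fetch_wind(lat, lon, api_key):
    """Return (speed in m/s, downwind bearing in degrees) for one target."""
    with urlopen(build_weather_url(lat, lon, api_key)) as resp:
        wind = json.load(resp)["wind"]
    return wind["speed"], downwind_bearing(wind["deg"])
```

Looping this over a thousand-odd coordinates is straightforward; rendering a thousand fallout plumes on a map is what eats the RAM.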

I have captured the results as a series of interactive screenshots, to save you (and your web browser) the trouble of trying to render these yourself. You can see how changing the yield dramatically changes the fallout (assuming surface bursts, of course). The interactive viewer is available by clicking the image below, or this link.

Screenshot of my interactive viewer for the nuclear war plan. Click to view.

I also sampled weather data from a few days in a row, to see what differences it made from a practical standpoint. It is remarkable how much wind speed and direction can vary from day to day. In some of these “simulations,” Copenhagen, Denmark, avoids fallout. In others, it does not. Under some weather conditions (and yield selections), northern Japan gets some fallout from an attack on the Soviet-controlled Kuril Islands; in others, it does not. The NUKEMAP’s fallout estimator is, of course, a very simplified model, but even with that you can get a sense of how much difference a shift in the winds can make.

Having done that, I started to wonder: what would the casualties of such an attack look like? I don’t have population density data of the relevant areas from 1956 that has sufficient granularity to be used with my normal NUKEMAP casualty estimating script, but I figured that even the present-day population figures would be interesting. If you try to query the casualty database with over a thousand targets it just says “no,” so I wrote another script that would query it target-by-target and tally the results.
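The target-by-target script is just a loop with a running total. A sketch of the shape, with a stand-in `query_casualties` callable (its name and signature are my invention, not the actual NUKEMAP backend):

```python
def tally_casualties(targets, query_casualties):
    """Sum casualty estimates one target at a time.

    `targets` is a list of (lat, lon) pairs; `query_casualties` returns a
    (fatalities, injuries) tuple for a single detonation at that point.
    """
    total_fatalities = total_injuries = 0
    for lat, lon in targets:
        fatalities, injuries = query_casualties(lat, lon)
        total_fatalities += fatalities
        total_injuries += injuries
    return total_fatalities, total_injuries
```

Each target is queried independently, which is exactly why anyone who lives inside two overlapping blast areas gets counted twice.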

The results were a bit staggering. I mean, I assumed it would be a large number. But they are really large numbers. Some of this is because the casualty script is double-counting “victims” when they are inside the relevant blast areas of multiple detonations. At the moment, there’s no easy way around that (even for a small number of detonations, keeping track of who is already “dead” would require a lot of time and processing power, and to do it on the scale of a thousand is just not possible with the way it is set up currently).
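A toy example makes the double-counting problem concrete. Assume, purely for illustration (this is not NUKEMAP’s model), that population lives in grid cells and each detonation affects every cell within some radius:

```python
def affected_cells(center, radius, cells):
    """All cells within `radius` of a detonation at `center`."""
    cx, cy = center
    return {c for c in cells if (c[0] - cx) ** 2 + (c[1] - cy) ** 2 <= radius ** 2}

def naive_total(detonations, cells, pop):
    """Per-detonation sums: a cell hit twice is counted twice."""
    total = 0
    for center, radius in detonations:
        total += sum(pop[c] for c in affected_cells(center, radius, cells))
    return total

def dedup_total(detonations, cells, pop):
    """Count each cell once, at the cost of remembering every cell hit so far."""
    hit = set()
    for center, radius in detonations:
        hit |= affected_cells(center, radius, cells)
    return sum(pop[c] for c in hit)
```

The deduplicated version has to carry the full set of everything already hit, which is what becomes expensive when a thousand blast areas overlap over fine-grained population data.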

An example of an area where a lot of "double-counting" is taking place — St. Petersburg. The circles show various pressure rings for 1 Mt weapons, which are used by NUKEMAP to calculate casualties. Maybe just a little overkill...

On the other hand, the casualty estimate does not take into account fallout-related casualties, or the long-term casualties caused by the destruction of so much infrastructure. The target list also doesn’t tell us how many targets were, in fact, targeted redundantly with multiple weapons — the idea that it might have been “one nuke, one target” is definitely an incorrect one. Even before World War II had completely ended, US planners for nuclear war against the Soviet Union understood that not every bomb would make it to a target, and so planned for multiple weapons to be targeted on each. So “double-killing” those people in some of these locations is probably not so wrong. It likely isn’t all that crazy to think of these numbers as back-of-the-envelope estimates for what would result if you waged this kind of attack today (which is not to imply that the US would necessarily do such a thing). But I don’t want anyone to think I am implying any kind of real certainty here. I would, in fact, be dubious of anyone, at any time, implying a lot of certainty about these kinds of things, because we (fortunately) lack very much first-hand experience with this kind of “data,” outside of the results at Hiroshima and Nagasaki, which were in many ways particular to their time and place.

Casualty figures, of course, require making assumptions about the size of the nuclear weapons used, as well as the fuzing settings (airbursts generate far less downwind fallout in comparison to surface bursts, but they can greatly increase the casualties for people in civilian structures). For 1956, there would have been a “mix” of yields and types of weapons. We don’t have data on that to my knowledge. As a simplifying assumption, I just ran the casualty calculation with a number of yields, and with both surface burst and airbursts (optimized to increase the range of the 5 psi blast area) options. For the sake of space and avoiding the appearance of false precision, I have rounded them to their nearest million below:

             surface burst            airburst
             injuries  fatalities     injuries  fatalities
  10 Mt         259       239            517       304
   5 Mt         210       171            412       230
   1 Mt         120        70            239       111
 500 kt          89        46            185        77
 100 kt          39        16             94        30
  50 kt          25        10             66        19

At first I thought some of these numbers just seemed fantastical. Russia today only has a population of 140 million or so. How could we get up to numbers so high? Some of this is, again, because of double-counting, especially with the very big bombs: a 10 Mt bomb on Moscow kills 5.5 million people and injures 4 million, by NUKEMAP’s estimate, which combined is 70% of the 13 million people in the area of the 1 psi blast radius of such a weapon. (If that seems high, remember that a 10 Mt bomb goes well outside the city of Moscow itself — the Great Moscow Metro Region is about 16 million people total.) Since a large number of nukes were targeted around Moscow, that’s a lot of double counting, especially with such high-yield weapons.

So the very-big numbers I would take with a very hefty grain of salt. NUKEMAP’s casualty estimator really isn’t meant for guessing multiple, overlapping damage areas. At best, it attempts to give back-of-the-envelope estimates for single detonations. Separately, the US arsenal at the time was around 10,000 megatons worth of destructive power. So they obviously couldn’t have been (and wouldn’t have been) all multi-megaton monsters. But, all the same, I don’t think it’s at all improbable that the multi-megaton monsters that were in the arsenal would have been targeted at heavily populated regions, like Moscow. Especially given the fact that, again, there would have been multiple nukes aimed at each target.

I also thought it would be interesting to take the casualties and break them apart by region. Here’s where I found some really startling results, using a 1 Megaton (1,000 kiloton) airburst as my “model” detonation, again in millions:

                        injuries  fatalities
  Soviet Union             111        55
  Warsaw Pact               23        10
  China + North Korea      104        46
  Total                    239       111

To make this point more clearly: 820 of the 1,154 targets were inside the Soviet Union proper. They are responsible for 48% of the casualties in the above scenario. Non-Soviet countries in the Warsaw Pact (Eastern Europe, more or less) were responsible for “only” 188 of the targets, and 9% of the casualties. China and North Korea had only 146 of the targets, but were accountable for 43% of the casualties. Which is to say, each “detonation” in the USSR produced around 203,000 casualties on average, each one in Eastern Europe around 176,000, and each one in Asia over 1 million. That’s kind of bananas.
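Those per-target averages are just the regional totals divided by the target counts. Spelled out, using the 1 Mt airburst figures from the table above (in millions):

```python
# Regional totals (1 Mt airburst scenario) and target counts, from the text.
regions = {
    "Soviet Union":        {"targets": 820, "injuries": 111e6, "fatalities": 55e6},
    "Warsaw Pact":         {"targets": 188, "injuries": 23e6,  "fatalities": 10e6},
    "China + North Korea": {"targets": 146, "injuries": 104e6, "fatalities": 46e6},
}

def casualties_per_target(region):
    """Average casualties (injuries + fatalities) per detonation."""
    return (region["injuries"] + region["fatalities"]) / region["targets"]

for name, region in regions.items():
    print(f"{name}: {casualties_per_target(region):,.0f} casualties per target")
```

This reproduces the roughly 203,000 / 176,000 / million-plus split described above: the Asian targets were disproportionately urban.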

Now, these use modern (2011) population density figures, not those of 1956. But it’s still a pretty striking result. Why would this be? Partially because the Asian targets seem to be primarily in large cities. Many of the Soviet targets, by contrast, are of pretty isolated areas — remote military airfields in some cases — that only kill a few hundred people. It would make for a very interesting study to really get into the “weeds” of this target plan, and to sort out — systematically — what exactly was being targeted in each location, as best as we can. If we did that, we’d possibly be able to guess at whether an airburst or a surface burst was called for, and potentially even be able to judge target priorities, though the “bomb-as-you-go” method of attack used in the 1950s probably means that even low-priority targets would get nuked early on if they were on a path to a higher-priority one.

Total megatonnage of the US nuclear stockpile — nearly 10 gigatons by 1956, climbing to a peak of over 20 gigatons in 1959. Source: US Department of Energy

What does this exercise tell us? Two things, in my mind. One, this 1956 target list is pretty nuts, especially given the high-yield characteristics of the US nuclear stockpile in 1956. This strikes me as going a bit beyond mere deterrence, the consequence of letting military planners have just a little bit too much freedom in determining what absolutely had to have a nuclear weapon placed on it.

The second is to reiterate how amazing it is that this got declassified in the first place. When I had heard about it originally, I was pretty surprised. The US government has usually considered target information to be pretty classified, even when it is kind of obvious (we target Russian nuclear missile silos? You don’t say…). The reason, of course, is that if you can go very closely over a target list, you can “debug” the mind of the nuclear strategist who made it — what they thought was important, what they knew, and what they would do about their knowledge. Though times have changed a lot since 1956, a lot of those assumptions are probably still at least partially valid today, so they tend to keep that sort of thing under wraps. These NUKEMAP “experiments” are quick and cheap approaches to making sense of this new information, and as the creator of the NUKEMAP, let me say that I think “quick and cheap” is meant as a compliment. To analyze something quickly and cheaply is to spark new ideas quickly and cheaply, and you can always subject your new ideas to more careful analytical scrutiny once you’ve had them. I hope that someone in the future will give this target data some real careful attention, because I have no doubt that it still contains many insights and surprises.


Silhouettes of the bomb

by Alex Wellerstein, published April 22nd, 2016

You might think of the explosive part of a nuclear weapon as the “weapon” or “bomb,” but in the technical literature it has its own kind of amusingly euphemistic name: the “physics package.” This is the part of the bomb where the “physics” happens — which is to say, where the atoms undergo fission and/or fusion and release energy measured in the tons of TNT equivalent.

Drawing a line between that part of the weapon and the rest of it is, of course, a little arbitrary. External fuzes and bomb fins are not usually considered part of the physics package (the fuzes are part of the “arming, fuzing, and firing” system, in today’s parlance), but they’re of course crucial to the operation of the weapon. We don’t usually consider the warhead and the rocket propellant to be exactly the same thing, but they both have to work if the weapon is going to work. I suspect there are many situations where the line between the “physics package” and the rest of the weapon is a little blurry. But, in general, the distinction seems to be useful for the weapons designers, because it lets them compartmentalize out concerns or responsibilities with regards to use and upkeep.

Physics package silhouettes of some of the early nuclear weapon variants. The Little Boy (Mk-1) and Fat Man (Mk-3) are based on the work of John Coster-Mullen. All silhouette portraits are by me — some are a little impressionistic. None are to any kind of consistent scale.

The shape of nuclear weapons was from the beginning one of the most secret aspects about them. The casing shapes of the Little Boy and Fat Man bombs were not declassified until 1960. This was only partially because of concerns about actual weapons secrets — by the 1950s, the fact that Little Boy was a gun-type weapon and Fat Man was an implosion weapon, and their rough sizes and weights, were well-known. They appear to have been kept secret for so long in part because the US didn’t want to draw too much attention to the bombing of the cities, in part because we didn’t want to annoy or alienate the Japanese.

But these shapes can be quite suggestive. The shapes and sizes put limits on what might be going on inside the weapon, and how it might be arranged. If one could have seen, in the 1940s, the casings of Fat Man and Little Boy, one could pretty easily conjecture about their function. Little Boy definitely has the appearance of a gun-type weapon (long and relatively thin), whereas Fat Man clearly has something else going on with it. If all you knew was that one bomb was much larger and physically rounder than the other, you could probably, if you were a clever weapons scientist, deduce that implosion was probably going on. Especially if you were able to see under the ballistic casing itself, with all of those conspicuously-placed wires.

In recent years we have become rather accustomed to seeing pictures of retired weapons systems and their physics packages. Most of them are quite boring, a variation on a few themes. You have the long-barrels that look like gun-type designs. You have the spheres or spheres-with-flat ends that look like improved implosion weapons. And you then have the bullet-shaped sphere-attached-to-a-cylinder that seems indicative of the Teller-Ulam design for thermonuclear weapons.

Silhouettes of compact thermonuclear warheads. Are the round ends fission components, or spherical fusion components? Things the nuke-nerds ponder.

There are a few strange things in this category that suggest other designs. (And, of course, we don’t have to rely on just shapes here — we have other documentation that tells us about how these might work.) There is a whole class of tactical fission weapons that seem shaped like narrow cylinders, but aren’t gun-type weapons. These are assumed to be some form of “linear implosion,” which somewhat bridges the gap between implosion and gun-type designs.

All of this came to mind recently for two reasons. One was the North Korean photos that went around a few weeks ago of Kim Jong-un and what appears to be some kind of component to a ballistic case for a miniaturized nuclear warhead. I don’t think the photos tell us very much, even if we assume they are not completely faked (and with North Korea, you never know). If the weapon casing is legit, it looks like a fairly compact implosion weapon without a secondary stage (this doesn’t mean it can’t have some thermonuclear component, but it puts limits on how energetic it can probably be). Which is kind of interesting in and of itself, especially since it’s not every day that you get to see even putative physics packages of new nuclear nations.

Stockpile milestones chart from Pantex's website. Lots of interesting little shapes.

The other reason it came to mind is a chart I ran across on Pantex’s website. Pantex was more or less a nuclear-weapons assembly factory during the Cold War, and is now a disassembly factory. The chart is a variation on one that has been used within the weapons labs for a few years now, as my friend and fellow-nuclear-wonk Stephen Schwartz pointed out on Twitter, and shows the basic outlines of various nuclear weapons systems through the years. (Here is a more up-to-date one from a 2015 NNSA presentation, but the image has more compression and is thus a bit harder to see.)

For gravity bombs, they tend to show the shape of the ballistic cases. For missile warheads, and more exotic weapons (like the “Special Atomic Demolition Munitions,” basically nuclear land mines — is the “Special” designation really necessary?), they often show the physics package. And some of these physics packages are pretty weird-looking.

Some of the weirder and more suggestive shapes in the chart. The W30 is a nuclear land mine; the W52 is a compact thermonuclear warhead; the W54 is the warhead for the Davy Crockett system, and the W66 is low-yield thermonuclear weapon used on the Sprint missile system.

A few that jump out as especially odd:

  • In the Pantex version (but not the others), the W59 is peculiar in that it has an incorrectly-filled circle at the bottom of it. I wonder if this is an artifact of the vectorization process that went into making these graphics, and a little more indication of the positioning of things than was intended. Is the fill error meaningful, or just a mistake? Can one read too much into a few blurred pixels?

  • The W52 has a strange appearance. It’s not clear to me what’s going on there.
  • The silhouette of the W30 is a curious one (“worst Tetris piece ever” quipped someone on Twitter), though it is of an “Atomic Demolition Munition” and likely just shows some of the peripheral equipment to the warhead.
  • The extreme distance between the spherical end (primary?) and the cylindrical end (secondary?) of the W50 is pretty interesting.
  • The W66 warhead is really strange — a sphere with two cylinders coming out of it. Could it be a “double-gun,” a gun-type weapon that decreases the distance necessary to travel by launching two projectiles at once? Probably not, given that it was supposed to have been thermonuclear, but it was an unusual warhead (very low-yield thermonuclear) so who knows what the geometry is.

There are also a number of warheads whose physics packages have never been shown, so far as I know. The W76, W87, and W88, for example, are primarily shown as re-entry vehicles (the “dunce caps of the nuclear age” as I seem to recall reading somewhere). The W76 has two interesting representations floating around, one that gives no real feedback on the size/shape of the physics package but gives an indication of its top and bottom extremities relative to other hardware in the warhead, another that portrays a very thin physics package that I doubt is actually representational (because if they had a lot of extra space, I think they’d have used it).

Some of the more simple shapes — triangles, rectangles, and squares, oh my!

What I find interesting about these secret shapes is that on the one hand, it’s somewhat easy to understand, I suppose, the reluctance to declassify them. What’s the overriding public interest in knowing what shape a warhead is? It’s a hard argument to make. It isn’t going to change how we vote or how we fund weapons or anything else. And one can see the reasons for keeping them classified — the shapes can be revealing, and these warheads likely use many little tricks that allow them to put that much bang into so compact a package.

On the other hand, there is something to the idea, I think, that it’s hard to take something seriously if you can’t see it. Does keeping even the shape of the bomb out of the public domain impact participatory democracy in ever so small a way? Does it make people less likely to treat these weapons as real objects in the world, instead of as metaphors for the end of the world? Well, I don’t know. It does make these warheads seem a bit more out of reach than the others. Is that a compelling reason to declassify their shapes? Probably not.

As someone on the “wrong side” of the security fence, I do feel compelled to search for these unknown shapes — a defiant compulsion to see what I am not supposed to see, perhaps, in an act of petty rebellion. I suspect they look pretty boring — how different in appearance from, say, the W80 can they be? — but the act of denial makes them inherently interesting.


Maintaining the bomb

by Alex Wellerstein, published April 8th, 2016

We hear a lot about the benefits of “innovation” and “innovators.” It’s no wonder: most of the stories we tell about social and technological “progress” are about a few dedicated people coming up with a new approach and changing the world. Historians, being the prickly and un-fun group that we are, tend to cast a jaundiced eye at these kinds of stories. They often ignore the broader contextual circumstances that were required for the “innovation” to appear or take root, and the way they are told often makes the “innovator” seem more “out of their time” than they really were.

The “logo” of the Maintainers conference, which graces its T-shirts (!) and promotional material. I modeled the manhole design off of an actual manhole cover here in Hoboken (photograph taken by me).

Two of my colleagues (Andy Russell and Lee Vinsel) at the Science and Technology Studies program here at the Stevens Institute of Technology (official tagline: “The Innovation University“) have been working on an antidote to these “innovation studies.” This week they are hosting a conference called “The Maintainers,” which focuses on an alternative view of the history of technology. The core idea (you can read more on the website) is that the bulk of the life and importance of a technology is not in its moment of “innovation,” but in the “long tail” of its existence: the ways in which it gets integrated into society, needs to be constantly repaired and upgraded, and can break down catastrophically if it loses its war against entropy. There is a lot of obvious resonance with infrastructure studies and stories in the news lately about what happens if you don’t repair your water systems, bridges, subway trains, and you-name-it.

I’ve been thinking about how this approach applies to the history and politics of nuclear weapons. It’s pretty clear from even a mild familiarity with the history of the bomb that most of the stories about it are “innovation” narratives. The Manhattan Project is often taken as one of the canonical cases of scientific and technological innovation (in ways that I find extremely misleading and annoying). We hunger for those stories of innovation, the stories of scientists, industry, and the military coming together to make something unusual and exciting. When we don’t think the weapons acquisition is a good idea (e.g., in the Soviet Union, North Korea, what have you), these innovation stories take on a more sinister tone or get diluted by allusions to espionage or other “help.” But the template is the same. Richard Rhodes’ The Making of the Atomic Bomb is of course one of the greatest works of the innovation narrative of the atomic bomb, starting, as it does, with a virtual lightning bolt going off in the mind of Leo Szilard.

How do you service a Titan II missile? Very carefully. This is an RFHCO suit, required for being around the toxic fuel and oxidizer. Not the most comfortable of outfits. From Penson’s Titan II Handbook.

What would a history of the bomb look like if we focused on the question of “maintenance”? We don’t have to guess, actually: one already exists. Eric Schlosser’s Command and Control, which I reviewed on here and for Physics Today a few years ago, can be read in that light. Schlosser’s book is about the long-term work it takes to create a nuclear-weapons infrastructure, both in terms of producing the weapons and in terms of making sure they are ready to be used when you want them to be. And, of course, it’s about what can go wrong, either in the course of routine maintenance (the central case-study is that of a Titan II accident that starts when a “maintainer” accidentally drops a socket wrench) or just in the haphazard course of a technology’s life and interactions with the physical world (dropped bombs, crashed planes, things that catch on fire, etc.). (A documentary film based on Schlosser’s book premieres at the Tribeca Film festival this month, along with what sounds like a nuclear rave.)

There are other approaches we might fold into the “maintenance” of the bomb. Donald MacKenzie’s Inventing Accuracy uses the trope of invention, but the meat of the book is really about the way uncertainty about performance and reliability moved between the domains of engineering and policy. Hugh Gusterson’s anthropological study of the Livermore laboratory, Nuclear Rites, is particularly astute about the questions of the day-to-day work at a weapons laboratory and who does it. And the maintenance of infrastructure is a major sub-theme of Stephen Schwartz’s classic edited volume on the costs of the nuclear complex, Atomic Audit. But these kinds of studies are, I think, rarer than they ought to be — we (and I include myself in this) tend to focus on the big names and big moments, as opposed to the slow grind of the normal.

There are two historical episodes that come to my mind when I think about the role of “maintenance” in the history of nuclear weapons. Non-coincidentally, both come at points in history where big changes were in the making: the first right after World War II ended, the second right after the Cold War ended.

Episode 1: The postwar slump

From the very beginning, the focus on the bomb was about its moment of creation. Not, in other words, on what it would take to sustain a nuclear complex. In our collective memory, a “Manhattan Project” is a story of intense innovation and creative invention against all odds. But there’s a lesser-known historical lesson in what happened right after the bombs went off, and it’s worth keeping in mind anytime someone invokes the need for another “Manhattan Project.”

The Manhattan Project, formally begun in late 1942, was consciously an effort to produce a usable atomic bomb in the shortest amount of time possible. It involved massive expenditure, redundant investigations, and difficult trade-offs between what would normally be considered the “research” and “development” phases. Plans for the first industrial-sized nuclear reactors, for example, were developed almost immediately after the first proof-of-concept was shown to work — the normal stages of prototyping, scaling, and experimenting were highly compressed relative to the industrial practices of the time, a fact noted by the engineers and planners who worked on the project. The rush towards realization of the new technology drove all other concerns. The nuclear waste generated by the plutonium production processes, for example, was stored in hastily-built, single-walled underground tanks that were not expected to be any more than short-term, wartime solutions. When people today refer to the Manhattan Project as a prototypical case of “throw a lot of money and expertise at a short-term problem,” they aren’t entirely wrong (even though such an association leaves much out).

J. Robert Oppenheimer (at right) was the proud face of the successful “innovation” of the Manhattan Project. It is telling, though, that he left Los Alamos soon after the war ended. Source: Google LIFE image archive.

After the end of World War II, though, the future of the American nuclear complex was uncertain. In my mind this liminal period is as interesting as the wartime period, though it doesn’t get as much cultural screen time. Would the US continue to make nuclear weapons? Would there be an agreement in place to limit worldwide production of nuclear arms (international control)? Would the atomic bomb significantly change US expenditures on military matters, or would it become simply another weapon in the arsenal? What kind of postwar organization would manage the wartime creations of the Manhattan Project? No one knew the answers to these questions — there was a swirl of contradictory hopes and fears held by lots of different stakeholders.

We know, in the end, what eventually worked out. The US created the civilian Atomic Energy Commission with the Atomic Energy Act of 1946, signed by President Truman in August 1946 (much later than the military had hoped). Efforts towards the “international control” of the atomic bomb fizzled out in the United Nations. The Cold War began, the arms race intensified, and so on.

But what’s interesting to me, here, is that period between the end of the war and things “working out.” Between August 1945 and August 1946, the US nuclear weapons infrastructure went into precipitous decline. Why? Because maintaining it was harder than building it in the first place. What needed to be maintained? First and foremost, there were issues in maintaining the human capital. The Manhattan Project was a wartime organization that dislocated hundreds of thousands of people. The working conditions were pretty rough and tumble — even during the war they had problems with people quitting as a result of them. When the war ended, a lot of people went home. How many? Exact numbers are hard to come by, but my rough estimate based on the personnel statistics in the Manhattan District History is that between August 1945 and October 1946, some 80% of the construction labor left the project, and some 30% of the operations and research labor left. Overall there was a shedding of some 60% of the entire Manhattan Project labor force.

Declines in Manhattan Project personnel from July 1945 through December 1946. Note the dramatic decrease between August and September 1945, and the slow decrease until October 1946, after the Atomic Energy Act was passed and when things started to get on a postwar footing (but before the Atomic Energy Commission fully took over in January 1947). Reconstructed from this graph in the Manhattan District History.

Now, some of that can be explained as the difference between a “building” project and a “producing” project. Construction labor was already on a downward slope, but the trend did accelerate after August 1945. The dip in operations and research, though, is more troublesome — a steep decline in the number of people actually running the atomic bomb infrastructure, much less working to improve it.

Why did these people leave? In part, because the requirements of a “crash” program and a “long-term” program were very different in terms of labor. It’s more than just the geographical aspect of people going home. It also included things like pay, benefits, and working conditions in general. During the war, organized labor had mostly left the Manhattan Project alone, at the request of President Roosevelt and the Secretary of War. Once peace was declared, the unions got back into the game, and were not afraid to strike. Separately, there was a prestige issue. You can get Nobel Prize-quality scientists to work on your weapons program when you tell them that Hitler is threatening civilization, that they are going to open up a new chapter in world history, and so on. It’s exciting to be part of something new, in any case. But if the job seems like it is just about maintaining an existing complex — one that many of the scientists were having second thoughts about anyway — it’s not as glamorous. Back to the universities, back to the “real” work.

And, of course, it’s a serious morale problem if you don’t think your laboratory is going to exist in a year or two. When the Atomic Energy Act got held up in Congress for over a year, it introduced serious uncertainty as to the future of Los Alamos. Was Los Alamos solely a wartime creation, or a long-term institution? It wasn’t clear.

Hanford reactor energy output, detail. Note that it went down after late 1945, and they did not recover their wartime capacity until late 1948. Source: detail from this chart which I got from the Hanford Declassified Document System.

There were also technical dimensions to the postwar slump. The industrial-sized nuclear reactors at Hanford had been built, as noted, without much prototyping. The result was that there was still much to learn about how to run them. B Reactor, the first to go online, started to show problems in the immediate postwar period. Some of the neutrons being generated by the chain reaction were being absorbed by the graphite lattice that served as the moderator. The graphite, as a result, was starting to undergo small structural changes: it was swelling. This was a big problem. Swelling graphite could mean that the channels that held the fuel or let the control rods in could get warped. If that happened, the operators would no longer be in full control of the reactor. That’s bad. For the next few years, B Reactor was run on low power as a result, and the other reactors were prevented from achieving their full output until solutions to the problem were found. The upshot was that the Hanford reactors had around half the total energy output in the immediate postwar period that they did during the wartime period — so they weren’t generating as much plutonium.

To what degree were the technical and the social problems intertwined? In the case of Los Alamos we have a lot of documentation from the period which describes the “crisis” of the immediate postwar, when they were hemorrhaging manpower and expertise. We also have some interesting documentation that implies the military was worried about what a postwar management situation might look like, if it was out of the picture — if the nuclear complex was to be run by civilians (as the Atomic Energy Act specified), they wanted to make sure that the key aspects of the military production of nuclear weapons were in “reliable” hands. In any case, the infrastructure, as it was, was in a state of severe decay for about a year as these things got worked out.

I haven’t even touched on the issues of “maintaining” security culture — what goes under the term “OPSEC.” There is so much that could be said about that, too! Image source: (Hanford DDRS #N1D0023596)

The result of all of this was the greatest secret of the early postwar: the United States had only a small amount of fissile material, a few parts of other bomb components, and no ready-to-use nuclear weapons. AEC head David Lilienthal recalled talking with President Truman in April 1947:

We walked into the President’s office at a few moments after 5:00 p.m. I told him we came to report what we had found after three months, and that the quickest way would be to ask him to read a brief document. When he came to a space I had left blank, I gave him the number; it was quite a shock. We turned the pages as he did, all of us sitting there solemnly going through this very important and momentous statement. We knew just how important it was to get these facts to him; we were not sure how he would take it. He turned to me, a grim, gray look on his face, the lines from his nose to his mouth visibly deepened. What do we propose to do about it?

The “number” in question was the quantity of atomic bombs ready to use in an emergency. And it was essentially zero. Thus the early work of the AEC was re-building a postwar nuclear infrastructure. It was expensive and slow going, but by 1950 the US could once again produce atomic bombs in quantity, and was in a position to start producing many new types of nuclear weapons. The tedious work of “maintenance,” in other words, was actually necessary for the future work of “innovation” that they wanted to happen.

Episode 2: The post-Cold War question

Fast-forward to the early 1990s, and we’re once again at a key juncture in questions about the weapons complex. The Soviet Union is no more. The Cold War is over. What is the future of the American nuclear program? Does the United States still need two nuclear weapon design laboratories? Does it still need a diverse mix of warheads and launchers? Does it still need the “nuclear triad”? All of these questions were on the table.

What shook out was an interesting situation. The labs would be maintained, shifting their efforts away from the activities we might normally associate with innovation and invention, and towards activities we might instead associate with maintenance. So environmental remediation became a major thrust, as did the work of “Science-Based Stockpile Stewardship,” which is a fancy term for maintaining the nuclear stockpile in a state of readiness. The plants that used to assemble nuclear weapons have been converted into places where weapons are disassembled, and I’ve found it interesting that the imagery associated with these is quite different from the typical “innovation” imagery — the people shown in the pictures are “technicians” more than “scientists,” and the prevalence of women seems (in my anecdotal estimation) much higher.

The question of what to do with the remaining stockpile is the most interesting. I pose the question like this to my undergraduate engineers: imagine you were given a 1960s Volkswagen Beetle and were told that it once ran, and that you could be pretty sure it would run again, but you have never actually run that particular car. Now imagine you have to keep that Beetle in a garage for, say, 20 or 30 more years. You can remove any part from the car and replace it, if you want. You can run tests of any sort on any single component, but you can’t start the engine. You can build a computer model of the car, based on past experience with similar cars, too. How much confidence would you have in your ability to guarantee, with near-100% certainty, that the car would be able to start at any particular time?

Their usual answer: not a whole lot. And that’s without telling them that the engine in this case is radioactive, too.

Graph of Livermore nuclear weapons designers with and without nuclear testing experience. The PR spin put on this is kind of interesting in and of itself: “Livermore physicists with nuclear test experience are reaching the end of their careers, and the first generation of stockpile stewards is in its professional prime.” Source: Arnie Heller, “Extending the Life of an Aging Weapon,” Science & Technology Review (March 2012).

Like all analogies, it has its inexact aspects, but it sums up some of the issues with these warheads. Nuclear testing by the United States ceased in 1992. It might come back someday (who knows?), but the weapons scientists don’t seem to be expecting that. The warheads themselves were not built to last indefinitely — during the Cold War they would be phased out every few decades. They contain all sorts of complex materials and substances, some of which are toxic and/or radioactive, some of which are explosive, some of which are fairly “exotic” as far as materials go. Plutonium, for example, is metallurgically one of the most complex elements on the periodic table, and it self-irradiates, slowly changing its own chemical structure.

Along with these perhaps inherent technical issues is a social one: the loss of knowledge. The number of scientists and engineers at the labs who have had nuclear testing experience is at this point approaching zero, if it isn’t already there. There is evidence that some of the documentation procedures were less than adequate: take the case of the mysterious FOGBANK, some kind of exotic “interstage” material used in some warheads, which required a multi-million-dollar effort to come up with a substitute when it was discovered that the United States no longer had the capability of producing it.

So all of this seems to have a pretty straightforward message, right? That maintenance of the bomb is hard work and continues to be so. But here’s the twist: not everybody agrees that the post-Cold War work is actually “maintenance.” That is, how much of the stockpile stewardship work is really just maintaining existing capability, and how much is expanding it?

Old warheads in new bottles? Summary of the new features of the B-61 Mod 12, via the New York Times.

The B-61 Mod 12 has been in the news a bit lately for this reason. The B-61 is a very flexible warhead system that allows for a wide range of yield settings in a gravity bomb. The Mod 12 has involved, among other things, an upgraded targeting and fuzing capability for this bomb. This makes the weapon very accurate and allows it to penetrate to some degree into the ground before detonating. The official position is that this upgrade is necessary for the maintenance of the US deterrence posture (it allows it, for example, to credibly threaten underground bunkers with low-yield weapons that would reduce collateral damage). So now we’re in a funny position: we’re upgrading (innovating?) part of a weapon in the name of maintaining a policy (deterrence), and ideally with minimal modifications to the warhead itself (because officially we are not making “new nuclear weapons”). Some estimates put the total cost of the broader nuclear modernization program at a trillion dollars — which would be a considerable fraction of the total money spent on the entire Cold War nuclear weapons complex.

There are other places where this “maintenance” narrative has been challenged as well. In the post-Cold War period, the labs argued that they could only guarantee the stockpile’s reliability if they got some new facilities. Los Alamos got DARHT, which lets them take 3-D pictures of implosions in real time, and Livermore got NIF, which lets them play with fusion micro-implosions using a giant laser. A lot of money has been put forward for this kind of “maintenance” activity, and as you can imagine there has been a lot of resistance. With all of it have come allegations that, again, this is not really necessary for “maintenance,” that this is just innovation under the guise of maintenance. And if that’s the case, then that might be a policy problem, because we are not supposed to be “innovating” nuclear weapons anymore — that’s the sort of thing associated with arms races. For this reason, one major effort to create a warhead design that was alleged to be easier to maintain, the Reliable Replacement Warhead, was killed by the Obama administration in 2009.

“But will it work?” With enough money thrown at the problem, the answer is yes, according to Los Alamos. Source: National Security Science (April 2013).

So there has been a lot of money in the politics of “maintenance” here. What I find interesting about the post-Cold War moment is that “maintenance,” rather than being the shabby category that we usually ignore, has been moved to the forefront in the case of nuclear weapons. It is relatively easy to argue, “yes, we need to maintain these weapons, because if we don’t, there will be terrible consequences.” Billions of dollars are being allocated, even while other infrastructures in the United States are allowed to crumble and decline. The labs in particular have to walk a funny line here. They have an interest in emphasizing the need for further maintenance — it’s part of their reason for existence at this point. But they also need to project confidence, because the second they start saying that our nukes don’t work, they are going to run into even bigger policy problems.

And yet, it has been strongly alleged that under this cloak of maintenance, a lot of other kinds of activities might be taking place as well. So here is perhaps an unusual politics of maintenance — one of the few places I’ve seen where there is a substantial community arguing against it, or at least against using it as an excuse to “innovate” on the sly.


My conversation on secrecy with a Super Spook

by Alex Wellerstein, published March 18th, 2016

One of the unexpected things that popped up on my agenda this last week: I was asked to give a private talk to General Michael Hayden, the former director of the National Security Agency (1999-2005) and of the Central Intelligence Agency (2006-2009). Hayden was at the Stevens Institute of Technology (where I work) giving a talk in the President’s Distinguished Lecture Series, and as with all such things, part of the schedule was to have him get a glimpse of the kinds of things we are up to at Stevens that he might find interesting.

The group that met with General Michael Hayden last Wednesday. Hayden is second from left at the far side of the table. The President of Stevens, Nariman Farvardin, is nearest to the camera. I am at the table, at the back. All photos by Jeffrey Vock photography, for Stevens.

What was strange, for me, was that I was being included as one of those things. I am sure some of my readers and friends will say, “oh, of course they wanted you there,” but I am still a pretty small fry over here, an assistant professor in the humanities division of an engineering school. The other people who gave talks either ran large laboratories or departments with obvious connections to the kinds of things Hayden was doing (e.g., in part because of its proximity to the Hudson River, Stevens does a lot of very cutting-edge work in monitoring boat and aerial vehicle traffic, and its Computer Science department does a huge amount of work in cybersecurity). That a junior historian of science would be invited to sit “at the table” with the General, the President of the Institute, and a handful of other Very Important People is not at all obvious, so I was surprised and grateful for the opportunity.

So what does the historian of secrecy say to one of the “Super Spooks,” as my colleague, the science writer (and critic of US hegemony and war) John Horgan, dubbed Hayden? I pitched two different topics to the Stevens admin — one was a talk about what the history of secrecy might tell us about the way in which secrecy should be talked about and secrecy reform should be attempted (something I’ve been thinking about and working on for some time, a policy-relevant distillation of my historical research), the other was a discussion of NUKEMAP user patterns (which countries bomb who, using a dataset of millions of virtual “detonations” from 2013-2016). They opted for the first one, which surprised me a little bit, since it was a lot less numbers-driven and outward-facing than the NUKEMAP talk.

Yours truly. As you will notice, there was a lot of great gesturing going on all around while I was talking. I am sure a primatologist could make something out of this.

The talk I pitched to the General covered a few distinct points. First, I felt I needed to quickly define what Science and Technology Studies (STS) was, as that is the program I was representing, and it is not an extremely well-known discipline outside of academia. (The sub-head was, “AKA, Why should anyone care what a historian of science thinks about secrecy?”) Now, those who practice STS know that there have been quite a few disciplinary battles over what STS is meant to be, but I gave the basic overview: STS is an interdisciplinary approach by humanists and social scientists that studies science and technology and their interactions with society. STS is sort of an umbrella discipline that blends the history, philosophy, sociology, and anthropology of science and technology, but it is also influenced, at times, by fields like psychology, political science, and law, among many others. It is generally empirical (but not always), and usually qualitative, but sometimes quantitative in its approach (e.g., bibliometrics, computational humanities). In short, I pitched, while lots of people have opinions about how science and technology “work” and what their relationship is with society (broadly construed), STS actually tries to apply academic rigor (of various degrees and definitions) to understanding these things.

Hayden was more receptive to the value of this than I might have guessed, but this seemed in part to be because he majored in history (for both a B.A. and M.A., Wikipedia tells me), and has clearly done a lot of reading around in political science. Personally I was pretty pleased with this, just because we historians, especially at an engineering school, often get asked what one can do with a humanities degree. Well, you can run the CIA and the NSA, how about that!


I then gave a variation on talks I have given before on the history of secrecy in the United States, and what some common misunderstandings are. First, I pointed out that there are some consequences in just acknowledging that secrecy in the US has a history at all — that it is not “transhistorical,” having existed since time immemorial. You can pinpoint the beginnings of modern secrecy fairly precisely: World War I saw the emergence of many trends that became common later, like the focus on “technical” secrets and the first law (the Espionage Act) that applied to civilians as well as the military. World War II saw a huge, unrelenting boom of the secrecy system, with (literally) overflowing amounts of background checks (the FBI had to requisition the DC Armory and turn it into a file vault), the rise of technical secrecy (e.g., secrecy of weapons designs), the creation of new classification categories (like “Top Secret,” created in 1944), and, of course, the Manhattan Project, whose implementation of secrecy was in some ways quite groundbreaking. At the end of World War II, there was a curious juncture where some approaches to classification were handled in a pre-Cold War way, where secrecy was really just a temporary situation due to ongoing hostilities, while others started to shift towards a more Cold War fashion, where secrecy became a permanent facet of American life.

The big points are — and this is a prerequisite for buying anything else I have to say about the topic — that American secrecy is relatively new (early-to-mid 20th century forward), that it had a few definite points of beginning, that the assumption that the world was full of increasingly dangerous information that needed government regulation was not a timeless one, and that it had changed over time in a variety of distinct and important ways. In short, if you accept that our secrecy is the product of people acting in specific, contingent circumstances, it stops you from seeing secrecy as something that just “has to be” the way it is today. It has been otherwise, it could have been something else, it can be something else in the future: the appeal to contingency, in this case, is an appeal to agency, that is, the ability for human beings to modify the circumstances under which they find themselves. This is, of course, one of the classic policy-relevant “moves” by historians: to try and show that the way the world has come to be isn’t the only way it had to be, and to try and encourage a belief that we can make choices for how it ought to be going forward.


General Hayden seemed to accept all of this pretty well. I should note that throughout the talk, he interjected with thoughts and comments routinely. I appreciated this: he was definitely paying attention, to me and to the others. I am sure he has done things like this all the time, visiting a laboratory or university, being subjected to all manner of presentations, and by this point he was post-lunch, a few hours before giving his own talk. But he stayed with it, for both me and the other presenters.

The rest of my talk (which was meant to be only 15 minutes, though I think it ran closer to 25 with all of the side-discussions) was framed as “Five myths about secrecy that inhibit meaningful policy discussion and reform.” I’m not normally prone to the “five myths” style of talking about this (it is more Buzzfeed than academic), but for the purpose of quickly getting a few arguments across I thought it made for an OK framing device. The “myths” I laid out were as follows:

Myth: Secrecy and democracy necessarily conflict. This is the one that will make some of my readers blanch at first read, but my point is that there are areas of society where some forms of secrecy need to exist in order to encourage democracy in the first place, and there are places where transparency can itself be inhibiting. The General (unsurprisingly?) was very amenable to this. I then make the move that the trick is to make sure we don’t get secrecy in the areas where it does conflict with democracy. The control of information can certainly conflict with the need for public understanding (and right-to-know) that makes an Enlightenment-style democracy function properly. But we needn’t see it as an all-or-nothing thing — we “just” have to make sure the secrecy is where it ought to be (and with proper oversight), and the transparency is where it ought to be. Hayden seemed to agree with this.

I suspect I look like this more than I wish I did.

Myth: Secrecy and security are synonymous. Secrecy is not the same thing as security, but they are often lumped together (both consciously and not). Secrecy is the method; security is the goal. There are times when secrecy promotes security — and there are times in which secrecy inhibits it. This, I noted, was one of the conclusions of the 9/11 Commission Report as well: that lack of information sharing had seriously crippled American law enforcement and intelligence with regards to anticipating the attacks of 2001. I also pointed out that the habitual use of secrecy led to its devaluation — that when you start stamping “TOP SECRET” on everything, it starts to mean a lot less. The General strongly agreed with this. He also alluded to the fact that nobody ought to be storing any kind of government e-mails on private servers these days, because the system was so complicated that literally nobody ever knew if they were going to be generating classified information or not — and that this is a problem.

I also noted that an impression, true or not, that secrecy was being rampantly misapplied has historically had tremendously negative effects on public confidence in governance, which can lead to all sorts of difficulties for those tasked with said governance. Hayden took to this point specifically, thought it was important, and brought up an example. He said that the US compromise of the 1970s was to get Congressional “buy-in” to any Executive or federal classified programs through oversight committees. He argued that the US, in this sense, was much more progressive with regards to oversight than many European security agencies, who essentially operate exclusively under the purview of the Executive. He said that he thought the NSA had done a great job of getting everything cleared by Congress, of making a public case for doing what it did. But he acknowledged that clearly this effort had failed — the public did not have a lot of confidence that the NSA was being properly overseen, or that its actions were justified. He viewed this as a major problem for the future: how will US intelligence agencies operate within the expectations of the American people? I seem to recall him saying (I am reporting this from memory) that this was just part of the reality that US intelligence and law enforcement had to learn to live with — that it might hamper them in some ways, but it was a requirement for success in the American context.

I forget what provoked this response, but I couldn’t not include it here.

Myth: Secrecy is a wall. This is a small intervention I made in terms of the metaphors of secrecy. We talk about it as walls, as cloaks, as curtains. The secrecy-is-a-barrier metaphor is perhaps the most common (and gets paired a lot with information-is-a-liquid, e.g. leaks and flows), and, if I can channel the thesis of the class I took with George Lakoff a long time ago, metaphors matter. There is not a lot you can do with a wall other than tolerate it, tear it down, find a way around it, etc. I argued here that secrecy definitely feels like a wall when you are on the other “side” of it — but it is not one. If it were one, it would be useless for human beings (the only building made of nothing but walls is a tomb). Secrecy is more like a series of doors. (Doors, in turn, are really just temporary walls. Whoa.) Doors act like walls if you can’t open them. But they can be opened — sometimes by some people (those with keys, if they are locked), sometimes by all people (if they are unlocked and public). Secrecy systems shift and change over time. Who has access to the doors changes as well. This comes back to the contingency issue again, but it also refocuses our attention less on the fact of secrecy itself than on how it is used, when access is granted versus withheld, and so on. As a historian, my job is largely to go through the doors of the past that used to be locked, but are now open for the researcher.

Myth: Secrecy is monolithic. That is, “Secrecy” is one thing. You have it or you don’t. As you can see from the above, I don’t agree with this approach. It makes government secrecy about us-versus-them (when in principle “they” are representatives of “us”), and it makes it seem like secrecy reform is the act of “getting rid of” secrecy. It makes secrecy an all-or-nothing proposition. This is my big, overarching point on secrecy: it isn’t one thing. Secrecy is itself a metaphor; it derives from the Latin secerno: to separate, part, sunder; to distinguish; to set aside. It is about dividing the world into categories of people, information, places, things. This is what “classification” is about and what it means: you are “classifying” some aspects of the world as being accessible only to some people of the world. The metaphor doesn’t become a reality, though, without practices (and here I borrow from anthropology). Practices are the human activities that make the idea or goal of secrecy real in the world. Focus on the practices, and you get at the heart of what makes a secrecy regime tick; you see what “secrecy” means at any given point in time.

And, per my earlier emphasis on history, this is vital: looking at the history of secrecy, we can see the practices move and shift over time, some coming into existence at specific points for specific reasons (see, e.g., my history of secret atomic patenting practices during World War II), some going away over time, some getting changed or amplified (e.g., Groves’ amplification of compartmentalization during the Manhattan Project — the idea preceded Groves, but he was the one who really imposed it on an unprecedented scale). We also find that some practices are the ones that really screw up democratic deliberation, and some of them are the ones we think of as truly heinous (like the FBI’s COINTELPRO program). But some are relatively benign. Focusing on the practices gives us something to target for reform, something other than saying that we need “less” secrecy. We can enumerate and historicize the practices (I have identified at least four core practices that seem to be at the heart of any secrecy regime, whether making an atomic bomb or a fraternity’s initiation rites, but for the Manhattan Project there were dozens of discrete practices that were employed to try and protect the secrecy of the work). We can also identify which practices are counterproductive, which ones fail to work, which ones produce unintended consequences. A practice-based approach to secrecy, I argue, is the key to transforming our desires for reform into actionable results.

Hayden’s lecture in De Baum auditorium, at Stevens.

Myth: The answer to secrecy reform is balance. A personal pet peeve of mine is appeals to “balance” — we need a “balance of secrecy and transparency/openness/democracy,” what have you. It sounds nice. In fact, it sounds so nice that literally nobody will disagree with it. The fact that the ACLU and the NSA can both agree that we need to have balance is, I think, evidence that it means nothing at all, that it is a statement with no consequences. (Hayden seemed to find this pretty amusing.) The balance argument commits many of the sins I’ve already enumerated. It assumes secrecy (and openness) are monolithic entities. It assumes you can get some kind of “mix” of these pure states (but nobody can articulate what that would look like). It encourages all-or-nothing thinking about secrecy if you are a reformer. Again, the antidote for this approach is a focus on practices and domains: we need practices of secrecy and openness in different domains in American life, and focusing on the effects of these practices (or their lack of existence) gives us actionable steps forward.

I should say explicitly: I am not an activist in any way, and my personal politics are, I like to think, rather nuanced and subtle. I am sure one can read a lot of “party lines” into the above positions if one wants to, but I generally don’t mesh well with any strong positions. I am a historian and an academic — I do a lot of work trying to see the positions of all sides of a debate, and it rubs off on me: people of all positions can make reasonable arguments, and there are likely no simple solutions. That being said, I don’t think the current system of secrecy works very well, either from the position of American liberty or the position of American security. As I think I make clear above, I don’t accept the idea that these are contradictory goals.

Hayden seemed to take my points well and largely agree with them. In the discussion afterwards, some specific examples were brought up. I was surprised to hear (and he said it later in his talk, so I don’t think this is a private opinion) that he sided with Apple in the recent case regarding the FBI and “cracking” the iPhone’s security. He felt that while the legal and Constitutional issues probably sat in the FBI’s camp, he thought the practice of it was a bad idea: the security compromise for all iPhones would be too great to be worth it. He didn’t buy the argument that you could just do it once, or that it would stay secret once it was done. I thought this was a surprising position for him to take.

In general, Hayden seemed to agree that 1. the classification system as it exists is not working efficiently or effectively, 2. that over-classification is a real problem and has led to many of the huge issues we currently have with it (he called the Snowden leaks “an effect and not a cause”), and 3. that people in the government are going to have to understand that the “price of doing business” in the United States is accepting that you will have to make compromises in what you can know and what you can do, on account of the needs of our democracy.

Hayden’s last slide: “Buckle up: It’s going to be a tough century.” Though I know he’d agree that the last one was no walk in the park, either…

Hayden then went and gave a very well-attended talk followed by a Q&A session. I live-Tweeted the whole thing; I have compiled my tweets into a Storify, if you want to get the gist of what he said. He is also selling a new book, which I suspect has many of these same points in it.

My concluding thoughts: I don’t agree with a lot of Hayden’s positions and actions. I am a lot less confident than he is that the NSA’s work with Congress, for example, constitutes appropriate oversight (it is plainly clear that Congressional committees can be “captured” by the agencies they oversee, and with regards to the NSA in particular, there seems to have been some pretty explicit deception involved in recent years). I am not at all confident that drone strikes do a net good in the regions in which we employ them. I am deeply troubled by things like extraordinary rendition, Guantanamo Bay, waterboarding, and anything that shades towards torture, a lack of adherence to the laws of war, or a lack of adherence to the basic civil liberties that our Constitution articulates as the American idea. Just to put my views on the table. (And to make it clear, I don’t necessarily think there are “simple” solutions to the problems of the world, of the Middle East, or of America. But I am deeply, inherently suspicious that the answer to any of them involves doing things that are so deeply oppositional to these basic American military and Constitutional values.)

But, then again, I’d never be put in charge of the NSA or the CIA, either, and there’s likely nobody who would ever be put in charge of said organizations that I would agree with on all fronts. What I did respect about Hayden is that he was willing to engage. He didn’t really shirk questions. He also didn’t take the position that everything that the government has done, or is doing, is golden. But most important, for me, was that he took some rather nuanced positions on some tough issues. The core of what I heard him say repeatedly was that the Hobbesian dilemma — that the need for security trumps all — could not be given an absolute hand in the United States. And while we might disagree on how that works out in practice, that he was willing to walk down that path, and not merely say it as a platitude, meant something to me. He seemed to be speaking quite frankly, and not just toeing a party or policy line. That’s a rare thing, I think, for former high-ranking public officials (and not so long out of office) who are giving public talks — usually they are quite dry, quite unsurprising. Hayden, whether you agree or disagree with him, is neither of these things.