Articles

The President and the Bomb, Part IV

by Alex Wellerstein, published January 8th, 2020

It’s been a while since I’ve written anything in my “President and the Bomb” series (you can read parts I, II, and III if you are interested). It’s not that I’ve been inactive; much to the contrary, I’ve been researching this issue as one of my major research agendas, but most of that work has not yet seen the light of day. You can read a version of my work on comparative nuclear command and control schemes here, to give you a flavor of it. (Perhaps ironically, the more I am researching something professionally, the less likely it is to appear on the blog, because I’m tailoring it for other venues.)

The obviously-posed photograph that the White House released of their “situation room” crew “monitoring developments in the raid that took out Islamic State leader Abu Bakr al-Baghdadi.” I use this just to illustrate that in a situation like the one this post describes, the number of people — and ideas — in the room is likely going to be very limited, and intellectual, ideological, and any other forms of diversity are likely to be very low. And only one person — the one at the head of the table — actually makes the nuclear use decision. Source: Washington Post, October 2019.

The recent weeks’ events — the allegedly Iranian-backed attack on the US Embassy in Iraq, the assassination of General Suleimani, and the retaliation by Iranian forces against US military bases in Iraq — have me (like everyone else who is not a warmonger) feeling uneasy. Not only because it looks like careless escalation, but because it fits well into how I’ve been thinking about what one of the most probable “next uses” of nuclear weapons might be, and what a lot of our existing thinking about “presidential control” gets wrong, in my opinion.

Usually the question of “whose finger should be on the button” is framed in terms of what is sometimes called the “crazy President problem.” This is not a reference to the current President (except when it is deliberately meant that way); it’s an imagined scenario that goes like this: the US President wakes up one day and, out of the blue, decides to start thermonuclear war. Do the generals comply? (Note that some of the other characters sometimes introduced into this — like “Does the Secretary of Defense refuse?” — are red herrings, because they are not strictly in the chain of command. See my Part III post.)

I get the rhetorical attraction of this way of framing things: it makes the issue of unilateral nuclear control very acute. But it’s not very realistic. Why not? Because a) that isn’t how mental illness works (it tends not to flare up in a totally unexpected way among otherwise “sane” people), and b) this is actually the easiest form of the problem to refute. A general can say, well, if a nuclear use order came “out of context” — this is their term, and means “out of the blue, without any threat against us” — then of course they would refuse it. Which I more or less believe is true, if you imagine this scenario happening at all.

Screenshot of "Trump Denies Asking Staff to Look into Nuking Hurricanes" from Vanity Fair

I don’t know if Trump actually asked his staff, repeatedly, whether or not hurricanes could be nuked — that I find it plausible perhaps says enough about me and my perceptions of him. This is not the normal framing of the “crazy President” problem but you can see how it feeds into it. Screenshot from Vanity Fair, 26 August 2019.

A much more probable scenario for US nuclear first use, for me, looks like this: a crisis builds in a region where there have historically been crises. There are legitimate security threats from and in that region. Something happens that pushes the President to want to respond with something “big.” The military gives him their standard three options (something bland, something insane, something sensible) with the hope it will force a sensible choice. Sound familiar so far? This is what the reporting on the Suleimani assassination says actually happened.

At this point we ask, would “the extreme option” ever be something like a nuclear attack? I very much doubt that it would be what most people think a nuclear option would look like (“wipe country X off the map”). Aside from being unambiguously a war crime (even by the quite flexible standards used by the military to evaluate strikes as war crimes), it just doesn’t match with my perception of how the military (from what little I know of them) think about how nuclear weapons might be plausibly used. So I don’t worry about that.

Could the “extreme option” be, “use a low-yield, high-accuracy nuclear weapon against an underground, unambiguously military site, that is relatively isolated from civilians?” Now we’re getting much more plausible. Most people, I think, would not consider something like this to be a good idea — we’re trained, rightly or wrongly, to see nuclear weapons as inherently “large,” as things that necessarily kill many civilians, and to assume that any first use would spiral out of control. Whether those things are true or not, there are plenty of analysts in academia, think tanks, and the military itself who do not see things this way. They believe nuclear escalation can be avoided, that nukes could just be another tool for the job, and that a low-yield, high-accuracy nuclear weapon (like the B61-12 nuclear gravity bomb, or the proposed Low-Yield Trident) would be useful not only as a deterrent against tactical nuclear use by another nation (which is to say, Russia), but as a tool both for sending a big-but-not-crazy message and for destroying deeply fortified underground facilities.

Image analysis by Hans Kristensen (Federation of American Scientists) of the accuracy of the B61-12 bomb, taken from video stills released by Sandia National Laboratories. Source.

Now there are many good reasons to think that the tradition of non-use for nuclear weapons is a good thing and should be perpetuated as long as possible. The US benefits from non-use more than it would benefit from use becoming normalized to any extent, as the JASON group concluded during the Vietnam War, when there were rumblings that tactical nuclear weapons might improve the US military situation. The US and its military are far more vulnerable to tactical nuclear weapons than many of our enemies (because we tend to centralize our forces), and we have the largest and most advanced conventional military in the world, and so we can afford to eschew low-yield nuclear use. (Remember that our Cold War interest in low-yield nukes was because we felt that the Soviets had overwhelming conventional forces. That’s not the case anymore — we’re the ones with the overwhelming conventional forces, and so we’re the ones that other nations would be tempted to use low-yield nuclear weapons against, as an “equalizer.”)

I have met some of the scholars and analysts who think low-yield nuclear use might not be a horrible idea (they might not say it was a good idea), and I don’t have any problem with said people as people. I can even see their way of thinking, because I’m a historian of science and I’m trained to be sympathetic to nearly any point of view. I can’t tell how many of these people actually think low-yield nuclear use would be a good idea, and how many of them are being academically contrarian because the bulk of academic thought on nuclear weapons supports the idea that they shouldn’t be used. I respect academic contrarians (they keep us on our toes, and skepticism is useful), but in the context of actual policy I think such ideas might actually be dangerous, because the people “at the top” might not realize how academia works, and that contrarian arguments might sound appealing but there are frequently reasons that they are, in fact, not believed by most people who study these topics.

So, to return to the thread, could a low-yield nuclear strike be included among the “extreme” options in such a hypothetical scenario? I think the answer is maybe, though I would still put that as unlikely — but it’s going to depend on who draws up the menu of options. As we’ve seen in the last few years, the assumption that high-profile policymakers are all qualified for their positions, are not zealots, do not have views widely out of line with any form of consensus politics, etc., is totally unwarranted. So it’s possible, though it would be extreme indeed.

But what if, during this same set of options, someone whispers into the President’s ear, “what if we did that plan I mentioned the other day?” That is, what if there was a senior White House advisor who somehow got it into their head that a low-yield nuclear weapon would be a good idea, had talked about it previously to the President, and then injected it into the discussion? Might the President bring it up himself? And in that context, would the generals go along with it?

A surreptitiously-taken photograph of a meeting I attended at the Hoover Institution in January 2019, sponsored by the Nautilus Institute, which featured a talk and discussion with Admiral John Hyten, then head of Strategic Command. I found it very revealing in terms of how the nuclear military views the present administration — as essentially a wonderful, exciting blank check. Photograph by me and my terrible phone camera. Hyten is the military officer just to the top-left of my name tag. To his right are George Shultz and David Holloway; to his left is the wonderful, late Janne Nolan. 

I have little doubt that the generals would probably try to persuade the President that this was a bad idea. I suspect the President’s senior cabinet would also try to do so, though I am less certain about this. But what if the President insisted on the nuclear option?

This isn’t a “crazy President” situation. This is a “the President is advocating for something that there are actually many rational arguments in favor of, in a context that might plausibly justify it” situation. That doesn’t mean it’s not a bad idea, one that could lead to a lot of long-term grief for the United States. But there’s a difference between a “bad order” and “an order that can be legally disobeyed.” 

This is why low-yield weapons make me uncomfortable. Not just because they might “lower the threshold of nuclear use,” the common objection to them. It’s also because they can “remove the ability of the military to refuse to follow an awful order.” A low-yield nuclear weapon, on an accurate delivery vehicle, might plausibly kill very few civilians if used against an isolated target. It wouldn’t necessarily fall outside any of the guidelines of proportionality, and for certain types of targets (again, underground bunkers and facilities) you can make a plausible argument for their military necessity relative to conventional weapons (they increase your chance of success dramatically). I think the military would have a very hard time refusing such an order. Even if they knew it was a bad idea, one that would hurt America diplomatically and, in the long term, militarily.

It sometimes surprises people that when I rank my “most plausible chances for the next nuclear weapons use since Nagasaki,” the idea of the US using one is perhaps at the top of the list. This isn’t because I think ill of the country, or even because it is the only country to have ever used nuclear weapons in war. It’s because I’ve talked to, and listened to, enough analysts (military and civilian) to get the feeling that there are a non-trivial number of voices out there who think nukes are “usable,” and that in a system where you only need to convince a single person (the President of the United States) of that point of view, the possibility of them being used is a lot higher than you might think. (My other main “plausible scenarios” are basically “conventional stand-off with Russia leads to Russia using tactical nuclear weapons in combat” and “North Korea thinks we are going to decapitate them so they attack first”; the likelihood of any of these depends, as always, on the context.)

This is why, in my ideal world, I’d like there to be some kind of additional checks in place on the use of nuclear weapons. At some future point I’ll outline what I think an “ideal system” ought to look like (and I’ll write something on whether No First Use gets us there; I’ve got a post on the history of No First Use proposals in the works), but for now I’ll just say that we need to think not only in terms of massive attacks or “crazy Presidents,” but about the pernicious and highly-plausible (if history is any guide) possibility of somebody with just a bit of bad reasoning in the wrong place at the wrong time. 

News and Notes | Visions

Why NUKEMAP isn’t on Google Maps anymore

by Alex Wellerstein, published December 13th, 2019

When I created the NUKEMAP in 2012, the Google Maps API was amazing.1 It was the best thing in town for creating Javascript mapping mash-ups, cost literally nothing, had an active developer community that added new features on a regular basis, and actually seemed like it was interested in people using their product to develop cool, useful tools.

NUKEMAPs of days gone by: On the left is the original NUKEMAP I made way back in March 2005, which used MapQuest screenshots (and was extremely limited, and never made public) and was done entirely in PHP. I made it for my own personal use and for teaching. At right, the remade original NUKEMAP from 2012, which used the Google Maps API/Javascript.

Today, pretty much all of that is untrue. The API codebase has stagnated in terms of actually useful features being added (many neat features have been removed or quietly deprecated; the new features being added are generally incremental and lame), which is really quite remarkable given that the Google Maps stand-alone website (the one you visit when you go to Google Maps to look up a map or location) has had a lot of neat features added to it (like its 3-D mode) that have not been ported to the API code (which is why NUKEMAP3D is effectively dead — Google deprecated the Google Earth Plugin and has never replaced it, and no other code base has filled the gap).2

But more importantly, the changes to the pricing model that have recently been put in place are, to put it lightly, insane, and punishing if you are an educational web developer who builds anything that people actually find useful.

NUKEMAP gets around 15,000 hits a day on a slow day — a few hundred thousand hits in a typical month — and has done this consistently for over 5 years (and it occasionally has spikes of several hundred thousand page views per day, when it goes viral for whatever reason). While that’s pretty impressive for an academic’s website, it’s what I would call “moderately popular” in Internet terms. I don’t think this puts the slightest strain on Google’s servers (they also run, like, all of YouTube). And from 2012 through 2016, Google didn’t charge a thing for this. Which was pretty generous, and perhaps unsustainable. But it encouraged a lot of experimentation, and something like NUKEMAP wouldn’t exist without that.

In 2016, they started charging. It wasn’t too bad — at most, my bill was around $200 a month. Even that is pretty hard to do out-of-pocket, but I’ve had the good fortune to be associated with an institution (my employers, the College of Arts and Letters at the Stevens Institute of Technology) that was willing to foot the bill. 

But in 2018, Google changed its pricing model, and my bill jumped to more like $1,800 per month. As in, over $20,000 a year. Which is several times my main hosting fees (for all of my websites).

I reached out to Google to find out why this was. Their new pricing sheet is… a little hard to make sense of. Which is sort of why I didn’t see this coming. They do have a “pricing calculator,” though, that lets you see exactly how terrible the pricing scheme is, though it is a little tricky to find and requires having a Google account to access. But if you start playing with the “dynamic map loads” button (there are other charges, but that’s the big one) you can see how expensive it gets, quickly. I contacted Google for help in figuring all this out, and they fobbed me off onto a non-Google “valued partner” who was licensed to deal with corporations on volume pricing. Hard pass, sorry.
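
(To make the scale concrete, here’s the back-of-the-envelope version, reconstructed from their published 2018 rate card rather than from my actual invoices, so treat the numbers as approximate: dynamic map loads were priced at about $7 per 1,000, offset by a $200 monthly credit. That credit covers under 30,000 loads a month — a day or two of NUKEMAP traffic — and a site doing, say, 285,000 loads a month works out to 285 × $7 − $200 ≈ $1,800.)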

Google for Nonprofit’s eligibility standards — academics need not apply.

I know that Google in theory supports people using their products for “social causes,” and if one is at a non-profit (as I am), you can apply for a “grant” to defray the costs, assuming Google agrees that you’re doing good. I don’t know how they feel about the NUKEMAP, but in any case, it doesn’t matter: people at educational institutions (even not-for-profit ones, like mine) are disqualified from applying. Why? Because Google wants to capture the educational market in a revenue-generating way, and so directs you to their Google for Education site, which as you will quickly find is based on a very different sort of model. There’s no e-mail contact on the site, as an aside: you have to claim you are representing an entire educational institution (I am not) and that you are interested in implementing Google’s products on your campus (I am not), and if you do all this (as I did, just to get through to them) you can finally talk to them a bit.

There is literally nothing on the website that suggests there is any way to get Google Maps API credit, but they do have a way to request discounted access to the Google Cloud Platform, which appears to be some kind of machine-learning platform, and after sending an e-mail they did say that you could apply for Google Cloud Platform funds to be used for Google Maps API.

By which point I had already, in my heart, given up on Google. It’s just not worth it. Let me outline the reasons:

  • They clearly don’t care about small developers. That much is pretty obvious if you’ve tried to develop with their products. Look, I get that licensing to big corporations is the money-maker. But Google pretends to be developing for more than just them… and then doesn’t follow through.
  • They can’t distinguish between universities as entities and academics as individual researchers. There’s a big difference there, in terms of scale, goals, and resources. I don’t make university IT policy; I do research.
  • They are fickle. It’s not just the fact that they change their pricing schemes rapidly, it’s not just that they deprecate products willy-nilly. It’s that they push out new products, encourage communities to use them to make “amazing” things, and then don’t support them well over the long term. They let cool projects atrophy and die. Sometimes they sell them off to other companies (e.g., SketchUp), who then totally change them and the business model. Again, I get it: Google’s approach is to throw things at the wall and hope they stick; it believes in disruption more than infrastructure, etc., etc. But that makes it pretty hard to justify putting all of your eggs in their basket.
  • I don’t want to worry about whether Google will think my work is a “social good,” I don’t want to worry about re-applying every year, I don’t want to worry that the branch of Google that helps me out might vanish tomorrow, and so on. Too much uncertainty. Do you know how hard it is to get in contact with a real human being at Google? I’m not saying it’s impossible — they did help me waive some of the fees that came from me not understanding the pricing policy — but that took literally months to work out, and in the meantime they sent a collection agency after me.

But most of all: today there are perfectly viable alternatives. Which is why I don’t understand their pricing model change, except in terms of, “they’ve decided to abandon small developers completely.” After a little scouting around, I decided that MapBox, whose rates are more like what Google’s used to be, completely fit the bill, and that Leaflet, an open-source Javascript library, could make for a very easy conversion. It took a little work to make the conversion, because Leaflet out of the box doesn’t support the drawing of great circles, but I wrote a plugin that does it.
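
For the curious, the gist of the great-circle trick is just to sample points along the geodesic and hand them to an ordinary Leaflet polygon. A minimal sketch of the idea (simplified, with illustrative names — this is not the actual plugin code, and it ignores edge cases like circles that cross the antimeridian or enclose a pole):

    var EARTH_RADIUS = 6371000; // mean Earth radius, in meters

    // Sample points along a circle of true ground distance around a
    // center point, using the spherical destination-point formula.
    function greatCirclePoints(center, radiusMeters, segments) {
      segments = segments || 360;
      var lat1 = center.lat * Math.PI / 180;
      var lng1 = center.lng * Math.PI / 180;
      var d = radiusMeters / EARTH_RADIUS; // angular radius, in radians
      var points = [];
      for (var i = 0; i <= segments; i++) {
        var brng = 2 * Math.PI * i / segments; // bearing out from center
        var lat2 = Math.asin(Math.sin(lat1) * Math.cos(d) +
                             Math.cos(lat1) * Math.sin(d) * Math.cos(brng));
        var lng2 = lng1 + Math.atan2(
            Math.sin(brng) * Math.sin(d) * Math.cos(lat1),
            Math.cos(d) - Math.sin(lat1) * Math.sin(lat2));
        points.push([lat2 * 180 / Math.PI, lng2 * 180 / Math.PI]);
      }
      return points;
    }

    // e.g., a 5 km ring around a draggable marker:
    // L.polygon(greatCirclePoints(marker.getLatLng(), 5000)).addTo(map);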

NUKEMAP as of this moment (version 2.65; I make small incremental changes regularly), with its Mapbox GL + Leaflet codebase. Note that a while back I started showing the 1 psi blast radius as well, because I decided that omitting it caused people to underestimate the area that would be plausibly affected by a nuclear detonation.

Now, even MapBox’s pricing scheme can add up for my level of map loads, but they’ve been extremely generous in terms of giving me “credits” because they support this kind of work. And getting that worked out was a matter of sending an e-mail and then talking to a real person on the phone. And said real person has been extremely helpful, easy to contact, and even reaches out to me at times when they’re rolling out a new code feature (like Mapbox GL) that he thinks will make the site work better and cheaper. Which is to say: in every way, the opposite of Google. 

So NUKEMAP and MISSILEMAP have been converted entirely over to MapBox+Leaflet. The one function that wasn’t easy to port over was the “Humanitarian consequences” tool (which relies on Google’s Places library), but I’ll eventually figure out a way to reintegrate that as well.

More broadly, the question I have to ask as an educator is: would I encourage a student to develop with the Google Maps API if they were thinking about trying to make a “break-out” website? Easy answer: no way. With Google, becoming popular (even just “moderately popular”) is a losing proposition: you will find yourself owing them a lot of money. So I won’t be teaching Google Maps in my data visualization course anymore — we’ll be using Leaflet from now on. I apologize for venting, but I figured that even non-developers might be interested in knowing how these things work “under the hood” and what kinds of considerations go into the choice of making a website these days.

A simple example of the kind of thing you can do with NUKEMAP’s new fallout dose exposure tool. At top, me standing outside my office (for an entire 24 hours) in the wake of a 20 kt detonation in downtown NYC, using the weather conditions that exist as I am posting this: I am very, very dead. At bottom, I instead run quickly into the basement bowling alley in the Howe Center at Stevens (my preferred shelter location, because it’s fairly deep inside a multi-story rocky hill, on top of which is a 13-story building), and the same length of time gives me, at most, a slight up-tick in long-term cancer risk.

More positively, I’m excited to announce that a little while back, I added a new feature to NUKEMAP, one I’ve been wanting to implement for some time now. The NUKEMAP’s fallout model (the Miller model) has always been a little hard to make intuitive sense out of, other than as “a vague representation of the area of contamination.” I’ve been exploring some other fallout models that could be implemented as well, but in the meantime, I wanted to find a way to make the current version (which has the advantage of being very quick to calculate and render) more intuitively meaningful.

The Miller model’s contours give the dose intensity (in rad/hr) at H+1 hour. So for the “100 rad/hr” contour, that means: “this area will be covered by fallout that, one hour after detonation, had an intensity of 100 rad/hr, assuming that the fallout has actually arrived there at that time.” So to figure out what your exposure on the ground is, you need to calculate when the fallout actually arrives to you (on the wind), what the dose rate is at time of arrival, and then how that dose rate will decrease over the next hours that you are exposed to it. You also might want to know how that is affected by the kind of structure you’re in, since anything that stands between you and the fallout will cut your exposure a bit. All of which makes for an annoying and tricky calculation to do by hand.
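
Annoying by hand, but easy for a computer. Here’s the gist of the arithmetic as a sketch (simplified, with variable names of my own invention — this is not the actual NUKEMAP code): the dose rate decays roughly as R(t) = R1 · t^-1.2, with t in hours after detonation and R1 the H+1 reference intensity, so the total dose is just the integral of that curve between arrival and departure, divided by the shelter’s protection factor.

    // Total dose (rads) absorbed between tArrive and tLeave, both in
    // hours after detonation, given the Miller-model H+1 reference
    // intensity R1 (in rad/hr). Integrating R1 * t^-1.2 gives:
    //   dose = (R1 / 0.2) * (tArrive^-0.2 - tLeave^-0.2)
    // The t^-1.2 power law is itself an approximation, and a rough
    // one at very early times.
    function falloutDose(R1, tArrive, tLeave, protectionFactor) {
      var openFieldDose = (R1 / 0.2) *
          (Math.pow(tArrive, -0.2) - Math.pow(tLeave, -0.2));
      return openFieldDose / (protectionFactor || 1); // PF is just a divisor
    }

    // e.g., fallout arriving 30 minutes after detonation on a
    // 100 rad/hr (H+1) contour, staying put for the next 24 hours:
    // falloutDose(100, 0.5, 24.5, 1);  // ~310 rads out in the open
    // falloutDose(100, 0.5, 24.5, 10); // ~31 rads behind a PF of 10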

So I’ve added a feature to the “Probe location” tool, which allows you to sample the conditions at any given distance from ground zero. It will now calculate the time of fallout arrival (which is based on the distance and the wind settings), the intensity of the fallout at the time of arrival, and then allow you to see what the total dose would be if you were in that area for, say, 24 hours after detonation. It also allows you to apply a “protection factor” based on the kind of building you are in (the protection factor is just a divisor: a protection factor of 10 reduces the total exposure by a factor of 10). All of which can be used to answer questions about the human effects of fallout, and the situations in which different kinds of shelters can be effective, or not.3

There are some more NUKEMAP features actively in the works, as well. More on those, soon.

  1. For non-coders: an API is a code library that lets third-party developers use someone else’s services. So the Google Maps API lets you develop applications that use Google Maps: you can say, “load Google Maps into this part of the web page, add an icon to it, make the icon draggable, and when someone clicks a button over here, draw circles around that icon that go out to a given radius, and color the circles this way and that way.” That’s the basics of NUKEMAP’s functionality, more or less.
  2. Before people e-mail me about how CesiumJS fills the Google Earth Plugin gap — it doesn’t, because it doesn’t give you the global coverage of 3D buildings that you need to make sense of the size of a mushroom cloud. If they change that someday, I’ll take the time to port the code, but I don’t see many signs that this is going to happen, because global 3D building shapes are still something that only Google seems to own. If you do want to render volumetric mushroom clouds in the stand-alone Google Earth program, there is a (still experimental and incomplete) feature in NUKEMAP for exporting cloud shapes as KMZ files. See the NUKEMAP3D page for more information on how to use this.
  3. I’ll eventually update the NUKEMAP FAQ about how this works, but it just uses Wigner’s standard t^-1.2 fission-product decay rate formula.
Redactions

John Wheeler’s H-bomb Blues

by Alex Wellerstein, published December 3rd, 2019

It’s been forever since I’ve updated on here, and I wanted to let you know that not only have I not abandoned this blog project, I’m planning to do a lot more blogging in 2020. I ended up taking a mostly-hiatus from blogging this year both because I got totally overloaded with work in the spring (too much teaching, too much service, hosting a big workshop and expo for the Reinventing Civil Defense project, etc.), and because I needed to get my book totally finished and out the door. But that’s all done now, so I’m looking forward to getting back into things. Lots of exciting things will be happening in 2020, including the publication of my big article on Truman and the Kyoto decision, the publication of my book (Restricted Data: Nuclear Secrecy in the United States), and some other things I can only mysteriously hint at for the moment!

In the meantime, I wanted to announce that an article I’ve been working on for a long time has finally appeared in print: “John Wheeler’s H-bomb Blues,” in the December 2019 issue of Physics Today. It’s a Cold War mystery about an eminent scientist, a secret conspiracy, and six pages of H-bomb secrets that went missing on an overnight train from Philadelphia to Washington, DC. 

Cover page of John Wheeler's H-bomb Blues
There may never be a good time to lose a secret, but some secrets are worse than others to lose, and some times are worse than others to lose them. For US physicist John Archibald Wheeler, January 1953 may have been the absolute worst time to lose the particular secret he lost. The nation was in a fever pitch about Communists, atomic spies, McCarthyism, the House Un-American Activities Committee, Julius and Ethel Rosenberg, and the Korean War. And what Wheeler lost, under the most suspicious and improbable circumstances, was nothing less than the secret of the hydrogen bomb, a weapon of unimaginable power that had first been tested only a month before.

You can read the article for more. If you’re interested in why I wrote it, see this short blog post I wrote for Physics Today as well, which talks about how one gets ahold of Cold War FBI files of famous physicists (it helps if they are dead).

Aside from its mysterious vibe (and lurid details, like Wheeler peering at a stranger on a toilet), one of the wonky things I like about this paper is that for me it serves as sort of an approach to the old historiographical question of, “do individual details matter in history, or does the larger context matter?” You can see variations of this debate all over the place, such as the “Great Men vs. Cultural History” takes. 

The photographs of Wheeler from his FBI file. Everybody looks pretty bad when they’ve been microfilmed.

For me the answer has always been both — history is about both the idiosyncratic details (and individuals) and the broader shifts. And for me the really interesting moments are when those two levels of scale interact. This story, in which a single person loses a tiny amount of paper and huge consequences result, is an example of what I mean by that “interaction.” Wheeler losing the documents does matter in a broad sense (spoiler: it sets off the Oppenheimer security hearings, which have long-standing ramifications for Cold War science), but it only matters because there is a context in which it can matter (a national security state and international situation that imbues six pages of writing with catastrophic meaning). If those six pages of H-bomb secrets were magically transmitted to almost any other historical context, they’d have been meaningless.

But enough on that. What I also wanted to share with you here are some interesting documents, things referenced in the article but not easily available anywhere else. 

The initial memo about the “Wheeler Incident,” January 7, 1953.

First, here is a short (24-page) excerpt from Wheeler’s FBI file, which I received from the FBI in response to a Freedom of Information Act request. I’ve helpfully arranged it chronologically — full FBI files are a terrible, jumbled, repetitive mess that required a lot of careful reading and processing to make sense of. I’ve chosen documents that both illustrate the character and tenor of the “Wheeler incident” investigation, and give some hints as to how important it was seen at the time. The spindly handwriting on some of the pages (e.g., 11) is J. Edgar Hoover’s. (How did the man manage to be creepy in nearly every possible way? Even his handwriting is creepy.) I’ll be releasing the full FBI file sometime in 2020, along with other files in my collection of Cold War physicist FBI files, as part of a collaboration with the Center for History of Physics/Niels Bohr Library and Archives at the American Institute of Physics.1

Wheeler’s classified deposition to the FBI about the “Wheeler incident,” March 3, 1953.

Next, we have Wheeler’s March 3, 1953, deposition to the FBI. I’ve collated two versions of this that I have. Both are from the National Nuclear Security Administration, but they were redacted at different times and with slightly different priorities. The color one redacts more weapons information, the black and white one redacts information about FBI agents. If you put them together you get almost the entire story — some details about the lost document are redacted, but the big picture is there. As I’ve written about before, I sort of love the game of getting multiple, differently-declassified copies of documents and comparing them, not only because one feels like one is learning something forbidden, but also because it lets you get inside the mind of the censor a bit — you can see how different concerns lead to different removals.2

Dean’s letter of displeasure to Wheeler, April 1, 1953.

Next, I offer up Gordon Dean’s reprimand letter to Wheeler (April 1, 1953), telling him that he had been a very bad boy, but that they were going to look the other way. Considering the penalties for mishandling nuclear secrets include prison time and huge fines, an unhappy letter was pretty much a slap on the wrist. But as Dean told the Joint Committee on Atomic Energy, “We do not see anything we can do above that at the moment. We still want him in the program. He is a very valuable man, and we do not know anything else we can do without cutting off our nose to spite our faces.”3

The cover page of Walker’s “Policy and Progress in the H-bomb Program,” January 1, 1953.

Finally — and this is perhaps the real prize here — I offer up a complete (redacted) copy of John Walker’s “Policy and Progress in the H-bomb Program: A Chronology of Leading Events” (January 1, 1953), the conspiratorial H-bomb history that Wheeler was consulting for, and lost several pages from. This was the bureaucratic weapon that was meant to show that Oppenheimer et al. had slowed down the H-bomb program, and is more or less a chronology of work and thought on the H-bomb, with extensive quotations from documents and reports. It is in places heavily redacted. But it’s still pretty interesting on the whole. Reader beware: this work of “history” is biased inasmuch as it does not present the full context of the documents and opinions it quotes. As a result it heavily favors Teller and Wheeler’s views of things. Teller in particular wrote many overly-optimistic memos about how easy it would be to make an H-bomb, which nobody but Teller agreed with. But because his memos dominate the record on this, when you put them all in a list, without discussing their problems and shortcomings, it can look like a very strong case. For that context, Gregg Herken’s Brotherhood of the Bomb and Richard Rhodes’ Dark Sun are much better guides! But this still is a useful document.4

OK, that’s all for now. I may have one more post in 2019 (about why NUKEMAP switched from Google Maps to Mapbox), but otherwise I will see you in 2020!

  1. Document source: Federal Bureau of Investigation, Freedom of Information Act request.
  2. Document source: National Nuclear Security Administration, Freedom of Information Act requests and FOIA Reading Room.
  3. Document source: Papers of Gordon Dean, Records of the US Atomic Energy Commission (RG 326), Box 2, “Classified Reader File, 1953.”
  4. Document source: Records of the Joint Committee on Atomic Energy (RG 128), Series 2, Box 60, Legislative Archives, National Archives and Records Administration, Washington, DC.
Meditations

Notes on the Hawaii false alarm, one year later

by Alex Wellerstein, published January 13th, 2019

Today is the one-year anniversary of the Hawaii false alarm, in which the Hawaii Emergency Management Agency (HI-EMA) sent out a text alert to thousands of Americans in the Hawaiian islands that told them that a ballistic missile was incoming, that they should take shelter, and that “THIS IS NOT A DRILL.”

I’ve spent the last few days in Hawaii, as part of a workshop hosted by Atomic Reporters and the Stanley Foundation, and sponsored by the Carnegie Corporation of New York, that brought together a few experts (I was one of those) with a large number of journalists (“old” and “new” media alike) to talk about the false alarm and its lessons.

You’re looking at the photo documenting the only beach time I got while in Hawaii. Seriously. Thanks to Andrew Futter for taking this picture. You should check out his book, Hacking the Bomb: Cyber Threats and Nuclear Weapons (Georgetown University Press, 2018). I enjoyed getting to spend a few days hanging out with him.

Given that it is supposed to be snowing back home, you’d think a trip to Hawaii would come as a very welcome thing for me, but almost all of my time was spent in windowless rooms, and an eleven-hour flight is no picnic.1 So I spent some time wondering, “why have this workshop here?” I mean, obviously the location is relevant, but practically, what would be different if we had held the same meeting in, say, Los Angeles, or Palo Alto?

Over the days, the answer became very clear. When you are in Hawaii, everyone has a story about their experience of the false alarm. And they’re all different, and they’re all fascinating. On “the mainland,” as they call us, we got only a very small sampling of experiences from those here in Hawaii, often either put together by people who were interested in being very publicly thoughtful about their feelings (like Cynthia Lazaroff, who gave a talk at our workshop, and who wrote up her experience for the Bulletin of the Atomic Scientists), or the kind-of-absurd responses that were used as examples of how ridiculous the whole thing was (e.g., the guy who was trying to put his kids down a manhole). Out here, though, every taxi or Lyft driver has their own experience, along with everyone else.

“Civil Defense Warning Device” sign, from the base of a tsunami warning siren tower, in Kapolei.

A few of the responses, broadly paraphrased by me (I didn’t record them, and this is not systematic), follow. It is worth remembering that these are coming a year later, and one would expect their memories to be significantly altered by the passage of time, the increased knowledge about what happened, and, of course, the knowledge that, while “NOT A DRILL,” it was a false alarm.

“I didn’t think it was real. I thought, if it was a real thing, they would also sound the sirens.”

This was from a Lyft driver, who looked like she was in her twenties. Hawaii has an extensive emergency management infrastructure for tsunamis. Out on a long walk early one morning, I passed by one of the sirens for this, and noticed that they still use the same Civil Defense imagery from the 1950s on their equipment. The people of Hawaii know these sirens well, because they test them on the first day of every month. So the Lyft driver’s response was very interesting to me in this respect: she had expectations about what a “real” emergency would be like, and the ballistic missile alert didn’t meet them.

“I follow the North Korea situation closely, so I assumed it had to be false, because I am sure they wouldn’t just launch a missile like that, because it would be suicidal.”

This is a response that one journalist, and one policy analyst, both independently gave, almost word for word. In this example, people who felt they were connected to the broader context of the US-North Korea situation, and felt they understood North Korea’s strategic aims and options, reasoned that an actual attack from North Korea was unlikely, and thus discounted the alert.

The reasoning behind these “discounting” stories, I might suggest, is terrible. To assume an alert is false because it does not meet your expectations is completely silly unless you are much better informed about what a real alert would look like than our Lyft driver apparently was. Are the ballistic missile system and the tsunami system the same one? Would they use the tsunami alert for a missile alert? Could one system be active and the other sabotaged, malfunctioning, or otherwise not activated? These are big questions! In a real emergency it is not worth betting your life that things aren’t working the way you’d expect them to.

And the “context” justification for not believing it is hubris itself. We only see a portion of the total “context” at any one point. Who knows what happened on the Korean peninsula a few hours ago but hasn’t yet made it to your ears? If you’re in Pacific Command (or can contact someone there), sure, you might know enough context to discount such an alert. Otherwise, it’s foolish to do so.

The fallacy of both of these reasons for discounting the alert, as an aside, was made very clear when I visited the Pearl Harbor Memorial. Conventional wisdom prior to Pearl Harbor was that Japan did not pose a major threat to the US, and would not dare to attack such a country.2 A “war scare” between Japan and the US had risen up and dissipated a few months before the actual attack, leading many to think the threat had passed, and even on the day of the attack, many soldiers and radar operators on Hawaii discounted what their own eyes saw because they thought it must be some kind of exercise, giving up any possibility of defense prior to the main attack.

One of the local journalists we talked to had a more plausible means to discount the alert as false: he could contact a high-enough-ranking member of the Hawaii government. That’s not a bad reason to discount it (though even then, would you bet your life on this official being “in the loop”?), and much of our discussions as a group centered around what the role of journalists ought to be in such a crisis situation, if they had information that was not yet released officially.

“I figured it was probably false, but I went into my bathtub anyway. If I were doing it again, I’d have brought a few beers to pass the time.”

I heard a few people say they understood the “take shelter” message to mean that they should get into their bathtub. I’m not sure where they got that — perhaps the television? I am not sure a bathtub is the best place to be; usually the emergency advice regarding bathtubs is to fill them up with water, so that you have several gallons of potable water in case there is a disruption of service. But anyway, as silly as this story sounds, the guy (a staff member) more or less did the right thing: wasn’t sure if it was real, but treated it as if it was. (And the beer thing is a good joke until you remember that beer is actually a valid post-nuclear water source!)

“I woke up too late, and I only saw the retraction.”

I liked this one, only because it highlights that an early-morning alert is only going to reach so many people.

“I was sitting in my kitchen, and I had finished a cup of coffee. I thought, ‘I should not have more coffee.’ But then I saw the alert, and I thought, ‘I can have one more cup of coffee.’ So I sat and drank my coffee. I thought it was real! But I am 70. I was OK with it. But my relatives on the mainland called me, to say goodbye. They were crying. But I was OK. Of course I believed it was real — it was on the TV!”

This was my Japanese-American (emigrated here in the 1970s from Tokyo) taxi driver who took me to the airport. I don’t have anything clever to say about his story, but I loved it so much. One more cup of coffee, if that’s what it’s going to be.

The other extremely useful thing about being out here was talking to local journalists. It’s easy to dismiss local journalism — a lot of it is pretty bad, and the consolidation of news sources has made a lot of it less “local” than it used to be. But the ones I met here knew a hell of a lot more about this story than most of the national news sources I read. Eliza Larson, of KITV, was part of the conference the entire time, and her knowledge and perspective were crucial. We also visited the office of Honolulu Civil Beat, and they were also great. (And one of them, not knowing my relation to it, described the NUKEMAP as an “authoritative tool” that they found very useful, which of course I delighted in.)

The alert system used by HI-EMA, per Honolulu Civil Beat. It’s a bad interface, no matter how you slice it. The first option, “BMD False Alarm,” was added after the false alarm incident.

One thing that emerged for me is that the narrative of “what went wrong” is still not quite known. The first draft of the story, which most people believe, is that an employee clicked the wrong button on the alert website. This is absurd enough to be believable, and the “lesson” of it is clear: user interfaces matter, a conclusion that resonated very strongly with the “human factors engineering” analysis that became very popular following its application in the post-mortem of the Three Mile Island accident.

But that turns out not to be what happened, as emerged later. Two different versions have been put out. The “button pusher,” we’ll call him, later told journalists that he, in fact, had not done it accidentally, but that he had been told it was real and he believed it was real. Which turns it into a very different sort of story: one about miscommunication, human error, and a system problem that makes it very easy for an alert to be sent (by a single person), but not to be rescinded.

The other later version, put out by HI-EMA officials, is that the aforementioned “button pusher” was in fact an unreliable, unstable person who had displayed personality problems in the past and totally “shut down” after sending the alert. The “button pusher” disputes this version of events quite vigorously, we were told, and no documentation has been provided by HI-EMA to substantiate this account. If this version is true, the story is about human reliability, along with the aforementioned system problem. (The “button pusher” was fired from HI-EMA shortly after, and was the target of considerable ire by an understandably furious public.)

Both HI-EMA and the “button pusher” have self-interested reasons for preferring their versions of the story, as it shifts the blame considerably. Either way, the system failures remain: a single individual, whether by confusion or by malice, should not be able to send out a false alarm by themselves to thousands of people.3

The Hawaii Emergency Management Agency (HI-EMA) emblem. Tell me this isn’t the most amazing piece of graphic design ever. Civil Defense! Volcano! Tsunami! Hurricane! SHARK TEETH!

The grim irony is that Hawaii was being extremely proactive when it came to the possibility of a ballistic missile threat. They’re not wrong to think that it should be in their conception of the possible risks against them. They appear to have been one of the only statewide emergency management agencies that had worked to reintroduce nuclear weapons threats into their standard alert procedures and drills.4 They set up a system that, by any measure, contained terrible flaws, ones that any outside analyst could have seen.

And a consequence of this false alarm, we were told, aside from the panic, fear, and confusion (of such magnitude that it may have caused at least one heart attack), is that Hawaii seems to have put its ballistic missile alert system on an indefinite hold. Which is understandable, but unfortunate. Because the nuclear threat, including the ballistic missile threat, is still a real one. It will continue to be a real one as long as there are hostile states with nuclear-tipped ballistic missiles — which seems like it might be a very long time indeed. HI-EMA was right, I think, in making ballistic missile threats part of their “threat matrix” of possibilities that they, as the organization charged with preserving the lives of their citizens, were obligated to address. But they also had a responsibility to set up a system in which false positives would be very unlikely, and they utterly failed at that. The consequence is that not only are the residents of Hawaii less prepared than they had previously been for the possibility of a nuclear attack (which if you think is so remote a risk, read Jeffrey Lewis’s novel, The 2020 Commission Report, and get back to me), but other state governments are probably going to continue to be shy about taking nuclear risks seriously, for fear of the terrible publicity that comes with getting it wrong.

Dispatched from a mai tai bar at the Honolulu airport, waiting for a red-eye flight. Please chalk up any typos to the mai tai. Expect blog posts somewhat more regularly in 2019. 

  1. Though the flight was pretty good, to be honest. The agency that had set up my ticket had booked me in practically the last seat on the plane, and I was not looking forward to that. But for whatever reason, when I went to check in and get my boarding pass the night before, I was offered a chance to upgrade to first class for only $299. Which is pretty amazing in any circumstances, but for an eleven-hour flight it felt foolish to pass it up. And so I didn’t, and had a very nice flight. The real perk of first class was not the better food and nicer flight attendants — though those were nice — but the fact that my seat turned into a totally flat bed. That was pretty amazing, and my first time being able to experience that on a plane. It makes a huge difference.
  2. Among its many materials, the exhibit prominently featured a racist editorial cartoon by Theodor Geisel — Dr. Seuss — that ridiculed the notion of a Japanese attack. UCSD has a nice collection of his wartime cartoons, and I was particularly struck that he published many on the theme of how unlikely an attack was, with the last published only two days before the surprise attack.
  3. As an aside, I have not been able to figure out what order of magnitude of people received the alert. “Thousands” is conservative. People on islands 100 miles away from Oahu got the alert as well. A Congressional Research Service backgrounder on the incident says that HI-EMA attempted to stop transmission within minutes, so it may not have reached the full intended audience. Some people said that their phone got the alert, but their spouses’ phone did not. “Tens of thousands” is probably conservative. “Hundreds of thousands,” upward to a million, might be a possibility, but I don’t know.
  4. I took a tour of the Massachusetts Emergency Management Agency not long after the alert, and was told by the head of the organization that they did not have any plans whatsoever in place for a nuclear weapon detonation, much less a ballistic missile attack. This was kind of amazing to hear, especially since MEMA is located in an underground bunker constructed in the 1960s to survive a nuclear attack.
Redactions

Cleansing thermonuclear fire

by Alex Wellerstein, published June 29th, 2018

What would it take to turn the world into one big fusion reaction, wiping it clean of life and turning it into a barren rock? Asking for a friend.

Graphic from the 1946 film, “One World Or None,” created by the National Committee on Atomic Information, advocating for the importance of the international control of atomic energy.

One might wonder whether that kind of question presented itself while I was reading the news these days, and one would be entirely correct. But the reason people typically ask this question is in reference to the story that scientists at Los Alamos thought there was a non-zero chance that the Trinity test — the first detonation of an atomic bomb, in July 1945 — might ignite the atmosphere.

The basic idea is a simple one: if you heat up very light atoms (like hydrogen) to very high temperatures, they’ll race around like mad, and the chances that they’ll collide into each other and undergo nuclear fusion become much greater. If that happens, they’ll release more energy. What if the first burst of an atomic bomb started fusion reactions in the air around it, say between the atoms of oxygen or nitrogen, and those fusion reactions generated enough energy to start more reactions, and so on, across the entire atmosphere?

It’s hard to say how seriously this was taken. It is clear that at one point, Arthur Compton worried about it, and that just the same, several scientists came up with persuasive reasoning to the effect that this could not happen. James Conant, upon feeling the searing heat of the Trinity test, briefly reflected that maybe this rumored thing had, indeed, come to pass:

Then came a burst of white light that seemed to fill the sky and seemed to last for seconds. I had expected a relatively quick and bright flash. The enormity of the light and its length quite stunned me. My instantaneous reaction was that something had gone wrong and that the thermal nuclear [sic] transformation of the atmosphere, once discussed as a possibility and jokingly referred to a few minutes earlier, had actually occurred.

Which does at least tell us that some of those at the test were still joking about it, even up to the last few minutes. Fermi reportedly took bets on whether the bomb would destroy just New Mexico or in fact the entire world, but it was understood as a joke.1

The introduction of the Konopinski, Marvin, and Teller paper of 1946. Filed under: “SCIENCE!”

In the fall of 1946, Emil Konopinski, Cloyd Marvin, and Edward Teller (who else?) wrote up a paper explaining why no detonation on Earth was likely to start an uncontrolled fusion reaction in the atmosphere. It is not clear to me whether this is exactly the logic they used prior to the Trinity detonation, but it is probably of a similar character to it.2 In short, there is only one fusion reaction based on the constituents of the air that had any probability at all (the nitrogen-nitrogen reaction), and the scientists were able to show that it was not very likely to happen or spread. Even if one makes assumptions that the reaction was much easier to initiate than anyone thought it was likely to be, it wasn’t going to be sustained. The reaction would cool (through a variety of physical mechanisms) faster than it would spread.
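
To put the shape of their argument in schematic terms (my gloss, in modern notation — not the paper’s own formulation): a burn can only propagate if the rate at which fusion deposits energy in a parcel of heated air beats the rate at which that parcel sheds energy, something like

    \underbrace{n_N^2 \, \langle\sigma v\rangle_{NN} \, Q_{NN}}_{\text{energy generated by N+N fusion}}
    \;>\;
    \underbrace{C \, n_e n_i \sqrt{T}}_{\text{energy radiated away (bremsstrahlung, Compton losses, etc.)}}

where n_N is the nitrogen density and C lumps together the radiative physics. Konopinski, Marvin, and Teller argued that for air the loss side wins at every temperature, even under deliberately pessimistic assumptions about how easily the reaction could be initiated.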

This is all a common part of Manhattan Project lore. But I suspect most who have read of this before have not actually read the Konopinski-Marvin-Teller paper to its end, where the authors strike a less sure-of-themselves note:

There remains the distant possibility that some other less simple mode of burning may maintain itself in the atmosphere.

Even if the reaction is stopped within a sphere of a few hundred meters radius, the resultant earth-shock and the radioactive contamination of the atmosphere might become catastrophic on a world-wide scale.

One may conclude that the arguments of this paper make it unreasonable to expect that the N+N reaction could propagate. An unlimited propagation is even less likely. However, the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable.

That’s not quite as secure as one might desire, considering these scientists were in fact working on developing weapons many thousands of times more powerful than the Trinity device.3

The relevant section of the Manhattan District History (cited below) interestingly links the research into the “Super” hydrogen bomb with the research into whether the atmosphere might be incinerated, which makes sense, though it would be interesting to know how closely linked these questions were.

There is an interesting section in the recently-declassified Manhattan District History that discusses the ignition of the atmosphere problem. It repeats essentially the Konopinski-Marvin-Teller results, and then concludes:

The impossibility of igniting the atmosphere was thus assured by science and common sense. The essential factors in these calculations, the Coulomb forces of the nucleus, are among the best understood phenomena of modern physics. The philosophic possibility of destroying the earth, associated with the theoretical convertibility of mass into energy, remains. The thermonuclear reaction, which is the only method now known by which such a catastrophe could occur, is evidently ruled out. The general stability of matter in the observable universe argues against it. Further knowledge of the nature of the great stellar explosions, novae and supernovae, will throw light on these questions. In the almost complete absence of real knowledge, it is generally believed that the tremendous energy of these explosions is of gravitational rather than nuclear origin.4

Which again is simultaneously reassuring and not reassuring. The footing on which this knowledge was based was… pretty good? But like good scientists they were happy, at least in secret reports, to acknowledge that there might in fact be ways for the planet to be destroyed through nuclear testing that they hadn’t considered. Intellectually honest, but also terrifying.

The ever-relevant XKCD.

This issue came up again in early 1946, prior to the Operation Crossroads nuclear tests, which were to include at least one underwater shot. None other than Nobel Prize-winning physicist Percy Bridgman worried that detonating an atomic bomb underwater might ignite a fusion reaction in the water. Bridgman admitted his own ignorance of nuclear physics (his area of expertise was high-pressure physics), but warned that:

Even the best human intellect has not imagination enough to envisage what might happen when we push far into new territory. … To an outsider the tactics of the argument which would justify running even the slightest risk of such a colossal catastrophe appears exceedingly weak.5

Bridgman’s fears weren’t really that the world would be destroyed. He worried more that if the scientists appeared to be cavalier about these things, and it was later made public that their argument for the safety of the tests rested on flimsy evidence, it would lead to a strong public backlash: “There might be a reaction against science in general which would result in suppression of all scientific freedom and the destruction of science itself.” Bridgman’s views were strong enough that they were forwarded to General Groves, but it isn’t clear whether they resulted in any significant changes (though I wonder if they were the impetus for the write-up of the Konopinski-Marvin-Teller paper; the timing roughly works out, but I don’t know).

There isn’t a lot of evidence that this problem concerned the scientists too much going forward. They had other things on their mind, like building thermonuclear weapons, and it quickly became clear that starting a large fusion reaction with a fission bomb is hard. Which is, in its own way, an answer to the original question: if starting a runaway fusion reaction on purpose is difficult, and requires very specific kinds of arrangements and considerations to get working even on a (relatively) small scale, then starting one accidentally in the entire atmosphere is likely to be impossible.

Operation Fishbowl, Shot Checkmate (1962) — a low-yield weapon, but something about its perfect symmetry and the trail of the rocket that put it into the air evokes, for me, the idea of a planet turning into a star. Source: Los Alamos National Laboratory.

Great — cross that one off the list of possibilities. But it wouldn’t really be science unless they also, eventually, re-framed the question: what conditions would be required if we were to try to turn the entire planet into a thermonuclear bomb? In 1975, H.C. Dudley, a radiation physicist at the University of Illinois Medical Center in Chicago, published an article in the Bulletin of the Atomic Scientists warning of the “ultimate catastrophe” of setting the atmosphere on fire. It received several rebuttals and a lot of scorn, including one in the pages of the Bulletin by Hans Bethe, who had previously addressed the question there in 1946. Interestingly, though, Dudley’s main desire — that someone re-run these calculations with modern computer simulations — did seem to generate a study along those lines at the Lawrence Livermore National Laboratory.6

In 1979, Livermore scientists Thomas A. Weaver and Lowell Wood (the latter, appropriately, a well-known Edward Teller protégé) published a paper on “Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres,” which is a jargony way of asking the question in the title of this blog post. Here’s the abstract:

The basic conditions for the initiation of a nuclear-detonation wave in an atmosphere having plane symmetry (e.g., a thin, layered fluid envelope on a planet or star) are developed. Two classes of such a detonation are identified: those in which the temperature of the plasma is comparable to that of the electromagnetic radiation permeating it, and those in which the temperature of the plasma is much higher. Necessary conditions are developed for the propagation of such detonation waves for an arbitrarily great distance. The contribution of fusion chain reactions to these processes is evaluated. By means of these considerations, it is shown that neither the atmosphere nor oceans of the Earth may be made to undergo propagating nuclear detonation under any circumstances.7

Now if you just read the abstract, you might think it was just another version (with fancier calculations) of the Konopinski-Marvin-Teller paper. And they do conclusively rule out the possibility that N+N reactions would ever be energetic enough to be self-propagating. But the paper does far more than that! Because unlike Konopinski-Marvin-Teller, it actually focuses on those “necessary conditions”: what would need to be different if you did want a self-propagating reaction?

The answer they found: if the Earth’s oceans had twenty times more deuterium than they actually contain, they could be ignited by a 20-million-megaton bomb (which is to say, a bomb with a yield equivalent to 20 teratons of TNT, or roughly 200,000 times the Tsar Bomba’s full 100-megaton design yield). Even if we assume that such a weapon had a fantastically efficient yield-to-weight ratio like 50 kt/kg, that’s still a device that would weigh around 400,000 metric tons, comparable to the combined mass of four of the largest aircraft carriers ever built.8
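If you want to check that arithmetic, it takes only a few lines. To be clear, this is my own conversion, not a calculation from the Weaver-Wood paper: only the 2 × 10⁷ megaton ignition energy is theirs, and the 50 kt/kg yield-to-weight ratio is a deliberately generous assumption, well beyond the roughly 6 kt/kg that real weapons have achieved:

```python
# Converting Weaver and Wood's ocean-ignition energy (2e7 megatons of TNT)
# into a bomb mass at an implausibly generous yield-to-weight ratio.
# My own arithmetic, not a calculation from their paper.

IGNITION_YIELD_MT = 2e7  # megatons of TNT (Weaver & Wood, 1979)
YIELD_TO_WEIGHT = 50.0   # kilotons per kilogram; real weapons peaked near ~6

yield_kt = IGNITION_YIELD_MT * 1_000              # 1 megaton = 1,000 kilotons
mass_tonnes = yield_kt / YIELD_TO_WEIGHT / 1_000  # kilograms -> metric tons
print(f"Bomb mass: ~{mass_tonnes:,.0f} metric tons")  # ~400,000

# For scale (see footnote 8): the Chicxulub impact released about 5e23 joules.
TNT_JOULES_PER_TON = 4.184e9                      # standard TNT equivalence
chicxulub_mt = 5e23 / TNT_JOULES_PER_TON / 1e6    # tons of TNT -> megatons
print(f"Chicxulub impact: ~{chicxulub_mt / 1e6:.0f} million megatons")  # ~120
```

At a realistic yield-to-weight ratio, the device would be nearly an order of magnitude heavier still.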

So there you have it — it can be done! You just need to totally change the composition of the oceans and build a nuclear weapon many orders of magnitude more powerful than the gigaton bombs dreamed of by Edward Teller, and then, maybe, you can pull off the cleansing thermonuclear fire experience.

Which is to say, this won’t be how our planet dies. But don’t worry, there are plenty of other plausible routes to human self-extinction out there. They just probably won’t be as quick.


I am in the process of finishing my book manuscript, which is the real job of this summer, so most other writing, including blogging, is taking a back seat for a few months while I focus on that. The irreverent title of this post is taken from a recurring theme in the Twitter feed of anthropology grad student Martin “Lick the Bomb” Pfeiffer, whose work you should check out if you haven’t already.

  1. This undergraduate paper by Stanford student Dongwoo Chung, “(The Impossibility of) Lighting Atmospheric Fire,” does a really nice job of reviewing some of the wartime discussions and the scientific issues. []
  2. Emil Konopinski, Cloyd Marvin, and Edward Teller, “Ignition of the Atmosphere with Nuclear Bombs,” (14 August 1946), LA-602, Los Alamos National Laboratory. Konopinski and Teller also apparently wrote an unpublished report on the subject in 1943. I have only seen reference to it, as report LA-001 (suspiciously similar to the LA-1 that is the Los Alamos Primer), but have not seen it. []
  3. Teller, in October 1945, wrote the following to Enrico Fermi about the possibility of a “Super” detonating the atmosphere, as part of what was essentially a “Frequently Asked Questions” about the H-bomb: “Careful considerations and calculations have shown that there is not the remotest possibility of such an event [ignition of the atmosphere]. The concentration of energy encountered in the super bomb is not greater than that of the atomic bomb. In my opinion the risks were greater when the first atomic bomb was tested, because our conclusions were based at that time on longer extrapolations from known facts. The danger of the super bomb does not lie in physical nature but in human behavior.” What I find most interesting about this is his comment about Trinity, though Teller’s rhetorical point is an obvious one (overstate the Trinity uncertainty after the fact in order to emphasize his certainty at the present). Edward Teller to Enrico Fermi (31 October 1945), Harrison-Bundy Files Relating to the Development of the Atomic Bomb, 1942-1946, microfilm publication M1108 (Washington, D.C.: National Archives and Records Administration, 1980), Roll 6, Target 5, Folder 76, “Interim Committee — Scientific Panel.” []
  4. Manhattan District History, Book 8, Volume 2 (“Los Alamos – Technical”), paragraph 1.50. []
  5. Percy W. Bridgman to Hans Bethe, forwarded by Norris Bradbury to Leslie Groves via TWX (13 March 1946), copy in the Nuclear Testing Archive, Las Vegas, NV, document NV0128609. []
  6. H.C. Dudley, “The Ultimate Catastrophe,” Bulletin of the Atomic Scientists (November 1975), 21; Hans Bethe, “Can Air or Water Be Exploded?,” Bulletin of the Atomic Scientists 1, no. 7 (15 March 1946), 2; Hans Bethe, “Ultimate Catastrophe?,” Bulletin of the Atomic Scientists 32, no. 6 (1976), 36-37; Frank von Hippel, “Taxes Credulity (Letter to the Editor),” Bulletin of the Atomic Scientists (January 1976), 2. []
  7. Thomas A. Weaver and Lowell Wood, “Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres,” Physical Review A 20, no. 1 (1 July 1979), 316-328. DOI: https://doi.org/10.1103/PhysRevA.20.316. []
  8. Specifically, they conclude it would take a 2 × 10⁷ Mt energy release, which they call “fantastic,” to ignite an ocean of 1:300 (instead of the actual 1:6,000) concentration of deuterium. As an aside, however, the collision event that created the Chicxulub Crater (and killed the dinosaurs, etc.) is estimated to have released around 5 × 10²³ J, which translates into about 120 million megatons of TNT. So that’s not a totally unreasonable energy release for a planet to encounter over the course of its existence — just not from nuclear weapons. []