News and Notes | Visions

Why NUKEMAP isn’t on Google Maps anymore

by Alex Wellerstein, published December 13th, 2019

When I created the NUKEMAP in 2012, the Google Maps API was amazing.1 It was the best thing in town for creating Javascript mapping mash-ups, cost literally nothing, had an active developer community that added new features on a regular basis, and actually seemed like it was interested in people using their product to develop cool, useful tools.

NUKEMAPs of days gone by: On the left is the original NUKEMAP I made way back in March 2005, which used MapQuest screenshots (and was extremely limited, and never made public) and was done entirely in PHP. I made it for my own personal use and for teaching. At right, the remade original NUKEMAP from 2012, which used the Google Maps API/Javascript.

Today, pretty much all of that is untrue. The API codebase has stagnated in terms of actually useful features being added (many neat features have been removed or quietly deprecated; the new features being added are generally incremental and lame). This is really quite remarkable given that the Google Maps stand-alone website (the one you visit when you go to Google Maps to look up a map or location) has had a lot of neat features added to it (like its 3-D mode) that have not been ported to the API code (which is why NUKEMAP3D is effectively dead — Google deprecated the Google Earth Plugin and has never replaced it, and no other code base has filled the gap).2

But more importantly, the changes to the pricing model that have recently been put in place are, to put it lightly, insane, and punishing if you are an educational web developer who builds anything that people actually find useful.

NUKEMAP gets around 15,000 hits a day on a slow day, and upwards of 200,000 hits a day in busier months, and has done this consistently for over 5 years (it occasionally has spikes of several hundred thousand page views per day, when it goes viral for whatever reason). While that’s pretty impressive for an academic’s website, it’s what I would call “moderately popular” by Internet terms. I don’t think this puts the slightest strain on Google’s servers (which also run, like, all of YouTube). And from 2012 through 2016, Google didn’t charge a thing for this. Which was pretty generous, and perhaps unsustainable. But it encouraged a lot of experimentation, and something like NUKEMAP wouldn’t exist without that.

In 2016, they started charging. It wasn’t too bad — at most, my bill was around $200 a month. Even that is pretty hard to do out-of-pocket, but I’ve had the good fortune to be associated with an institution (my employers, the College of Arts and Letters at the Stevens Institute of Technology) that was willing to foot the bill. 

But in 2018, Google changed its pricing model, and my bill jumped to more like $1,800 per month. As in, over $20,000 a year. Which is several times my main hosting fees (for all of my websites).

I reached out to Google to find out why this was. Their new pricing sheet is… a little hard to make sense of. Which is sort of why I didn’t see this coming. They do have a “pricing calculator” that lets you see exactly how terrible the pricing scheme is, though it is a little tricky to find and requires having a Google account to access. But if you start playing with the “dynamic map loads” button (there are other charges, but that’s the big one) you can see how expensive it gets, quickly. I contacted Google for help in figuring all this out, and they fobbed me off onto a non-Google “valued partner” who was licensed to deal with corporations on volume pricing. Hard pass, sorry.

Google for Nonprofits’ eligibility standards — academics need not apply.

I know that Google in theory supports people using their products for “social causes,” and if you are at a non-profit (as I am), you can apply for a “grant” to defray the costs, assuming Google agrees that you’re doing good. I don’t know how they feel about the NUKEMAP, but in any case, it doesn’t matter: people at educational institutions (even not-for-profit ones, like mine) are disqualified from applying. Why? Because Google wants to capture the educational market in a revenue-generating way, and so directs you to their Google for Education site, which as you will quickly find is based on a very different sort of model. There’s no e-mail contact on the site, as an aside: you have to claim you are representing an entire educational institution (I am not) and that you are interested in implementing Google’s products on your campus (I am not), and if you do all this (as I did, just to get through to them) you can finally talk to them a bit.

There is literally nothing on the website that suggests there is any way to get Google Maps API credit, but they do have a way to request discounted access to the Google Cloud Platform, which appears to be some kind of machine-learning platform. After I sent an e-mail, they did say that you could apply for Google Cloud Platform funds to be used for the Google Maps API.

By which point I had already, in my heart, given up on Google. It’s just not worth it. Let me outline the reasons:

  • They clearly don’t care about small developers. That much is pretty obvious if you’ve tried to develop with their products. Look, I get that licensing to big corporations is the money-maker. But Google pretends to be developing for more than just them… they just don’t follow through on those hopes.
  • They can’t distinguish between universities as entities, and academics as university researchers. There’s a big difference there, in terms of scale, goals, and resources. I don’t make university IT policy, I do research. 
  • They are fickle. It’s not just the fact that they change their pricing schemes rapidly, it’s not just that they deprecate products willy-nilly. It’s that they push out new products, encourage communities to use them to make “amazing” things, and then don’t support them well over the long term. They let cool projects atrophy and die. Sometimes they sell them off to other companies (e.g., SketchUp), who then totally change them and the business model. Again, I get it: Google’s approach is to throw things at the wall and hope they stick, and it believes in disruption more than infrastructure, etc. etc. etc. But that makes it pretty hard to justify putting all of your eggs in their basket.
  • I don’t want to worry about whether Google will think my work is a “social good,” I don’t want to worry about re-applying every year, I don’t want to worry that the branch of Google that helps me out might vanish tomorrow, and so on. Too much uncertainty. Do you know how hard it is to get in contact with a real human being at Google? I’m not saying it’s impossible — they did help me waive some of the fees that came from me not understanding the pricing policy — but that took literally months to work out, and in the meantime they sent a collection agency after me.

But most of all: today there are perfectly viable alternatives. Which is why I don’t understand their pricing model change, except in terms of, “they’ve decided to abandon small developers completely.” After a little scouting around, I decided that Mapbox, whose rates are more like what Google used to charge, completely fit the bill, and that Leaflet, an open-source Javascript library, could make for a very easy conversion. It took a little work to make the conversion, because Leaflet out of the box doesn’t support the drawing of great circles, but I wrote a plugin that does it.
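
If you’re curious what that involves: the basic trick is to sample a series of intermediate points along the great circle between the two endpoints (spherical interpolation) and hand them to an ordinary polyline. Here is a minimal sketch of that approach — an illustration of the idea, not the actual plugin code:

```javascript
// A minimal sketch of drawing a great-circle arc in Leaflet (an
// illustration of the approach, not the actual NUKEMAP plugin).
// Assumes Leaflet is loaded as the global L, and that the endpoints
// are neither identical nor antipodal (where the arc is undefined).
function toRad(d) { return d * Math.PI / 180; }
function toDeg(r) { return r * 180 / Math.PI; }

// Returns an array of [lat, lng] points along the great circle from a to b.
function greatCirclePoints(a, b, n) {
  var lat1 = toRad(a[0]), lng1 = toRad(a[1]);
  var lat2 = toRad(b[0]), lng2 = toRad(b[1]);
  // Angular distance between the endpoints (haversine formula).
  var d = 2 * Math.asin(Math.sqrt(
    Math.pow(Math.sin((lat2 - lat1) / 2), 2) +
    Math.cos(lat1) * Math.cos(lat2) * Math.pow(Math.sin((lng2 - lng1) / 2), 2)));
  var pts = [];
  for (var i = 0; i <= n; i++) {
    var f = i / n;
    // Spherical linear interpolation between the two endpoints.
    var A = Math.sin((1 - f) * d) / Math.sin(d);
    var B = Math.sin(f * d) / Math.sin(d);
    var x = A * Math.cos(lat1) * Math.cos(lng1) + B * Math.cos(lat2) * Math.cos(lng2);
    var y = A * Math.cos(lat1) * Math.sin(lng1) + B * Math.cos(lat2) * Math.sin(lng2);
    var z = A * Math.sin(lat1) + B * Math.sin(lat2);
    pts.push([toDeg(Math.atan2(z, Math.sqrt(x * x + y * y))), toDeg(Math.atan2(y, x))]);
  }
  return pts;
}

// Usage: an arc from New York to Moscow, drawn on an existing Leaflet map.
// L.polyline(greatCirclePoints([40.71, -74.01], [55.76, 37.62], 100)).addTo(map);
```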

NUKEMAP as of this moment (version 2.65; I make small incremental changes regularly), with its Mapbox GL + Leaflet codebase. Note that a while back I started showing the 1 psi blast radius as well, because I decided that omitting it caused people to underestimate the area that would be plausibly affected by a nuclear detonation.

Now, even Mapbox’s pricing scheme can add up for my level of map loads, but they’ve been extremely generous in terms of giving me “credits,” because they support this kind of work. And getting that worked out was a matter of sending an e-mail and then talking to a real person on the phone. And said real person has been extremely helpful, easy to contact, and even reaches out to me when they’re rolling out a new feature (like Mapbox GL) that he thinks will make the site work better and cheaper. Which is to say: in every way, the opposite of Google.

So NUKEMAP and MISSILEMAP have been converted entirely over to Mapbox+Leaflet. The one feature that wasn’t easy to port over was “Humanitarian consequences” (which relies on Google’s Places library), but I’ll eventually figure out a way to reintegrate it.

More broadly, the question I have to ask as an educator is: would I encourage a student to develop in the Google Maps API if they were thinking about trying to make a “break-out” website? Easy answer: no way. With Google, becoming popular (even just “moderately popular”) is a losing proposition: you will find yourself owing them a lot of money. So I won’t be teaching Google Maps in my data visualization course anymore — we’ll be using Leaflet from now on. I apologize for venting, but I figured that even non-developers might be interested in knowing how these things work “under the hood” and what kinds of considerations go into the choice of making a website these days.

A simple example of the kind of thing you can do with NUKEMAP’s new fallout dose exposure tool. At top, me standing outside my office (for an entire 24 hours) in the wake of a 20 kt detonation in downtown NYC, using the weather conditions that exist as I am posting this: I am very, very dead. At bottom, I instead run quickly into the basement bowling alley in the Howe Center at Stevens (my preferred shelter location, because it’s fairly deep inside a multi-story rocky hill, on top of which is a 13-story building), and the same length of time gives me, at most, a slight uptick in long-term cancer risk.

More positively, I’m excited to announce that a little while back, I added a new feature to NUKEMAP, one I’ve been wanting to implement for some time now. The NUKEMAP’s fallout model (the Miller model) has always been a little hard to make intuitive sense out of, other than as “a vague representation of the area of contamination.” I’ve been exploring some other fallout models that could be implemented as well, but in the meantime, I wanted to find a way to make the current version (which has the advantage of being very quick to calculate and render) more intuitively meaningful.

The Miller model’s contours give the dose intensity (in rad/hr) at H+1 hour. So for the “100 rad/hr” contour, that means: “this area will be covered by fallout that, one hour after detonation, had an intensity of 100 rad/hr, assuming that the fallout has actually arrived there at that time.” So to figure out what your exposure on the ground is, you need to calculate when the fallout actually arrives at your location (on the wind), what the dose rate is at the time of arrival, and then how that dose rate will decrease over the next hours that you are exposed to it. You also might want to know how that is affected by the kind of structure you’re in, since anything that stands between you and the fallout will cut your exposure a bit. All of which makes for an annoying and tricky calculation to do by hand.

So I’ve added a feature to the “Probe location” tool, which allows you to sample the conditions at any given distance from ground zero. It will now calculate the time of fallout arrival (which is based on the distance and the wind settings), the intensity of the fallout at the time of arrival, and then allow you to see what the total dose would be if you were in that area for, say, 24 hours after detonation. It also allows you to apply a “protection factor” based on the kind of building you are in (the protection factor is just a divisor: a protection factor of 10 divides the total exposure by 10). All of which can be used to answer questions about the human effects of fallout, and the situations in which different kinds of shelters can be effective, or not.3
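
For those who want the gist of the math: the tool just integrates the decaying dose rate from the time of fallout arrival until the time you leave, and then divides by the protection factor. Here is a minimal sketch in Javascript (my simplification, using the t^-1.2 decay formula mentioned in footnote 3, not NUKEMAP’s actual source):

```javascript
// Dose rate decays as r(t) = r1 * t^-1.2, where r1 is the reference
// intensity at H+1 hour and t is hours after detonation (footnote 3).
// Integrating from arrival time to departure time gives the total dose:
//   dose = (r1 / 0.2) * (tArrive^-0.2 - tLeave^-0.2)
// A protection factor simply divides the result.
function totalDose(r1, tArrive, tLeave, protectionFactor) {
  var dose = (r1 / 0.2) * (Math.pow(tArrive, -0.2) - Math.pow(tLeave, -0.2));
  return dose / protectionFactor;
}

// Example: fallout with an H+1 intensity of 100 rad/hr arrives half an
// hour after detonation, and you stay put for the next 24 hours.
console.log(totalDose(100, 0.5, 24.5, 1));  // unsheltered: ~311 rads
console.log(totalDose(100, 0.5, 24.5, 10)); // protection factor of 10: ~31 rads
```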

There are some more NUKEMAP features actively in the works, as well. More on those, soon.

  1. For non-coders: an API is a code library that lets third-party developers use someone else’s services. So the Google Maps API lets you develop applications that use Google Maps: you can say, “load Google Maps into this part of the web page, add an icon to it, make the icon draggable, and when someone clicks a button over here, draw circles around that icon that go out to a given radius, and color the circles this way and that way.” That’s the basics of NUKEMAP’s functionality, more or less.
  2. Before people e-mail me about how CesiumJS fills the Google Earth Plugin gap — it doesn’t, because it doesn’t give you the global coverage of 3D buildings that you need to make sense of the size of a mushroom cloud. If they change that someday, I’ll take the time to port the code, but I don’t see many signs that this is going to happen, because global 3D building shapes are still something that only Google seems to own. If you do want to render volumetric mushroom clouds in the stand-alone Google Earth program, there is a (still experimental and incomplete) feature in NUKEMAP for exporting cloud shapes as KMZ files. See the NUKEMAP3D page for more information on how to use this.
  3. I’ll eventually update the NUKEMAP FAQ about how this works, but it just uses Wigner’s standard t^-1.2 fission product decay rate formula.
Redactions

John Wheeler’s H-bomb Blues

by Alex Wellerstein, published December 3rd, 2019

It’s been forever since I’ve updated on here, and I wanted to let you know that not only have I not abandoned this blog project, I’m planning to do a lot more blogging in 2020. I ended up taking a mostly-hiatus from blogging this year both because I got totally overloaded with work in the spring (too much teaching, too much service, hosting a big workshop and expo for the Reinventing Civil Defense project, etc.), and because I needed to get my book totally finished and out the door. But that’s all done now, so I’m looking forward to getting back into things. Lots of exciting things will be happening in 2020, including the publication of my big article on Truman and the Kyoto decision, the publication of my book (Restricted Data: Nuclear Secrecy in the United States), and some other things I can only mysteriously hint at for the moment!

In the meantime, I wanted to announce that an article I’ve been working on for a long time has finally appeared in print: “John Wheeler’s H-bomb Blues,” in the December 2019 issue of Physics Today. It’s a Cold War mystery about an eminent scientist, a secret conspiracy, and six pages of H-bomb secrets that went missing on an overnight train from Philadelphia to Washington, DC. 

Cover page of John Wheeler's H-bomb Blues
There may never be a good time to lose a secret, but some secrets are worse than others to lose, and some times are worse than others to lose them. For US physicist John Archibald Wheeler, January 1953 may have been the absolute worst time to lose the particular secret he lost. The nation was in a fever pitch about Communists, atomic spies, McCarthyism, the House Un-American Activities Committee, Julius and Ethel Rosenberg, and the Korean War. And what Wheeler lost, under the most suspicious and improbable circumstances, was nothing less than the secret of the hydrogen bomb, a weapon of unimaginable power that had first been tested only a month before.

You can read the article for more. If you’re interested in why I wrote it, see this short blog post I wrote for Physics Today as well, which talks about how one gets ahold of Cold War FBI files of famous physicists (it helps if they are dead).

Aside from its mysterious vibe (and lurid details, like Wheeler peering at a stranger on a toilet), one of the wonky things I like about this paper is that for me it serves as sort of an approach to the old historiographical question of, “do individual details matter in history, or does the larger context matter?” You can see variations of this debate all over the place, such as the “Great Men vs. Cultural History” takes. 

The photographs of Wheeler from his FBI file. Everybody looks pretty bad when they’ve been microfilmed.

For me the answer has always been both — history is about the idiosyncratic details (and individuals), but it is also about the broader shifts. And for me the really interesting moments are when those two levels of scale interact. This story, in which a single person loses a tiny amount of paper, and huge consequences result, is an example of what I mean by that “interaction.” Wheeler losing the documents does matter in a broad sense (spoiler: it sets off the Oppenheimer security hearings, which have long-standing ramifications for Cold War science), but it only matters because there is a context in which it can matter (a national security state and international situation that imbues six pages of writing with catastrophic meaning). If those six pages of H-bomb secrets were magically transmitted to almost any other historical context, they’d have been meaningless.

But enough on that. What I also wanted to share with you here are some interesting documents, things referenced in the article but not easily available anywhere else. 

The initial memo about the “Wheeler Incident,” January 7, 1953.

First, here is a short (24-page) excerpt from Wheeler’s FBI file, which I received from the FBI in response to a Freedom of Information Act request. I’ve helpfully arranged it chronologically — full FBI files are a terrible, jumbled, repetitive mess that required a lot of careful reading and processing to make sense of. I’ve chosen documents that both illustrate the character and tenor of the “Wheeler incident” investigation, and give some hints as to how important it was seen at the time. The spindly handwriting on some of the pages (e.g., 11) is J. Edgar Hoover’s. (How did the man manage to be creepy in nearly every possible way? Even his handwriting is creepy.) I’ll be releasing the full FBI file sometime in 2020, along with other files in my collection of Cold War physicist FBI files, as part of a collaboration with the Center for History of Physics/Niels Bohr Library and Archives at the American Institute of Physics.1

Wheeler’s classified deposition to the FBI about the “Wheeler incident,” March 3, 1953.

Next, we have Wheeler’s March 3, 1953, deposition to the FBI. I’ve collated two versions of this that I have. Both are from the National Nuclear Security Administration, but they were redacted at different times and with slightly different priorities. The color one redacts more weapons information, the black and white one redacts information about FBI agents. If you put them together you get almost the entire story — some details about the lost document are redacted, but the big picture is there. As I’ve written about before, I sort of love the game of getting multiple, differently-declassified copies of documents and comparing them, not only because one feels like one is learning something forbidden, but also because it lets you get inside the mind of the censor a bit — you can see how different concerns lead to different removals.2

Dean’s letter of displeasure to Wheeler, April 1, 1953.

Next, I offer up Gordon Dean’s reprimand letter to Wheeler (April 1, 1953), telling him that he had been a very bad boy, but that they were going to look the other way. Considering the penalties for mishandling nuclear secrets include prison time and huge fines, an unhappy letter was pretty much a slap on the wrist. But as Dean told the Joint Committee on Atomic Energy, “We do not see anything we can do above that at the moment. We still want him in the program. He is a very valuable man, and we do not know anything else we can do without cutting off our nose to spite our faces.”3

The cover page of Walker’s “Policy and Progress in the H-bomb Program,” January 1, 1953.

Finally — and this is perhaps the real prize here — I offer up a complete (redacted) copy of John Walker’s “Policy and Progress in the H-bomb Program: A Chronology of Leading Events” (January 1, 1953), the conspiratorial H-bomb history that Wheeler was consulting for, and lost several pages from. This was the bureaucratic weapon that was meant to show that Oppenheimer et al. had slowed down the H-bomb program, and is more or less a chronology of work and thought on the H-bomb, with extensive quotations from documents and reports. It is in places heavily redacted. But it’s still pretty interesting on the whole. Reader beware: this work of “history” is biased inasmuch as it does not present the full context of the documents and opinions it quotes. As a result it heavily favors Teller and Wheeler’s views of things. Teller in particular wrote many overly-optimistic memos about how easy it would be to make an H-bomb, which nobody but Teller agreed with. But because his memos dominate the record on this, when you put them all in a list, without discussing their problems and shortcomings, it can look like a very strong case. For that context, Gregg Herken’s Brotherhood of the Bomb and Richard Rhodes’ Dark Sun are much better guides! But this still is a useful document.4

OK, that’s all for now. I may have one more post in 2019 (about why NUKEMAP switched from Google Maps to Mapbox), but otherwise I will see you in 2020!

  1. Document source: Federal Bureau of Investigation, Freedom of Information Act request.
  2. Document source: National Nuclear Security Administration, Freedom of Information Act requests and FOIA Reading Room.
  3. Document source: Papers of Gordon Dean, Records of the US Atomic Energy Commission (RG 326), Box 2, “Classified Reader File, 1953.”
  4. Document source: Records of the Joint Committee on Atomic Energy (RG 128), Series 2, Box 60, Legislative Archives, National Archives and Records Administration, Washington, DC.
Meditations

Notes on the Hawaii false alarm, one year later

by Alex Wellerstein, published January 13th, 2019

Today is the one-year anniversary of the Hawaii false alarm, in which the Hawaii Emergency Management Agency (HI-EMA) sent out a text alert to thousands of Americans in the Hawaiian islands that told them that a ballistic missile was incoming, that they should take shelter, and that “THIS IS NOT A DRILL.”

I’ve spent the last few days in Hawaii, as part of a workshop hosted by Atomic Reporters and the Stanley Foundation, and sponsored by the Carnegie Corporation of New York, that brought together a few experts (I was one of those) with a large number of journalists (“old” and “new” media alike) to talk about the false alarm and its lessons.

You’re looking at the photo documenting the only beach time I got while in Hawaii. Seriously. Thanks to Andrew Futter for taking this picture. You should check out his book, Hacking the Bomb: Cyber Threats and Nuclear Weapons (Georgetown University Press, 2018). I enjoyed getting to spend a few days hanging out with him.

Given that it is supposed to be snowing back home, you’d think a trip to Hawaii would come as a very welcome thing for me, but almost all of my time was spent in windowless rooms, and an eleven-hour flight is no picnic.1 So I spent some time wondering, “why have this workshop here?” I mean, obviously the location is relevant, but practically, what would be different if we had held the same meeting in, say, Los Angeles, or Palo Alto?

Over the days, the answer became very clear. When you are in Hawaii, everyone has a story about their experience of the false alarm. And they’re all different, and they’re all fascinating. On “the mainland,” as they call us, we got only a very small sampling of experiences from those here in Hawaii: often either accounts put together by people who were interested in being very publicly thoughtful about their feelings (like Cynthia Lazaroff, who we heard a talk from, and who wrote up her experience for the Bulletin of the Atomic Scientists), or kind-of-absurd responses that were used as examples of how ridiculous the whole thing was (e.g., the guy who was trying to put his kids down a manhole). Out here, though, every taxi or Lyft driver has their own experience, along with everyone else.

“Civil Defense Warning Device” sign, from the base of a tsunami warning siren tower, in Kapolei.

A few of the responses, broadly paraphrased by me (I didn’t record them, and this is not systematic), follow. It is worth remembering that these are coming a year later, and one would expect their memories to be significantly altered by the passage of time, the increased knowledge about what happened, and, of course, the knowledge that, while “NOT A DRILL,” it was a false alarm.

“I didn’t think it was real. I thought, if it was a real thing, they would also sound the sirens.”

This was from a Lyft driver, who looked like she was in her twenties. Hawaii has an extensive emergency management infrastructure for tsunamis. On a long walk early one morning, I passed by one of these sirens, and noticed that they still use the same Civil Defense imagery from the 1950s on their equipment. The people of Hawaii know these sirens well, because they test them on the first day of every month. So the Lyft driver’s response was very interesting to me in this respect: she had expectations about what a “real” emergency would be like, and the ballistic missile alert didn’t meet them.

“I follow the North Korea situation closely, so I assumed it had to be false, because I am sure they wouldn’t just launch a missile like that, because it would be suicidal.”

This is a response that one journalist, and one policy analyst, both independently gave, almost word for word. In this example, people who felt they were connected to the broader context of the US-North Korea situation, and felt they understood North Korea’s strategic aims and options, reasoned that an actual attack from North Korea was unlikely, and thus discounted the alert.

The reasoning behind these “discounting” stories, I might suggest, is terrible. To assume an alert is false because it does not meet your expectations is completely silly unless you are much better informed about what a real alert would look like than our Lyft driver apparently was. Are the ballistic missile alert system and the tsunami system the same one? Would they use the tsunami alert for a missile alert? Could one system be active and the other sabotaged, malfunctioning, or otherwise not activated? These are big questions! In a real emergency it is not worth betting your life that things aren’t working the way you’d expect them to.

And the “context” justification for not believing it is hubris itself. We only see a portion of the total “context” at any one point. Who knows what happened on the Korean peninsula several hours ago but hasn’t yet made it to your ears? If you’re in Pacific Command (or can contact someone there), sure, you might know enough context to discount such an alert. Otherwise, it’s foolish to do so.

The fallacy of both of these reasons for discounting the alert, as an aside, was made very clear when I visited the Pearl Harbor Memorial. Conventional wisdom prior to Pearl Harbor was that Japan did not pose a major threat to the US, and would not dare attack it.2 A “war scare” between Japan and the US had risen up and dissipated a few months before the actual attack, leading many to think the threat had passed, and even on the day of the attack, many soldiers and radar operators on Hawaii discounted what their own eyes saw because they thought it must be some kind of exercise, giving up any possibility of defense prior to the main attack.

One of the local journalists we talked to had more plausible means to discount the alert as false: he could contact a high-enough ranking member of the Hawaii government. That’s not a bad reason to discount it (though even then, would you bet your life on this official being “in the loop?”), and much of our discussions as a group centered around what the role of journalists ought to be in such a crisis situation, if they had information that was not yet released officially.

“I figured it was probably false, but I went into my bathtub anyway. If I were doing it again, I’d have brought a few beers to pass the time.”

I heard a few people say they understood the “take shelter” message to mean that they should get into their bathtub. I’m not sure where they got that — perhaps the television? I am not sure a bathtub is the best place to be; usually the emergency advice regarding bathtubs is to fill them up with water, so that you have several gallons of potable water in case there is a disruption of service. But anyway, as silly as this story sounds, the guy (a staff member) more or less did the right thing: he wasn’t sure if it was real, but treated it as if it were. (And the beer thing is a good joke until you remember that beer is actually a valid post-nuclear water source!)

“I woke up too late, and I only saw the retraction.”

I liked this one, only because it highlights that an early-morning alert is only going to reach so many people.

“I was sitting in my kitchen, and I had finished a cup of coffee. I thought, ‘I should not have more coffee.’ But then I saw the alert, and I thought, ‘I can have one more cup of coffee.’ So I sat and drank my coffee. I thought it was real! But I am 70. I was OK with it. But my relatives on the mainland called me, to say goodbye. They were crying. But I was OK. Of course I believed it was real — it was on the TV!”

This was my Japanese-American taxi driver (he emigrated from Tokyo in the 1970s) who took me to the airport. I don’t have anything clever to say about his story, but I loved it so much. One more cup of coffee, if that’s what it’s going to be.

The other extremely useful thing about being out here was talking to local journalists. It’s easy to dismiss local journalism — a lot of it is pretty bad, and the consolidation of news sources has made a lot of it less “local” than it used to be. But the ones I met here knew a hell of a lot more about this story than most of the national news sources I read. Eliza Larson, of KITV, was part of the conference the entire time, and her knowledge and perspective were crucial. We also visited the office of Honolulu Civil Beat, and they were also great. (And one of them, not knowing my relation to it, described the NUKEMAP as an “authoritative tool” that they found very useful, which of course I delighted in.)

The alert system used by HI-EMA, per Honolulu Civil Beat. It’s a bad interface, no matter how you slice it. The first option, “BMD False Alarm,” was added after the false alarm incident.

One thing that emerged for me is that the narrative of “what went wrong” is still not quite known. The first draft of the story, which most people believe, is that an employee clicked the wrong button on the alert website. This is absurd enough to be believable, and the “lesson” of it is clear: user interfaces matter, a conclusion that resonated very strongly with the “human factors engineering” analysis that became very popular following its application in the post-mortem of the Three Mile Island accident.

But that turns out not to be what happened, as emerged later. Two different versions have been put out. The “button pusher,” we’ll call him, later told journalists that he, in fact, had not done it accidentally, but that he had been told it was real and he believed it was real. Which turns it into a very different sort of story: one about miscommunication, human error, and a system problem that makes it very easy for an alert to be sent (by a single person), but not to be rescinded.

The other later version, put out by HI-EMA officials, is that the aforementioned “button pusher” was in fact an unreliable, unstable person who had displayed personality problems in the past and totally “shut down” after sending the alert. The “button pusher” disputes this version of events quite vigorously, we were told, and no documentation has been provided by HI-EMA to substantiate this account. If this version is true, the story is about human reliability, along with the aforementioned system problem. (The “button pusher” was fired from HI-EMA shortly after, and was the target of considerable ire from an understandably furious public.)

Both HI-EMA and the “button pusher” have self-interested reasons for preferring their versions of the story, as it shifts the blame considerably. Either way, the system failures remain: a single individual, whether by confusion or by malice, should not be able to send out a false alarm by themselves to thousands of people.3

The Hawaii Emergency Management Agency (HI-EMA) emblem. Tell me this isn’t the most amazing piece of graphic design ever. Civil Defense! Volcano! Tsunami! Hurricane! SHARK TEETH!

The grim irony is that Hawaii was being extremely proactive when it came to the possibility of a ballistic missile threat. They’re not wrong to think that it should be in their conception of the possible risks against them. They appear to have been one of the only statewide emergency management agencies that had worked to reintroduce nuclear weapons threats into their standard alert procedures and drills.4 They set up a system that, by any measure, contained terrible flaws, ones that any outside analyst could have seen.

And we were told that a consequence of this false alarm, aside from the panic, fear, and confusion (of such magnitude that it may have caused at least one heart attack), is that Hawaii seems to have put its ballistic missile alert system on an indefinite hold. Which is understandable, but unfortunate. Because the nuclear threat, including the ballistic missile threat, is still a real one. It will continue to be a real one as long as there are hostile states with nuclear-tipped ballistic missiles — which seems like it might be a very long time indeed. HI-EMA was right, I think, in making ballistic missile threats part of the “threat matrix” of possibilities that they, as the organization tasked with preserving the lives of their citizens, were responsible for addressing. But they also had a responsibility to set up a system in which false positives would be very unlikely, and they utterly failed at that. The consequence is that not only are the residents of Hawaii less prepared than they had previously been for the possibility of a nuclear attack (if you think that is so remote a risk, read Jeffrey Lewis’s novel The 2020 Report, and get back to me), but other state governments are probably going to continue to be shy about taking nuclear risks seriously, for fear of the terrible publicity that comes with getting it wrong.

Dispatched from a mai tai bar at the Honolulu airport, waiting for a red-eye flight. Please chalk up any typos to the mai tai. Expect blog posts somewhat more regularly in 2019. 

  1. Though the flight was pretty good, to be honest. The agency that had set up my ticket had booked me in practically the last seat on the plane, and I was not looking forward to that. But for whatever reason, when I went to check in and get my boarding pass the night before, I was offered a chance to upgrade to first class for only $299. Which is pretty amazing in any circumstances, but for an eleven-hour flight it felt foolish to pass it up. And so I didn’t, and had a very nice flight. The real perk of first class was not the better food and nicer flight attendants — though those were nice — but the fact that my seat turned into a totally flat bed. That was pretty amazing, and my first time being able to experience that on a plane. It makes a huge difference.
  2. Among its many materials, the exhibit prominently featured a racist editorial cartoon by Theodor Geisel — Dr. Seuss — that ridiculed the notion of a Japanese attack. UCSD has a nice collection of his wartime cartoons, and I was particularly struck that he published many on the theme of how unlikely an attack was, with the last published only two days before the surprise attack.
  3. As an aside, I have not been able to figure out what order of magnitude of people received the alert. “Thousands” is conservative. People on islands 100 miles away from Oahu got the alert as well. A Congressional Research Service backgrounder on the incident says that HI-EMA attempted to stop transmission within minutes, so it may not have reached the full intended audience. Some people said that their phone got the alert, but their spouses’ phone did not. “Tens of thousands” is probably conservative. “Hundreds of thousands,” upward of a million, might be a possibility, but I don’t know.
  4. I took a tour of the Massachusetts Emergency Management Agency not long after the alert, and was told by the head of the organization that they did not have any plans whatsoever in place for a nuclear weapon detonation, much less a ballistic missile attack. This was kind of amazing to hear, especially since MEMA is located in an underground bunker constructed in the 1960s to survive a nuclear attack.
Redactions

Cleansing thermonuclear fire

by Alex Wellerstein, published June 29th, 2018

What would it take to turn the world into one big fusion reaction, wiping it clean of life and turning it into a barren rock? Asking for a friend.

Graphic from the 1946 film, “One World Or None,” created by the National Committee on Atomic Information, advocating for the importance of the international control of atomic energy.

One might wonder whether that kind of question presented itself while I was reading the news these days, and one would be entirely correct. But the reason people typically ask this question is in reference to the story that scientists at Los Alamos thought there was a non-zero chance that the Trinity test — the first test of an atomic bomb, in July 1945 — might ignite the atmosphere.

The basic idea is a simple one: if you heat up very light atoms (like hydrogen) to very high temperatures, they’ll race around like mad, and the chances that they’ll collide into each other and undergo nuclear fusion become much greater. If that happens, they’ll release more energy. What if the first burst of an atomic bomb started fusion reactions in the air around it, say between the atoms of oxygen or nitrogen, and those fusion reactions generated enough energy to start more reactions, and so on, across the entire atmosphere?

It’s hard to say how seriously this was taken. It is clear that at one point, Arthur Compton worried about it, and that just the same, several scientists came up with persuasive reasoning to the effect that this could not happen. James Conant, upon feeling the searing heat of the Trinity test, briefly reflected that maybe this rumored thing had, indeed, come to pass:

Then came a burst of white light that seemed to fill the sky and seemed to last for seconds. I had expected a relatively quick and bright flash. The enormity of the light and its length quite stunned me. My instantaneous reaction was that something had gone wrong and that the thermal nuclear [sic] transformational of the atmosphere, once discussed as a possibility and jokingly referred to a few minutes earlier, had actually occurred.

Which does at least tell us that some of those at the test were still joking about it, even up to the last few minutes. Fermi reportedly took bets on whether the bomb would destroy just New Mexico or in fact the entire world, but it was understood as a joke.1

The introduction of the Konopinski, Marvin, and Teller paper of 1946. Filed under: “SCIENCE!”

In August 1946, Emil Konopinski, Cloyd Marvin, and Edward Teller (who else?) wrote up a paper explaining why no detonation on Earth was likely to start an uncontrolled fusion reaction in the atmosphere. It is not clear to me whether this is exactly the logic they used prior to the Trinity detonation, but it is probably of a similar character to it.2 In short, there is only one fusion reaction based on the constituents of the atmosphere that had any probability at all (the nitrogen-nitrogen reaction), and the scientists were able to show that it was not very likely to happen or spread. Even if one assumed that the reaction was much easier to initiate than anyone thought it was likely to be, it wasn’t going to be sustained. The reaction would cool (through a variety of physical mechanisms) faster than it would spread.

This is all a common part of Manhattan Project lore. But I suspect most who have read of this before have not actually read the Konopinski-Marvin-Teller paper to its end, which closes on a less sure-of-themselves note:

There remains the distant possibility that some other less simple mode of burning may maintain itself in the atmosphere.

Even if the reaction is stopped within a sphere of a few hundred meters radius, the resultant earth-shock and the radioactive contamination of the atmosphere might become catastrophic on a world-wide scale.

One may conclude that the arguments of this paper make it unreasonable to expect that the N+N reaction could propagate. An unlimited propagation is even less likely. However, the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable.

That’s not quite as secure as one might desire, considering these scientists were in fact working on developing weapons many thousands of times more powerful than the Trinity device.3

The relevant section of the Manhattan District History (cited below) interestingly links the research into the “Super” hydrogen bomb with the research into whether the atmosphere might be incinerated, which makes sense, though it would be interesting to know how closely linked these questions were.

There is an interesting section in the recently-declassified Manhattan District History that discusses the ignition of the atmosphere problem. It repeats essentially the Konopinski-Marvin-Teller results, and then concludes:

The impossibility of igniting the atmosphere was thus assured by science and common sense. The essential factors in these calculations, the Coulomb forces of the nucleus, are among the best understood phenomena of modern physics. The philosophic possibility of destroying the earth, associated with the theoretical convertibility of mass into energy, remains. The thermonuclear reaction, which is the only method now known by which such a catastrophe could occur, is evidently ruled out. The general stability of matter in the observable universe argues against it. Further knowledge of the nature of the great stellar explosions, novae and supernovae, will throw light on these questions. In the almost complete absence of real knowledge, it is generally believed that the tremendous energy of these explosions is of gravitational rather than nuclear origin.4

Which again is simultaneously reassuring and not reassuring. The footing on which this knowledge was based was… pretty good? But like good scientists they were happy, at least in secret reports, to acknowledge that there might in fact be ways for the planet to be destroyed through nuclear testing that they hadn’t considered. Intellectually honest, but also terrifying.

The ever-relevant XKCD.

This issue came up again in early 1946, prior to the Operation Crossroads nuclear tests, which were to include at least one underwater shot. None other than Nobel Prize-winning physicist Percy Bridgman worried that detonating an atomic bomb under water might ignite a fusion reaction in the water. Bridgman admitted his own ignorance of nuclear physics (his area of expertise was high-pressure physics), but warned that:

Even the best human intellect has not imagination enough to envisage what might happen when we push far into new territory. … To an outsider the tactics of the argument which would justify running even the slightest risk of such a colossal catastrophe appears exceedingly weak.5

Bridgman’s fears weren’t really that the world would be destroyed. He worried more that if the scientists appeared to be cavalier about these things, and it was later made public that their argument for the safety of the tests was based on flimsy evidence, that it would lead to a strong public backlash: “There might be a reaction against science in general which would result in suppression of all scientific freedom and the destruction of science itself.” Bridgman’s views were strong enough that they were forwarded to General Groves, but it isn’t clear whether they resulted in any significant changes (though I wonder if they were the impetus for the write-up of the Konopinski-Marvin-Teller paper; the timing kind of works out, but I don’t know).

There isn’t a lot of evidence that this problem concerned the scientists too much going forward. They had other things on their mind, like building thermonuclear weapons, and it quickly became clear that starting a large fusion reaction with a fission bomb is hard. Which is, in its own way, an answer to the original question: if starting a runaway fusion reaction on purpose is difficult, and requires very specific kinds of arrangements and considerations to get working even on a (relatively) small scale, then starting one in the entire atmosphere is likely to be impossible.

Operation Fishbowl, Shot Checkmate (1962) — a low-yield weapon, but something about its perfect symmetry and the trail of the rocket that put it into the air invokes the idea of a planet turning into a star for me. Source: Los Alamos National Laboratory.

Great — cross that one off the list of possibilities. But it wouldn’t really be science unless they also, eventually, re-framed the question: what conditions would be required if we were to try and turn the entire planet into a thermonuclear bomb? In 1975, a radiation physicist at the University of Chicago, H.C. Dudley, published an article in the Bulletin of the Atomic Scientists warning of the “ultimate catastrophe” of setting the atmosphere on fire. This received several rebuttals and a lot of scorn, including one in the pages of the Bulletin by Hans Bethe, who had previously addressed this question in the Bulletin in 1946. Interestingly, though, Dudley’s main desire — that someone re-run these calculations on a modern computer simulation — did seem to generate a study along these lines at the Lawrence Livermore National Laboratory.6

In 1979, Livermore scientists Thomas A. Weaver and Lowell Wood (the latter appropriately a well-known Edward Teller protege) published a paper on “Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres,” which is a jargony way to ask the question in the title of this blog post. Here’s the abstract:

The basic conditions for the initiation of a nuclear-detonation wave in an atmosphere having plane symmetry (e.g., a thin, layered fluid envelope on a planet or star) are developed. Two classes of such a detonation are identified: those in which the temperature of the plasma is comparable to that of the electromagnetic radiation permeating it, and those in which the temperature of the plasma is much higher. Necessary conditions are developed for the propagation of such detonation waves for an arbitrarily great distance. The contribution of fusion chain reactions to these processes is evaluated. By means of these considerations, it is shown that neither the atmosphere nor oceans of the Earth may be made to undergo propagating nuclear detonation under any circumstances.7

Now if you just read the abstract, you might think it was just another version (with fancier calculations) of the Konopinski-Marvin-Teller paper. And they do rule out conclusively that N+N reactions would ever be energetic enough to be self-propagating. But it is far more! Because unlike Konopinski-Marvin-Teller, it actually focuses on those “necessary conditions”: what would need to be different, if you did want to have a self-propagating reaction?

The answer they found: if the Earth’s oceans had twenty times more deuterium than they actually contain, they could be ignited by a 20 million megaton bomb (which is to say, a bomb with the yield equivalent of 20 teratons of TNT, or a bomb some 200,000 times more powerful than the Tsar Bomba’s full design yield). If we assumed that such a weapon had even a fantastically efficient yield-to-weight ratio like 50 kt/kg, that’s still a device that would weigh around 400,000 metric tons. To put that into perspective, that’s roughly the mass of the Empire State Building.8
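
Since the unit conversions are easy to trip over, here is that arithmetic spelled out as a quick back-of-the-envelope check (my own, not a calculation from the paper):

```javascript
// Back-of-the-envelope mass of the hypothetical ocean-igniting bomb.
var yieldMt = 2e7;                    // 2 x 10^7 megatons (see footnote 8)
var yieldKt = yieldMt * 1000;         // = 2 x 10^10 kilotons
var yieldToWeight = 50;               // assumed "fantastically efficient" kt per kg
var massKg = yieldKt / yieldToWeight; // = 4 x 10^8 kg
console.log(massKg / 1000);           // = 400,000 metric tons
```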

So there you have it — it can be done! You just need to totally change the composition of the oceans, and a nuclear weapon many orders of magnitude more powerful than the gigaton bombs dreamed of by Edward Teller, and then, maybe, you can pull off the cleansing thermonuclear fire experience.

Which is to say, this won’t be how our planet dies. But don’t worry, there are plenty of other plausible alternatives for human self-extinction out there. They just probably won’t be as quick.


I am in the process of finishing my book manuscript, which is the real job of this summer, so most other writing, including blogging, is taking a back seat for a few months while I focus on that. The irreverent title of this post is taken from a recurring theme in the Twitter feed of anthropology grad student Martin “Lick the Bomb” Pfeiffer, whose work you should check out if you haven’t already.

  1. This undergraduate paper by Stanford student Dongwoo Chung, “(The Impossibility of) Lighting Atmospheric Fire,” does a really nice job of reviewing some of the wartime discussions and the scientific issues.
  2. Emil Konopinski, Cloyd Marvin, and Edward Teller, “Ignition of the Atmosphere with Nuclear Bombs,” (14 August 1946), LA-602, Los Alamos National Laboratory. Konopinski and Teller also apparently wrote an unpublished report on the subject in 1943. I have only seen reference to it, as report LA-001 (suspiciously similar to the LA-1 that is the Los Alamos Primer), but have not seen it.
  3. Teller, in October 1945, wrote the following to Enrico Fermi about the possibility of a “Super” detonating the atmosphere, as part of what was essentially a “Frequently Asked Questions” about the H-bomb: “Careful considerations and calculations have shown that there is not the remotest possibility of such an event [ignition of the atmosphere]. The concentration of energy encountered in the super bomb is not greater than that of the atomic bomb. In my opinion the risks were greater when the first atomic bomb was tested, because our conclusions were based at that time on longer extrapolations from known facts. The danger of the super bomb does not lie in physical nature but in human behavior.” What I find most interesting about this is his comment about Trinity, though Teller’s rhetorical point is an obvious one (overstate the Trinity uncertainty after the fact in order to emphasize his certainty at the present). Edward Teller to Enrico Fermi (31 October 1945), Harrison-Bundy Files Relating to the Development of the Atomic Bomb, 1942-1946, microfilm publication M1108 (Washington, D.C.: National Archives and Records Administration, 1980), Roll 6, Target 5, Folder 76, “Interim Committee — Scientific Panel.”
  4. Manhattan District History, Book 8, Volume 2 (“Los Alamos – Technical”), paragraph 1.50.
  5. Percy W. Bridgman to Hans Bethe, forwarded by Norris Bradbury to Leslie Groves via TWX (13 March 1946), copy in the Nuclear Testing Archive, Las Vegas, NV, document NV0128609.
  6. H.C. Dudley, “The Ultimate Catastrophe,” Bulletin of the Atomic Scientists (November 1975), 21; Hans Bethe, “Can Air or Water Be Exploded?,” Bulletin of the Atomic Scientists 1, no. 7 (15 March 1946), 2; Hans Bethe, “Ultimate Catastrophe?,” Bulletin of the Atomic Scientists 32, no. 6 (1976), 36-37; Frank von Hippel, “Taxes Credulity (Letter to the Editor),” Bulletin of the Atomic Scientists (January 1976), 2.
  7. Thomas A. Weaver and Lowell Wood, “Necessary conditions for the initiation and propagation of nuclear-detonation waves in plane atmospheres,” Physical Review A 20, no. 1 (1 July 1979), 316-328. DOI: https://doi.org/10.1103/PhysRevA.20.316.
  8. Specifically, they conclude it would take a 2 x 10^7 Mt energy release, which they call “fantastic,” to ignite an ocean of 1:300 (instead of the actual 1:6,000) concentration of deuterium. As an aside, however, the collision event that created the Chicxulub Crater (and killed the dinosaurs, etc.) is estimated to have released around 5 x 10^23 J, which translates into about 120 million megatons of TNT. So that’s not a totally unreasonable energy release for a planet to encounter over the course of its existence — just not from nuclear weapons.
News and Notes | Visions

A View from the Deep

by Alex Wellerstein, published May 10th, 2018

One of the several projects that has been keeping me busy for the past two years (!) has finally come to fruition. I haven’t talked about it on here much, but I’ve been a co-curator at the Intrepid Sea, Air, and Space Museum, in New York City, helping develop a new exhibit about the submarine USS Growler. The exhibit, A View From the Deep: The Submarine Growler and the Cold War, opens to the public on May 11.

USS Growler exhibit at the Intrepid

I’ve worked with museums a bit in the past, but never anything quite as intensive and comprehensive as this job. The Intrepid has had the Growler submarine since 1988, and a somewhat small exhibit had been developed to serve as a queuing area for people who wanted to go aboard the submarine. But since this year is the 60th anniversary of the commissioning of the boat, as well as the 75th anniversary of the commissioning of the USS Intrepid, the aircraft carrier that serves as the main space of the museum, it was decided that Growler deserved a new, far more comprehensive exhibit dedicated to it.

What’s interesting about the Growler? The bare basics: The USS Growler is the only surviving example of the Grayback class of submarine, which was the first nuclear-armed submarine class that the United States fielded. Its deployment was relatively short (1958-1964), in part because it was extremely transitional technology. The Growler was essentially a diesel attack submarine that was modified (by adding some awkward hangars onto its nose) to carry the Regulus I nuclear-armed cruise missile, and it ran deterrence missions in the Pacific, near the Kamchatka peninsula (its target was a Soviet military base at Petropavlovsk).

As a diesel sub, its capabilities were pretty limited. It could stay underwater for the duration of its run, through the use of its snorkel, but it couldn’t dive deep for very long. The Regulus missiles had extreme limitations: they could only be launched from a surfaced submarine, and had very limited range on account of their guidance systems (they required active radar guidance all the way to their targets, and the guidance system only had a range of around 225 nautical miles). It was always seen as a partial solution, an entry point for the Navy’s foray into nuclear weapons.

The USS Growler on a full-speed trial run, November 1958. The bulbous bow contains the hangars that held the Regulus missiles. The Growler was initially designed as an attack submarine, and the hangars were a modification to turn it into a cruise missile submarine. Source: NARA Still Pictures, College Park, MD.

So this was not an ideal boat by any means — in a war situation, it would need to surface, ready the missile to launch (in whatever conditions the sea was giving it), and then, during the entire period that the sub-sonic missile made its way to its target, it would be effectively broadcasting its position to whoever happened to be listening. And here’s a real bonus: if a Soviet plane or boat happened to destroy the Growler while the missile was in flight, they would be effectively disabling the missile. So unsurprisingly many on the Growler saw the use of their most potent weapon as a sort of personal suicide pact — perhaps an apt metaphor for the nuclear age in general.

Given these limitations, and the fact that the Grayback class was phased out in favor of the much more useful Polaris submarines (which were nuclear powered, and could fire ballistic missiles while submerged), it’s easy to ignore them. But as historians of technology like to emphasize, we often learn as much from “failed” technologies as we do from “successful” ones. The Grayback class of submarines was seen as temporary. It was the US Navy’s first real foray into an underwater nuclear capability, designed to be fielded fast. The sub and the missile both reflect this expediency to their core.

The exhibit works not only to explain the development and capabilities of the submarine and the missile, but also to situate them within the broader context of the early Cold War. Anyone who attends the exhibit ought to see my intellectual fingerprints all over it: it is an exhibit about the inseparability of technical developments from their political and historical contexts.

Regulus missile profile, July 1957. The censored word (the little line of dots) is “Atomic,” as a differently-redacted version indicates. At the time, the missile was planned to carry the W-5 warhead; it was later modified to carry the thermonuclear W-27 warhead. Source: Office of the Secretary of Defense, “The Guided Missile Program” (July 1957), Eisenhower Library, copy from GaleNet Declassified Documents Reference System.

The Intrepid Museum has a great team of exhibit curators and staff (a special shout-out to my main collaborators Elaine Charnov, Jessica Williams, Chris Malanson, Kyle Shepard, and Gerrie Bay Hall). And, as an aside, their offices are inside the aircraft carrier, deep within the steel hallways that are inaccessible to the public. This makes sense in retrospect (museum space is always limited, so of course the offices would be kept within the cavernous carrier), but it hadn’t occurred to me before I saw them. It’s a pretty unusual work environment from a physical standpoint — steep stairs, enough steel to kill your cell reception completely, unlabeled and winding passages, and very unusual acoustics as noises move through the entire hull. In my day-to-day job I don’t usually work on large teams (i.e., more than three or four people); for a museum the size of the Intrepid, there were maybe half a dozen people I regularly talked with, and another half dozen whom I occasionally intersected with.

My job was to help with the broader conceptualization, to aid in the overall research (including a trip to NARA to digitize the Growler’s “muster rolls,” which gave us a record of nearly everyone who served on the boat), to draft much of the exhibit text (which of course had to be carved down quite a bit from my word-count-busting original drafts), and to help choose and create the visualizations. I also put the museum in touch with my colleagues at the Stevens SCENE Lab, who developed some pretty interesting audio-haptic interactives for the exhibit, including a virtual sonar station and a magical vibrating box that gives you a sense of what it would have sounded and felt like to be on an operating submarine. We wanted to make sure that people who couldn’t, or didn’t want to, go aboard the submarine itself could still get some sense of its lived experience from the exhibit (the submarine is understandably cramped and features tiny hatches every few feet, so people with mobility issues may not be able to go aboard it).

More generally, in the exhibit we tried to situate Growler not only within a broader past (going back to the development of atomic bombs, cruise missiles, and modern submarines in World War II), but also within its future (the creation and evolution of the nuclear triad). It is an exhibit that tries to do a lot of intellectual work (and if it gets reviewed as trying to do too much, well, you know who to blame) in a subtle way, painting a picture (one that the readership of my blog is probably more familiar with than the average person) of the kinds of forces and mindsets that were at work in the Cold War, and of the way in which politics, technology, and historical context mutually affected one another. There’s no simple good/bad message here; we’re hoping that visitors will leave with new questions about the history of nuclear weapons and the Cold War lodged in their heads.

Malformed muster roll for the USS Growler from 1963, courtesy of NARA. Not much you can do with that other than admire its strange beauty.

I also adapted a version of the NUKEMAP for use on museum exhibit touch screens, which I’m pretty happy with — aside from its aesthetics, which I think look pretty good, it has some clever technical bits, including some nice features to mitigate the loss-of-connectivity issues that will inevitably come up (a sketch of the general approach is below). I’m especially pleased that from the very beginning the museum has been on board with making sure that we talk about what the physical and human consequences of using the Regulus missile would have been. It would have been an easy thing to gloss over (because it can make people uncomfortable), but everyone agreed that you really couldn’t talk about this technology honestly without talking about what it would actually do if it were used.1
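To give a flavor of what that kind of connectivity mitigation can look like, here is a minimal sketch in plain Javascript. This is not the actual NUKEMAP Museum code, and all of the names in it are hypothetical; it just illustrates one common approach for a kiosk: cache map tiles as they are fetched, and fall back to a locally generated placeholder tile when the network drops, so the screen degrades gracefully instead of showing broken images.

```javascript
// Hypothetical sketch of offline-tolerant tile loading for a kiosk map.
// Not the actual NUKEMAP Museum code; just one plausible approach.

const tileCache = new Map(); // tile URL -> object URL of a fetched tile

// A locally generated gray tile, used when the network is down, so the
// map shows a neutral square instead of a broken-image hole.
const PLACEHOLDER_TILE = (() => {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 256;
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = '#ddd';
  ctx.fillRect(0, 0, 256, 256);
  return canvas.toDataURL(); // a data: URI; needs no network at all
})();

// Resolve a tile URL to something an <img> src can display, preferring
// the cache, then the network, then the placeholder.
async function tileSrc(url) {
  if (tileCache.has(url)) return tileCache.get(url);
  try {
    const resp = await fetch(url);
    if (!resp.ok) throw new Error(`HTTP ${resp.status}`);
    const objectUrl = URL.createObjectURL(await resp.blob());
    tileCache.set(url, objectUrl); // survives later connectivity drops
    return objectUrl;
  } catch (err) {
    return PLACEHOLDER_TILE; // offline: degrade gracefully, don't blank out
  }
}
```

In a real installation you would presumably also pre-seed the cache with the tiles for the exhibit’s default view, so the map still works if the machine boots up with no network at all.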

From a research perspective, the most rewarding thing was finally, after a lot of searching, finding a photograph that gave me an idea of what the W-27 warhead looked like. Warhead shapes tend to be kept pretty close by the government, because they are frequently revealing of the internal mechanisms, and the W-27 was especially tricky because it was produced in unusually limited numbers. It was a “conversion kit” for the B-27 nuclear bomb, used in the end only on the Regulus, and so only 20 of them were ever produced. The breakthrough was realizing that Sandia National Laboratories had produced many reports investigating the structural integrity not of the warhead itself, but of the carts and containers that would transport it. When they ran these tests they would use a dummy mockup of the same shape and size as the warhead, thus providing far more pictures than I needed. In the end, like most secrets, the warhead’s shape is mostly uninteresting on its own terms (it looks like a thermos with a firing unit bolted to the end of it), but there’s something so satisfying in being able to see it, and to more or less figure out how it probably fit inside the Regulus.

A “dummy warhead” of the W-27, which gives one a pretty good idea of what it looked like. I suspect the electronics and arming unit are contained in the part labeled “FWD. END” here. Source: “Drop Test of the H-525 Without Shear Pads,” Sandia National Laboratories (July 1958).

As a collaboration, I thought it was extremely fruitful, and it was an interesting and exciting challenge to see how I could translate my broader interests in the history of nuclear weapons and the Cold War into something that would be accessible and intellectually stimulating to a general museum audience.

The exhibit will be running for at least the next year, but is slated to be more or less permanent (things get complicated for long-term planning when your exhibit is in a building on top of a New York City pier, I have gathered), so if you’re in Manhattan, please feel free to stop by and take a look. It’s not just a first-generation nuclear-armed submarine that you can walk through; it’s also a deep dive (pun intended) into the Cold War context that led to the creation and use of such a weapon, and a meditation on the peril and value of imperfect solutions.

  1. If you work in a museum and think a NUKEMAP interactive would be useful, get in touch. The new “NUKEMAP Museum” framework is pretty flexible and could be adapted to a lot of different exhibits. []