Posts Tagged ‘NUKEMAP’

News and Notes

The Reinventing Civil Defense project

Thursday, July 13th, 2017

This has been one busy academic year for me, and the non-stop news cycle has not helped matters, as is painfully obvious from my decreased production of blog posts. Don't worry — I'm not going anywhere, and I will make up some of the difference in August when I visit Japan for the first time, in time for the Hiroshima bombing anniversary. Below is a description of one of the projects that has been occupying my time these last many months.

I am extremely pleased to be able to announce one bit of “secret” work that is finally going public: a sizable grant that I am involved with has been chosen for funding by the Carnegie Corporation of New York. It is one of 11 projects funded by a joint effort from the Carnegie Corporation of New York and the John D. and Catherine T. MacArthur Foundation “to support projects aimed at reducing nuclear risk through innovative and solutions-oriented approaches.”

The project is called “Reinventing Civil Defense,” and it’s been fun to tell people about the proposal and watch their eyes get very wide at the name. That’s intentional. When CCNY and MacArthur put out their call for proposals, they said they wanted new ideas, things from outside the box. So we decided to try and go pretty big in that direction.

Bendix dosimeters, for tracking personal radiation exposure. The two Civil Defense photos accompanying this post were both taken by me in order to illustrate the Reinventing Civil Defense website, from various Cold War bits and pieces I have lying around the apartment. I wanted something that nodded at the Civil Defense imagery we are familiar with, but also indicated that this was going to be a new take on it.

The “we” here is our team of co-PIs at the Stevens Institute of Technology: myself, Kristyn Karl, and Julie Pullen. Together, we created and will run the Reinventing Civil Defense project, with key contributions from Ed Friedman. Kristyn Karl is a political psychologist who works across the hall from me, in the College of Arts and Letters, and whose work involves studying how people evaluate risk, especially in response to communications about it. She has researched the ways in which people evaluate different types of reporting about terrorism, and how that impacts their emotional responses and subsequent policy support (or lack thereof). Julie Pullen is an associate professor of oceanography and meteorology in the School of Engineering and Science whom I have known since I came to Stevens; she has done a lot of research into port and maritime security in the New York City area, and has studied technical issues relating to nuclear terrorism. Ed Friedman is an emeritus professor of physics, and one of the reasons I am at Stevens in the first place: it was Ed whose initial interest in my work brought me here to give a talk, at which point I not only realized where Hoboken was (I grew up on the West Coast, so my East Coast geography was pretty poor), but learned there was a job search going on in my field. Ed has had one of those lives that looks so jam-packed with interesting and important work (as a sample, he worked in Afghanistan for many years before the Soviet-Afghan war, teaching at the engineering school in Kabul) that no matter what one accomplishes, one feels like one has done almost nothing; but he is a generous and concerned scholar who is deeply interested in matters relating to nuclear weapons and terrorism.

Kristyn Karl, Julie Pullen, myself (Alex Wellerstein), taking a somewhat awkward picture (the weather was not entirely behaving) at Castle Point Lookout at the Stevens Institute of Technology.

Ever since I took the job at Stevens, Ed, Julie, and I had been talking about ways in which we could leverage the essential principles and success behind something like the NUKEMAP in a way that would have even wider impact. This led to a lot of discussions about how digital tools might produce different ways to think about science and risk communication, beyond the more traditionally “didactic” modes associated with formal education. Study after study has shown that didactic, lecturing approaches to getting information across only work in a very limited way — and as a teacher, it is clear to me that they are extremely inefficient even within the confines of a formal educational setting (e.g., with people who are taking out massive loans with the idea of getting an education). If your goal is to affect a much broader spectrum of people, about pressing policy issues, you have to find another way. Kristyn's work on science communication and risk perception was a natural fit with these interests, and so we brought her into these discussions not long after she was hired at Stevens.

Around the time of the Carnegie/MacArthur request for proposals (October 2016), I had been thinking about Civil Defense quite a lot. Ed and I were co-teaching a seminar on nuclear policy topics, and had dedicated a week to the subject, having the students (and ourselves) read various Civil Defense texts and critiques from a few different “eras” of US Civil Defense work. I had looked into a lot of these issues when designing the codes for the NUKEMAP (which are still being worked on, as an aside; there will be some interesting new features added in the very near future), and it seemed like there was a lot of discussion of this issue “in the air” then (and even more since). And, when I lived in DC, I had some very productive discussions with my friend Ed Geist (now at the RAND Corporation; we recently co-authored an article on the Soviet H-bomb project in Physics Today), who wrote his dissertation on US and Soviet Civil Defense policies. The general feeling I had about Civil Defense was that some of it was nonsense (the quick evacuation of big urban centers always seemed infeasible), some of it certainly expressed a blasé approach to mass destruction, but it was not as crazy as the anti-nuclear activists often made it out to be, and indeed many of its core approaches have been integrated into preparation for other kinds of major hazards (Civil Defense eventually morphed into Emergency Management, which takes a somewhat different approach with regards to engaging the general public). It seemed highly politicized and polarized, by both the anti-nuclear and pro-nuclear folks (having Edward Teller as one of its chief advocates was not going to “bridge that gap,” either).

A Victoreen radiation detector. The units on this one are pretty high — it's not meant to budge unless you're in a bit of trouble. To get it to read something other than zero, I did a circuit check and let it work its way back down again.

So when I was thinking about the Carnegie/MacArthur request, suddenly this idea flashed in my brain (in the way of all of my ideas, both good and bad, it just appeared all at once): what if Civil Defense wasn’t politicized and wasn’t dumb? What if you approached it in a truly even-handed, non-partisan way? What if you thought very seriously about the deficiencies of Cold War Civil Defense, notably its approach to messaging, and thought about what that would look like in the early-21st century, where the more probable nuclear threat is not the multi-megaton, thousands-of-targets exchange of the late-20th century, but single detonations by terrorists or so-called “rogue states”? What would that look like? What would it look like if your approach was not the government producing lectures and pamphlets (because American trust in government has notably plummeted from the late 1960s onward), but non-governmental organizations producing digital products and tools?

And, of course, what would be gained from this approach? Potentially much, for people of all political stripes. Those who believe that Civil Defense should be embraced because it would lessen the consequences of a nuclear detonation (and if risk is probability times consequences, then you are reducing the risk by doing this) would be pleased by the reduction of preventable casualties that might come with such an effort. Those who are more concerned with galvanizing public opinion about nuclear weapons would, perhaps, be pleased that the lived experience of nuclear risk — nuclear salience — would be increased, in a way that it has not been since the height of the Cold War. It is my belief, and I will have a piece about this coming out pretty soon, that the elimination of Cold War Civil Defense education ironically allowed nuclear weapons to pass out of public awareness, which was certainly not what the people opposed to Civil Defense were interested in.

The logo of the Federal Civil Defense Administration, from the side of the aforementioned Victoreen detector.

And on top of all that, this kind of project would create an opportunity to explore new kinds of risk communication and messaging (with new media, like Virtual Reality), and to test their effectiveness (the sort of thing someone like Kristyn designs experiments to do). So at its most ambitious, this project is about potentially altering American nuclear culture (and maybe non-American, ideally, but you’ve got to start somewhere), and potentially facilitating the means to prevent thousands of casualties in the event of a nuclear detonation. And even if those very lofty goals cannot be achieved (changing culture is obviously a very difficult thing!), it could still be a catalyst for a lot of interesting prototypes. Much of our budget is earmarked for sub-awards that will generate “deliverables” meant to be focal points for these conversations about nuclear salience (think VR apps, games, graphic novels, along with more traditional output like studies and whitepapers and reports), and for two workshops where we will hash over these questions and come up with some recommendations (the workshops are invitation-only, but if you are interested please get in touch and we'll see what we can accommodate within our space and budget).

Ed, Julie, Kristyn, and I bounced this idea around, to great effect. The germ evolved into a full-fledged proposal. We also decided that we would need some kind of Advisory Committee to help make sure that we weren’t barking up the wrong tree, and to give us perspectives that a bunch of engineering-school professors might not have. You can see the list of the Advisory Committee members on our project website — I’m pretty amazed at the people we were able to convince to agree to be part of this project, and just getting them all together in a room, talking about this issue, will no doubt be an interesting conversation.

"Fallout protection: What to know and do about nuclear attack," was a pamphlet created in 1961, intending to spread the word about fallout shelters and radiation protection. Aside from having some pretty interesting graphics (which always brings things to my attention), and being printed in apparently huge numbers, it is notable to me in part because it was one of the few Civil Defense messaging techniques that was actually studied by social scientists at the time, to see how it changed people's views and understanding on fallout. You can buy well-preserved originals of it on eBay for a song.

Anyway, after various rounds of peer review and discussion, we finally got notice that we were funded, though we had to keep it under wraps until all of the coordination between the foundations was completed. I am pleased to be able to reveal it all now, at long last, and to promise that you will be seeing many interesting things coming out of this work in the near future. And if you know of someone whose work might fit into the category of a good project to fund, please send them the website link and tell them to be in touch (or get in touch yourself, if the person is you) — we are going to try and make the application/funding process as streamlined as possible, with a minimum amount of red tape, if we can.

To explicitly invoke Civil Defense — with full recognition of its controversy, its complications, and its ups and downs — was, as I indicated earlier, a very deliberate move. I’m well aware it is a polarizing subject, and the looks my colleagues and friends have given me when I tell them the name of what we’re working on have been... interesting. But I think that approaching nuclear risk through this lens will be productive and stimulating, and I also think we live in a moment when it is time to re-think, and re-invent, our approaches to these issues. And I’m grateful the funders and our peer reviewers agreed!

I just want to finish this note by thanking my three collaborators (Ed, Julie, Kristyn), the Carnegie Corporation of New York (esp. Carl Robichaud), the members of our all-star Advisory Committee who agreed to have their names attached to such an unusual venture, the N Square Collaborative (esp. Erika Gregory, whose efforts at getting nuclear people to network outside of their normal groups are deeply reflected in the makeup of our Advisory Committee and our approach in general),  Alex Glaser at Princeton (whose team also got one of the grants, and who helpfully shared ideas and thoughts with me during the process), and my ever-supportive Dean, Kelland Thomas, who is not just an impressively capable administrator, but has some pretty impressive musical chops.

Visions

NUKEMAP at 5 years

Friday, February 3rd, 2017

Five years ago today I introduced the NUKEMAP. It feels practically like yesterday — how fast that has flown! I occasionally get college students, not even brand new ones, who tell me that they used it in high school to do reports. That makes me feel... well, like I've contributed something, along with feeling old. So that's not bad. I've been behind on posting for a while now, and am behind on several things at the moment (lots of irons in the fire, plus the debilitating power of a news cycle that seems to change by the minute), but I wanted to put up something about the NUKEMAP.

NUKEMAP and NUKEMAP3D page views, exported from Google Analytics and cleaned up a bit, with a few of the "known" moments of virality indicated. Note how the "baseline" has steadily increased over time.

Some statistics: NUKEMAP has been the host of over 99 million virtual detonations, according to its internal logs. Every detonation, except for ones where people have opted out of logging, is logged. As I've said before, I don't record enough information for the logs to identify anyone, but it's interesting to see things like where people nuke, and what they do with the tool. According to Google Analytics, there have been (as of this checking) over 25 million pageviews, over 20 million of those unique pageviews (i.e., not people coming back and using it multiple times in one session). The usage of the site predictably flares up in certain moments of "virality" (for the 70th anniversary of Hiroshima, over 500,000 people used it over two days), and there are still sharp spikes of heavy traffic every few months. More interesting and important to me is that the site's "slow days" are now not so slow. When it started, a "slow day" was a few thousand people using it. Today, it's more like 15,000-20,000 people using it. And, for the most part, people are really using it: the average time on page is 5 minutes, which I think is pretty healthy for a web visualization used by tens of thousands of people a day. That means people are doing more than just clicking and glancing — they're actually trying things out.

NUKEMAP3D is, for the moment, moribund. Google unceremoniously discontinued support for the Google Earth Web Plugin (the code on their end is just kaput), and no adequate substitute has yet emerged. There are some ways of crudely rendering a 3D planet on the web, but none that support buildings and skylines the way Google Earth did, and that is the whole point of NUKEMAP3D. However, I am developing a temporary substitute which is almost ready to roll out: it will allow you to export any NUKEMAP settings to a KMZ file which you can open in the standalone Google Earth program, and it will support mushroom clouds among other interesting features.

"Alas, poor NUKEMAP3D! I knew him, Horatio..." Don't worry, NUKEMAP3D isn't really dead, just waiting for better circumstances...

Some reflections: I still remain surprised that NUKEMAP has been as popular as it has. The idea of drawing concentric circles over a map is not a new one, and mine was not even the first web version. Heck, it wasn't even the first web version for me — in 2005 or so I made a terribly crude one using MapQuest (remember them?) and PHP, and it wouldn't have been sustainable to use (it literally used PHP to draw circles over static images from MapQuest, so it was very server-intensive by the standards of the day). But I did try to make a version that was easier to use than any of the others that were out there, and gave more intuitive, useful information. And when I upgraded NUKEMAP in the summer of 2013, I really did think it was contributing new possibilities: much more flexible detonation options, casualty estimates, a fallout model.

I still give talks about NUKEMAP all the time, whether to large groups (I was on a panel with Noam Chomsky a few years ago, talking about NUKEMAP), or to individual reporters (I did another interview on it just yesterday), or to small groups of students (a few weeks ago I Skyped into a high school class to talk about it, how it was made, and why these students should not think of it as something beyond their capabilities to put together; that is the sort of thing I don't mind doing if I can make the time for it). I teach a course regularly ("Visualizing Society," a sort of anarchistic data visualization/science and technology studies course) where I show students how to build NUKEMAP-like applications for other sorts of social phenomena. I still make updates and plans for updates to it: there are several projects in the works, including "refreshing" the interface a bit (don't worry, it won't end up looking painfully "trendy"; the blog could probably use a refresh, too), translating it into other languages (which requires more back-end coding than you might expect), and adding new substantive features (I have almost put the final touches on a nuclear burning model and better support for multiple detonations).

For me, the "holy grail" would be something that would let you see something like the Defense Civil Preparedness Agency made in 1973: a "personalized" view of what different damage looked like, from the street level. The technology for this isn't quite here yet, but it's not that far away, either.

For me, the "holy grail" would be something that would let you see something like the Defense Civil Preparedness Agency made in 1973: a "personalized" view of what different damage looked like, from the street level. The technology for this isn't quite here yet, but it's not that far away, either.

I have a very long "wish list" of things that would be interesting to add: EMP features, a dynamic (time-sensitive) fallout model, support for the effect of terrain or dampening by buildings, and so forth. I do have some students who occasionally work for me, especially in the summer, on aspects of these issues, and some of this work may eventually make it into future versions of the NUKEMAP. I'm also interested in translating the NUKEMAP concept — this "personalizing" of nuclear weapons effects — into non-web domains as well. The main difficulty here is time: NUKEMAP is still a mostly one-man operation (imagine me in the salt mines, toiling away at Javascript), and this one man is (to his delight) admirably busy with a lot of things. I am very positively supported in this work by my university, I should say, and the College of Arts and Letters at the Stevens Institute of Technology has been paying the ever-increasing fees associated with running a popular website since I got here, and encouraging me to do even more with it.

I suppose one thing that I'm grateful for is that I'm not yet even slightly bored with any of it — I still find talking about it interesting, and I still find it a model of what science communication might look like in our present age. I strongly believe, and will evangelize to anyone who asks me about it (as many have found, probably without realizing what they were getting into), that there is something different about providing a sort of "simulation" to a user and saying, well, you figure out how this works, as opposed to a more didactic mode of education like lecturing. This has strong shades of "active learning," but I'm not just talking about an approach to the classroom. One nice thing about tools like NUKEMAP is that I can see (through referring links) how people are using them. My favorite example, and this comes up all the time, is when people use it to argue with other people on the Internet. Someone will say, wouldn't a nuclear bomb do X? And someone else says, well, the NUKEMAP says it will be more like Y. And there's this kind of "calibration" of understanding, as I think of it, that starts to narrow down what these weapons do and don't do. (And it goes both ways: most people think they are more powerful than they are, but some think they are less powerful.) The NUKEMAP model, as I discuss in its FAQ, isn't perfect by any means: in some circumstances it probably overestimates the effects (by not taking into account a lot of local variables), in others it probably underestimates them, and the "real world" is much more chaotic than a simple model that can run in your browser can account for, no doubt. But it helps to concretize the experience, the order of magnitude. I think there's a lot of value in that, when we're talking about something so removed from everyday human experience (thank goodness) as a nuclear weapon detonation.

And I think this is a model we need to really do more to export to other domains: nukes are one thing in our society that people have trouble really understanding on an intuitive level, but there are plenty more. This is what my "Visualizing Society" class is all about, at its core: finding ways to make interactive data visualizations or simulations that shed light on complex real-world issues. The technical bar for doing these things is lower than most people realize; if I can teach undergraduates (very good and often technically-inclined undergraduates, to be sure, but often ones with no coding experience) the basics of this over the course of a semester, then it can't be that hard.

The original "NUKEMAP" — Hiroshima, before and after, from the view of a nuclear bombardier.

The original "NUKEMAP" — Hiroshima, before and after, from the view of a nuclear bombardier.

My main frustration with NUKEMAP as a communication tool is that the top-down, concentric-circles approach is the view of the military planner. It's the view of the nuclear targeteer, or, as a friend and collaborator put it earlier this week, it's the view of real estate. It's not the view of the person on the ground, it's not the view of the survivor, it's not the view of the victim. NUKEMAP3D did provide some aspects of that, but the Google Earth plugin, for all its communicative benefits, was clunky to use (the 3D interface was not straightforward), required a special installation, and was never as popular as the regular NUKEMAP. (I was, however, still impressed that some 3 million people used it over its lifetime.) I'm hoping that some future projects I have in mind (no spoilers, sorry) will address these issues more directly and more intensely.

Anyway, more is on the horizon, as ever, and it is just a matter of figuring out how to get it all done. More NUKEMAP, more NUKEMAP-like creations, more work. I'm grateful for NUKEMAP: what started out as literally a two-day coding job (one resting, of course, on a decade of coding experience, and even some actual code that I had written a long time ago, to be sure) has turned into something of a guiding idea for a career. It definitely increased the popularity of my blog (whose traffic is admirably high for an academic, despite the fact that I have been greatly remiss in updating it lately), and became a selling-point for the kinds of hybrid technical-historical-analytical projects that I never knew I had wanted to spend my life working on (though I did have some inklings). Anyway, much more is coming. When I go silent, don't think, "what's happened to him?" Instead, think, "what's he getting ready for us, next?" There's a lot in the pipeline.

Meditations

Hiroshima and Nagasaki at 70

Friday, August 21st, 2015

This month marked the 70th anniversary of the atomic bombings of Hiroshima and Nagasaki, and the cessation of hostilities in World War II. Anniversaries are interesting times to test the cultural waters, to see how events get remembered and talked about. I was exceptionally busy this summer, doing my part to try to participate in the discourse about these events. In case you missed them and wished you had not, here are a few of my appearances:

I also published a second blog post with the New Yorker on the often-overlooked second use of the atomic bomb: "Nagasaki: The Last Bomb." I am proud of it as a piece of writing, as I was really trying to pull off something deliberate and subtle with it, and feel that I somewhat accomplished that.

On this latter piece, I would also like to say that very little of what I wrote would come as a surprise to historians, though the particular arrangement of Nagasaki-as-JANCFU (that is, with an emphasis on the less-than-textbook aspects of the operation, as a herald of the later chaotic possibilities of the nuclear age) is usually under-emphasized. We tend to lump Hiroshima and Nagasaki together when we talk about the atomic bombings during World War II, and I think they should probably be separated out a bit in terms of how we regard them. The first use of the bomb, at Hiroshima, was in many ways a very straightforward affair, both in terms of the strategic and ethical considerations, and the tactical operation. Whether one agrees with the strategic and ethical considerations is a separate matter, of course, but a lot of thought went into Hiroshima as a target, and into the first use of the bomb. Nagasaki, by contrast, was less straightforward on all counts — less thought-out, less justified, and very nearly a tactical blunder. For me, it reflects the very real dangers that can occur when human judgment gets mixed with the extremely high stakes that come with weapons as powerful as these. Any bomber crew can have a mishap on a mission, but when that mission is nuclear-armed, the potential consequences multiply.

The one notable exception to the "very little would come as a surprise to historians" bit in this piece is that Nagasaki was never put on the "reserved" list. For whatever reason, the idea that both Hiroshima and Nagasaki were "reserved" from conventional bombing is very commonly repeated, but it is just not true. The final "reserved" list contained only Kyoto, Hiroshima, Kokura, and Niigata. Aside from the fact that no documentation exists of Nagasaki being put on the list (whereas we do have such documentation for the others), we also have the documentation actively rescinding the "reserved" status for Hiroshima, Kokura, and Niigata, so that they could become formal atomic targets.1

Detail from a damage map of Nagasaki, produced by the United States Strategic Bombing Survey, 1946. I have the original of this in my possession. I find this particular piece of the map quite valuable to examine up close — one gets a sense of the nature of the area around "Ground Zero" very acutely when examining it. There were war plants to the north and south of the detonation point, but mostly the labeled structures are explicitly, painfully civilian (schools, hospitals, prisons). Click to enlarge. Here is a not-great photo of the whole map, to compare it with, and here is a detail of the legend. At some point, when finances allow, I will get this framed for my office, but it is quite large and not a cheap endeavor.

John Coster-Mullen's book provided a lot of documents and details about the bombing run. One thing I appreciate about John is his dedication to documentation, even though his views on the meaning of the history are not always the same as mine. I thoroughly believe that rational people can look at the same facts and come up with different narratives and interpretations — the trick, of course, is to make sure you are at least getting the facts right.2

It would be interesting at some point for someone to do a scholarly analysis of the popular discourse surrounding each decade of anniversaries since the bombs were dropped. 1955 was a fairly raw time, right after McCarthyism had peaked and the hydrogen bomb had been developed. 1965 marked an outpouring of new books and revelations from those involved in the bomb project, enabled by new declassifications (allowed, in part, because of the fostering of a civilian nuclear industry) and the fact that some of the major participants (like Groves) were still alive. I have no distinct impressions of 1975 being a major anniversary year, but 1985 resulted in a lot of hand-wringing about the relationship between the birth of the nuclear age and the nuclear fears of the 1980s. 1995, of course, was the first post-Cold War anniversary and one of the "hottest" years of controversy, catalyzing around the Smithsonian's Enola Gay exhibit and the "culture wars" of the mid-Clinton administration. We are still dealing with the hyper-polarization of the narratives of the atomic bombings that became really prominent in the mid-1990s — where there were only two options available, an orthodox/reactionary view or a critical/revisionist view. The 2005 anniversary did not make a large impression on me at the time, and seemed muted in comparison with 1995 (perhaps a good thing), except for the fact that some very noteworthy scholarship made its appearance to coincide with it.

A small sampling of some of the international press coverage of the NUKEMAP around the Hiroshima anniversary.

And what of 2015? There were, of course, many stories about the bombings. Nagasaki got a better representation in the discourse than usual, in no small part because Susan Southard's Nagasaki: Life After Nuclear War received heavy promotion. (I have not read it yet.) The general discussion seemed less polarized than it has been in the past, though I did see a fair share of hand-wringing and defending editorials pop up on my Google Alerts feed. I suspect that anniversaries from this point forward will be somewhat more interesting and reflective than those in the recent past, in part because of the declining influence of American World War II veterans, who were such a strong force in the more recent ones. My (perhaps overly idealistic) hope is that our narratives of the bombings can settle into something more historically informed, more quietly reflective, and less keyed to contemporary politics than in the past.

For my part, I was impressed by the number of people online who were interested in re-creating Hiroshima in their hometowns. The featuring of NUKEMAP on the Washington Post's Wonkblog drove an incredible amount of traffic to the site. It was one of those stories that could be essentially lifted and re-written to fit a wide variety of different cities or countries, and there were variations of "What would happen if Hiroshima happened here?" written in dozens of languages over the days leading up to and beyond the anniversary. The result is that NUKEMAP's traffic had an all-time high spike of over 300,000 people on August 6. The traffic follows a typical long-tail distribution, so in the week of August 5-12, there were well over 1 million pageviews for the NUKEMAP. There have been other spikes in the past, but none quite as big as this one.

Locations where the Little Boy bomb was "dropped," August 5-12, 2015. These are unweighted (each dot represents an indeterminate number of detonations). Here is a heatmap (capped at 1,000 detonations — the actual cap is 28,116 — to make it easier to see the broader spread) showing where repeat detonations occurred. Here is a version where I have thrown out all locations where fewer than 10 detonations took place, and scaled their size and color by repetition. The total number of detonations was 266,483.

Where do people nuke, when they recreate Hiroshima? Well, all over the world, not surprisingly, though the biggest single draws are New York (which is a NUKEMAP default if it cannot figure out where you probably live) and Hiroshima itself (re-creating the actual bombing). I've exported the log data for people using the Little Boy bomb setting (15 kiloton airbursts) for the week of August 5-12, and the maps are shown and linked to above. Obviously it correlates very heavily with both population and Internet access, but still, it is interesting.

Lastly, a week after the anniversary, what more reflection is there to be had? A new poll of a thousand Americans came out in late July, asking them what they thought about the bombings. Overall, 46% of those polled thought that the dropping of the bombs on Japan was the "right decision," while 29% thought it was the "wrong decision," and 26% said they were "not sure." Which one can interpret in a number of ways. The feelings appear to correlate directly with age — the older you are, the more likely you are to think it was "right," and the younger, the more likely "wrong." It also correlates with a few other factors, notably political affiliation (Republicans strongly in favor, Democrats and Independents not so much), race/ethnicity, and income. I suspect all of these variables (age, political affiliation, race/ethnicity, and income) are pretty highly correlated in general. Separately, the gender gap is pretty extreme — men defend the bombings by a very large margin compared to women.

The head of the Nagasaki mushroom cloud — like a monstrous brain. Source: National Archives/Fold3.com.

None of this is extremely surprising, I don't think. But I was taken aback by another question in the same poll, a strictly factual one: "Which country was the first country to build a nuclear weapon?" Only 57% of those polled correctly identified the United States, and it gets very depressing when one looks at how this breaks down by age. Less than half of Americans under the age of 45 could correctly identify that their country was the first to develop nuclear weapons. I don't really mind if a lot of people can't identify when the first weapons were used (another question in the poll); exact years can be hard for people, especially on the spot, and the differences between the options given were not so vast that they represent much, in my view. But 23% were "not sure" who made the first bomb, 15% thought it was the USSR, and 3% thought it was China! (Almost nobody, alas, thought it was France.) This is not a minor factual error — it is a fundamental lack of knowledge about the historical composition of the world. It reflects, I suspect, the waning attention given to nuclear issues in the post-Cold War period.

One last reflection: How do I, a historian of these matters, find myself thinking about Hiroshima and Nagasaki these days? Increasingly I find myself uninterested in the question of whether they were "justified" or not, which contains so much predictable posturing, the same old arguments, with very few new facts or analyses. I think the bombings were a very muddy affair from an ethical, strategic, and historical perspective, and I don't think they fit into any simplistic view of them. I've come to feel my position on these could be described as that of an "inverse moderate": where a moderate seeks to make everyone feel comfortable, my goal is to make everyone feel uncomfortable. If you think this history supports some easy, straightforward interpretation, you are probably throwing out a lot of the data and filling it in with what you'd like to believe. It is complex history; it does not boil down easily.

Notes
  1. See Potsdam cable WAR 37683, July 24, 1945, copy in the Harrison-Bundy files, Roll 10, Target 10, Folder 64, "Interim Committee — Potsdam Cables."
  2. And, of course, I am not so naive as to believe that "getting the facts right" is a simple or straightforward process. Indeed, contextualization of documents is a large part of understanding what the "facts" often are, and that requires narrative and interpretation, and so we end up in a somewhat circular epistemological loop. But there is a difference between people outright getting them wrong and people who are at least trying to get them right. I have been frustrated to see the number of people who still claim that the US warned the Japanese before the atomic bombings, a myth perpetuated in no small part due to shoddy citation by archivists at the Truman Library on their website.
Meditations

The trouble with airbursts

Friday, December 6th, 2013

Both the Little Boy and Fat Man atomic bombs were detonated high in the air above their target cities. This was no accident — specialized circuitry, some of it invented just for the atomic bombs, was used so that the bombs could detect their height off of the ground and detonate at just the right moment. Little Boy detonated 1,968±50 feet above Hiroshima, Fat Man detonated 1,650±10 feet above Nagasaki. At least as early as the May 1945 Target Committee meeting at Los Alamos, "the criteria for determining height" of detonation had been agreed upon: the goal was to maximize the 5 psi (pounds-per-square-inch) overpressure blast radius of the bombs, with the knowledge that this was going to be a tricky thing since they weren't really sure how explosively large the bombs would be, and a bomb either too big or too small for the chosen height would reduce the total range of the 5 psi radius. At the time, they estimated Little Boy would be between 5 and 15 kilotons, Fat Man between 0.7 and 5 kilotons — obviously this was pre-"Trinity," which showed the Fat Man model could go at least up to 18-20 kilotons.

I was on the road quite a lot the last month, so I apologize about the radio silence for the past couple of weeks. But I'm happy to report to you that I managed to recently update the NUKEMAP's effects code in a way I've been meaning to for a long while: you can now set arbitrary heights for detonations. I thought I would explain a little bit about how that works, and why that matters, in today's post.

The 1962 edition of Glasstone's The Effects of Nuclear Weapons and the Lovelace Foundation's "Nuclear Bomb Effects Computer."

Why did it take me so long to add a burst height feature? (A feature that, to me and many others alike, was obviously lacking.) Much of the NUKEMAP's code is based on the calculations that went into making the famous Lovelace Foundation "Nuclear Bomb Effects Computer," which itself was based on equations in Samuel Glasstone's classic The Effects of Nuclear Weapons. This circular slide rule has some wonderful retro charm, and is a useful way of boiling down a lot of nuclear effects data into a simple analog "computer." However, like most nuclear effects calculations, it wasn't really designed with the kind of visualization the NUKEMAP requires in mind. For something like the NUKEMAP, one wants to be able to plug in a yield and a "desired" overpressure (such as 5 psi), and get a measurement of the ground range of the effect as a result. But this isn't how the Lovelace Computer works. Instead, you put in your kilotonnage and the distance you want to know the overpressure at, and in return you get a maximum overpressure in pounds per square inch. In other words, instead of asking, "what's the distance for 5 psi for a 15 kiloton surface burst?," you are only allowed to ask, "if I was 2 miles from a 15 kiloton surface burst, what would the overpressure be?"
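If you do have a function that answers the second kind of question, turning it into an answer to the first kind is a straightforward numerical inversion, since overpressure falls off monotonically with distance. Here is a minimal sketch of that step in Javascript; the overpressure function passed in is a hypothetical stand-in (curve fits, table lookups, whatever), not the NUKEMAP's actual effects code:

```javascript
// Find the ground range at which a monotonically decreasing overpressure
// function falls to a desired psi value, by bisection. `overpressureAt` is a
// hypothetical stand-in for whatever supplies psi-at-distance.
function rangeForOverpressure(overpressureAt, targetPsi, maxRange) {
  if (overpressureAt(maxRange) >= targetPsi) return maxRange; // target lies beyond the search window
  let lo = 0;        // near ground zero: pressure at or above the target
  let hi = maxRange; // far away: pressure below the target
  for (let i = 0; i < 60; i++) { // 60 halvings is far more precision than needed
    const mid = (lo + hi) / 2;
    if (overpressureAt(mid) >= targetPsi) {
      lo = mid; // still at or above the target psi, so move outward
    } else {
      hi = mid; // below the target psi, so move inward
    }
  }
  return (lo + hi) / 2;
}

// Example with a made-up, roughly inverse-cube pressure falloff (psi vs. miles):
const fakeModel = (r) => 12 / Math.pow(Math.max(r, 0.01), 3);
console.log(rangeForOverpressure(fakeModel, 5, 50).toFixed(2)); // "1.34"
```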

For surface bursts and a few low-height (400 feet and under) airbursts, the Lovelace Foundation did, in a separate report, provide equations of the sort useful for the NUKEMAP, and the NUKEMAP's code was originally based on these. But they didn't allow for anything fancy with regards to arbitrary-height airbursts. They let one look up pressure information at "optimal" airburst heights, but did not let one actually set a specific airburst height. For a while I thought this might just have been a strange oversight, but the more I dug into the issue, the more I realized this was probably because the physics of airbursts is hard.

Grim geometry: calculating the ground range of the 500 rem radiation exposure radius for a Hiroshima-sized nuclear weapon set off at the height of the Hiroshima bomb. Most objects roughly to scale.

There are three immediate effects of nuclear weapons that the NUKEMAP models: thermal radiation (heat), ionizing radiation (radioactivity), and overpressure (blast). Thermal and ionizing radiation pretty much travel in straight lines, so if you know the slant-line distance for a given effect, it's no problem figuring out the ground distance at an arbitrary height through a simple application of the Pythagorean theorem, as shown above. The report the Lovelace Computer was based on allowed for the calculation of slant-line airburst distances for both of these, so that was a snap to implement. Somewhat interestingly, the ranges of the "interesting" thermal radiation categories (e.g. burns and burning) are so large that, except with very high airbursts, one often finds almost no difference between ground ranges computed with the slant-range correction and those computed by simply treating the slant distance as a ground distance. Ionizing radiation, however, is relatively short in its effects, and so the height of the burst really does matter in practical terms for how much radiation the ground receives. This has a relevance to Hiroshima and Nagasaki that I will return to.
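In code, that geometric step is a one-liner; here is a sketch (the 4,000-foot slant radius in the example is just an illustrative number, not a real effects figure):

```javascript
// Convert a slant-line effect distance into a ground range for a burst at a
// given height, via the Pythagorean theorem. If the slant range is shorter
// than the burst height, the effect never reaches the ground at all.
function groundRangeFromSlant(slantRangeFt, burstHeightFt) {
  if (slantRangeFt <= burstHeightFt) return 0; // effect doesn't reach the ground
  return Math.sqrt(slantRangeFt * slantRangeFt - burstHeightFt * burstHeightFt);
}

// E.g., an illustrative 500 rem slant radius of 4,000 feet for a burst at the
// Hiroshima height of 1,968 feet:
console.log(groundRangeFromSlant(4000, 1968).toFixed(0)); // "3482" feet
```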

But this isn't how the physics of blast pressure works. The reason is somewhat subtle but important for understanding nuclear weapons targeting decisions. The pressure wave that emerges from the nuclear fireball does not stop when it hits the ground. Rather, it reflects, bouncing upward again, like so:

Reflection of the shockwave of a 20 kiloton nuclear explosion exploded at 1,770 foot altitude. Via Wikipedia.

You don't have to take my word for it (or Wikipedia's, for that matter) — you can actually see the reflection of the shockwave in some nuclear testing photography, like this photograph of Shot Grable, the "atomic cannon" test from 1953:

Shot Grable, Operation Upshot-Knothole — a 15 kiloton nuclear artillery shell detonated at an altitude of 524 feet, with the reflection of the blast wave clearly visible under the fireball.

The initial blast wave is the "incident" or "primary" blast wave. The bounced wave is the "reflected" wave. When they touch, as shown in the Wikipedia diagram, they combine — which dramatically increases the overpressure at that location. So, referring to the Wikipedia diagram again, by the time the primary shockwave was at the final radius of the diagram, it would have lost a considerable amount of energy. But when it merges with the reflected shockwave, it forms a single, vertical shock front known as the "Mach stem." In the diagram above, that has an overpressure of 15 psi — enough to destroy pretty significant buildings. If the shockwave did not work in this fashion, the primary shockwave by itself would be considerably less than 15 psi at that point.

The overall point here is that blast reflection can dramatically increase the blast pressure of the bomb at the point where it occurs. But the location of this point varies depending on the height of the detonation — so you can use the choice of detonation altitude to maximize particular pressures. This is what the Target Committee was talking about in May 1945: they wanted to maximize the radius of the 5 psi overpressure range, and they recognized that this involved finding the correct detonation height and knowing the correct yield of the bomb. They knew about the reflection property and in fact referred to the Mach stem explicitly in their discussion. Why 5 psi? Because that is the overpressure needed to destroy "soft" targets like the relatively flimsy houses of Japanese civilians, which they had already realized would be much easier to destroy than German-style houses.

For the NUKEMAP, this reflection made the modeling difficult. There are lots of models out there for calculating overpressure based on altitude, but they all do it similarly to the Lovelace Foundation's "Computer": they tell you the maximum overpressure at a pre-specified point from ground zero. They don't let you ask, "where would the 5 psi radius be for a blast of 15 kilotons and a height of 1,968 feet?" Which was inconvenient for me. The data is out there, though — just not in computational form. Graphs of pressure ranges plotted on axes of ground range and burst height are quite common in the nuclear literature, where they are sometimes known as "knee curves" because of the characteristic "bulge" in ground range produced by the aforementioned Mach reflection, the spot where the pressure range dramatically enlarges. Glasstone and Dolan's 1977 Effects of Nuclear Weapons contains three of these graphs for pressure ranges between 10,000 and 1 psi. Here is the "low-pressure" graph showing the characteristic "knees":

Glasstone and Dolan, Figure 3.73c: peak overpressures on the ground for a 1-kiloton burst (low-pressure range).

Reading these is fairly straightforward once you understand what they show. If you want to maximize the 2 psi pressure range, find the point at which the "2 psi" curve is as far to the right as possible. Then look at the vertical axis to find what the corresponding height of burst is. Or, if you want to know what the pressure will be on the ground at a given distance from a bomb detonated at a given burst height, simply figure out which pressure regions that point is between on the graph. The graphs are always given for 1 kiloton bursts, but scaling from these to arbitrary detonations (with the caveat that very high and very low yields can sometimes be a little different) is pretty straightforward according to the scaling laws given in the text.
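The scaling law in question is standard cube-root blast scaling: distances and burst heights for a given overpressure grow with the cube root of the yield relative to the 1-kiloton reference curves. A minimal sketch of that step (ignoring the very-high and very-low yield caveats just mentioned; the numbers in the example are purely illustrative):

```javascript
// Scale a (ground range, burst height) point read off a 1 kt knee-curve chart
// to an arbitrary yield W (in kilotons) using cube-root blast scaling:
//   d = d_1kt * W^(1/3),  h = h_1kt * W^(1/3)
function scaleFromOneKiloton(groundRange1kt, burstHeight1kt, yieldKt) {
  const factor = Math.cbrt(yieldKt);
  return {
    groundRange: groundRange1kt * factor,
    burstHeight: burstHeight1kt * factor,
  };
}

// E.g., a point read as (1,500 ft ground range, 1,000 ft burst height) on the
// 1 kt chart corresponds, for a 15 kt burst, to roughly (3,700 ft, 2,470 ft):
console.log(scaleFromOneKiloton(1500, 1000, 15));
```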

I searched high and low for a computational solution to the airburst question, without much luck. I had attempted to do polynomial curve fits on the graphs above, and just found them to be too irregular — the equations I was able to produce made huge errors, and splitting them up into sub-curves produced a mathematical mess. The only other computational solution I found was by someone else who had done curve fits and also come up with equations that produced relatively large errors. I wasn't happy with this. I discussed my frustrations with a few people (let me do a shout-out to Edward Geist, currently a Stanton Fellow at the RAND Corporation, who has been doing his own modeling work regarding Soviet nuclear effects handbooks, and to Alex Montgomery at Reed College, both of whom were extremely helpful as people to talk to about this), and gradually came to the conclusion that there probably wasn't an obvious analytical solution to this problem. So I did the next-best thing, which was to take samples of all of the curve values (less tedious than it sounds because of a little script I whipped up for the job) and just set up some tables of data that could then be sifted through very quickly by the computer. In other words, the way the NUKEMAP's code works is pretty much the Javascript equivalent of consulting the graphs in Glasstone and Dolan's book — it treats it as a simple interpolation problem between known values. Which turns out to give results that are no worse than those you would get using the book itself:

The NUKEMAP's overpressure data, graphed using R. Point samples are represented by circles, lines connect given pressure ranges. Color corresponds (logarithmically) with pressure ranges from 1 to 10,000 psi. Unknown points on the graph are interpolated between known values.
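In code, that table-lookup-and-interpolation approach amounts to storing sampled points for each pressure curve and interpolating between them. A simplified sketch of the idea (the sample values below are placeholders for illustration, not the NUKEMAP's actual table data):

```javascript
// One pressure curve (say, the 5 psi curve for 1 kt) stored as sampled
// [burst height (ft), ground range (ft)] pairs, sorted by burst height.
// These values are placeholders for illustration, not real chart data.
const curve5psi = [
  [0,    1200],
  [500,  1400],
  [1000, 1600],
  [1500, 1300],
  [2000,  900],
];

// Linearly interpolate the ground range for an arbitrary burst height.
function groundRangeForHeight(curve, burstHeight) {
  if (burstHeight <= curve[0][0]) return curve[0][1];
  for (let i = 1; i < curve.length; i++) {
    const [h1, r1] = curve[i - 1];
    const [h2, r2] = curve[i];
    if (burstHeight <= h2) {
      const t = (burstHeight - h1) / (h2 - h1); // fraction of the way between samples
      return r1 + t * (r2 - r1);
    }
  }
  return curve[curve.length - 1][1]; // above the highest sampled burst height
}

console.log(groundRangeForHeight(curve5psi, 750)); // 1500, halfway between samples
```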

The end result is that now the NUKEMAP can do arbitrary-burst height airbursts. In fact, the NUKEMAP pressure model goes all the way up to 10,000 psi — a pressure zone equivalent to being 4 miles under the ocean. Yow.

With this data in hand, and the NUKEMAP model, let's go back to the Hiroshima and Nagasaki question. They knew about the Mach reflection, and they knew about the height of the burst. It's not clear that their assumptions for how this would work would line up exactly with those in Glasstone and Dolan, since those were modified according to actual empirical experience with airbursts in the kiloton range, something they did not have on hand in 1945, even if they intuited much of the physics behind it. What can we say about their knowledge, and their choices, with regards to what they actually did in selecting the blast heights?

The Hiroshima yield has been calculated as about 15 kilotons, and the Nagasaki yield as about 21 kilotons. According to the Glasstone and Dolan model, to optimize the 5 psi pressure range for each, you'd want a burst height of ~2,500 feet for Little Boy and ~2,800 feet for Fat Man. Those are significantly higher altitudes than the actual detonation heights of 1,968 and 1,650 feet. The Target Committee meeting shows that they were assuming that 2,400 feet was the correct height for a 15 kiloton bomb — which is about right. Which means either that the detonating circuitry fired late (not impossible, though I haven't seen it mentioned), or that they changed their blast range criteria (for a 15 kiloton bomb, 1,940 feet maximizes the 9 psi radius rather than the 5 psi radius), or that they were being very conservative about the yields (a 1,960-foot burst height corresponds to maximizing the 5 psi radius of a 7 kiloton burst, whereas 1,700 feet corresponds to a 5 kiloton burst). My guess is that the latter was what was going on — they were being very conservative about the yield.
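To make the arithmetic behind those numbers explicit, here is a sketch using cube-root scaling and an assumed 1 kt optimum burst height for 5 psi of roughly 1,000 feet (a reference value backed out from the figures above, not quoted from any particular table); the back-calculated yields land close to the 7 and 5 kilotons mentioned in the previous paragraph:

```javascript
// Cube-root scaling of an assumed ~1,000 ft optimum 5 psi burst height for
// 1 kt. This reference value is an assumption for illustration, chosen to be
// consistent with the figures in the post rather than taken from a table.
const OPT_5PSI_HOB_1KT_FT = 1000;

const optimalHob = (yieldKt) => OPT_5PSI_HOB_1KT_FT * Math.cbrt(yieldKt);
console.log(optimalHob(15).toFixed(0)); // "2466" ft for Little Boy's ~15 kt
console.log(optimalHob(21).toFixed(0)); // "2759" ft for Fat Man's ~21 kt

// Running the scaling backwards: for what yield would the actual burst
// heights have been the 5 psi optimum?
const impliedYieldKt = (hobFt) => Math.pow(hobFt / OPT_5PSI_HOB_1KT_FT, 3);
console.log(impliedYieldKt(1968).toFixed(1)); // "7.6" kt for Hiroshima's 1,968 ft
console.log(impliedYieldKt(1650).toFixed(1)); // "4.5" kt for Nagasaki's 1,650 ft
```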

The net result is that at both Hiroshima and Nagasaki, you had lower burst heights than were optimal. The effect on the ground is that while the 5 psi blast radius didn't go quite as far out as it might have ideally, the range of radiation effects around Ground Zero was significantly increased, and the maximum overpressures around Ground Zero were substantially higher. Overall, it is interesting to see that they were apparently, even after Trinity, still being pretty un-optimistic regarding the explosive yields of the bombs, calibrating their burst heights to half or even one quarter of what the actual yields turned out to be. For "soft" targets, like Hiroshima and Nagasaki, this doesn't matter too much, as long as the fireball is above the altitude which produces local fallout, but for a "hard" target, where the goal is to put a lot of pressure in one spot, this would be a serious miscalculation.

Meditations | Visions

What the NUKEMAP taught me about fallout

Friday, August 2nd, 2013

One of the most technically difficult aspects of the new NUKEMAP was the fallout generation code. I know that in practice it looks like just a bunch of not-too-complicated ellipses, but finding a fallout code that would provide what I considered to be necessary flexibility proved to be a very long search indeed. I had started working on it sometime in 2012, got frustrated, returned to it periodically, got frustrated again, and finally found the model I eventually used — Carl Miller's Simplified Fallout Scaling System — only a few months ago.

The sorts of contours the Miller scaling model produces.

The fallout model used is what is known as a "scaling" model. This is in contrast with what Miller terms a "mathematical" model, which is a much more complicated beast. A scaling model lets you input only a few simple parameters (e.g. warhead yield, fission fraction, and wind speed), and the output is the kind of idealized contours seen in the NUKEMAP. This model, obviously, doesn't capture the complexities of real life, but as a rough indication of the type of radioactive contamination expected, and over what kind of area, it has its uses. A mathematical model is the sort that requires much more complicated wind parameters (such as the various wind speeds and shears at different altitudes) and tries to do something that looks more "realistic."
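For a sense of what "idealized contours" means in rendering terms, here is a sketch of how one might draw a single cigar-shaped contour once a scaling model has supplied its upwind extent, downwind extent, and maximum width for a given dose-rate level. This is just display geometry under the assumption of a simple elongated ellipse; it does not reproduce Miller's equations or the NUKEMAP's actual fallout code:

```javascript
// Generate polygon points (x = distance along the wind in km, y = crosswind
// offset in km) for one idealized dose-rate contour, treated here as an
// ellipse stretched along the wind direction. The upwind/downwind extents and
// maximum width are inputs that a scaling model would supply; they are not
// computed here.
function idealizedContour(upwindKm, downwindKm, maxWidthKm, steps = 72) {
  const cx = (downwindKm - upwindKm) / 2; // ellipse center, downwind of ground zero
  const a = (downwindKm + upwindKm) / 2;  // semi-major axis (along the wind)
  const b = maxWidthKm / 2;               // semi-minor axis (crosswind)
  const points = [];
  for (let i = 0; i < steps; i++) {
    const theta = (2 * Math.PI * i) / steps;
    points.push([cx + a * Math.cos(theta), b * Math.sin(theta)]);
  }
  return points; // rotate by the wind bearing and project onto the map to draw
}

// E.g., a contour reaching 2 km upwind, 50 km downwind, and 6 km across:
console.log(idealizedContour(2, 50, 6, 8));
```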

The mathematical models are harder to get ahold of (the government has a few of them, but they don't release them to non-government types like me) and require more computational power (so instead of running in less than a second, they require several minutes even on a modern machine). If I had one, I would probably try to implement it, but I don't totally regret using the scaling model. In terms of communicating both the general technical point about fallout, and in the fact that this is an idealized model, it does very well. I would prefer people to look at a model and have no illusions that it is, indeed, just a model, as opposed to some kind of simulation whose slickness might engender false confidence.

Fallout from a total nuclear exchange, in watercolors. From the Saturday Evening Post, March 23, 1963. Click to zoom.

Working on the fallout model, though, made me realize how little I really understood about nuclear fallout. I mean, my general understanding was still right, but I had a few subtle-but-important revelations that changed the way I thought about nuclear exchanges in general.

The most important one is that fallout is primarily a product of surface bursts. That is, the chief determinant of whether there is local fallout or not is whether the nuclear fireball touches the ground. Airbursts where the fireball doesn't touch the ground don't really produce fallout worth talking about — even if they are very large.

I read this in numerous fallout models and effects books and thought, can this be right? What's the ground got to do with it? A whole lot, apparently. The nuclear fireball is full of highly-radioactive fission products. For airbursts, the cloud goes pretty much straight up and those particles are light enough and hot enough that they pretty much just hang out at the top of the cloud. By the time they start to cool and drag enough to "fall out" of the cloud, they have diffused themselves in the atmosphere and also decayed quite a bit.1 So they are basically not an issue for people on the ground — you end up with exposures in the tenths or hundredths of a rad, which isn't exactly nothing but is pretty low. This is more or less what they found at Hiroshima and Nagasaki — there were a few places where fallout had deposited, but it was extremely limited and very low radiation, as you'd expect with those two airbursts.

I thought this might be simplifying things a bit, so I looked up the fallout patterns for airbursts. And you know what? It seems to be correct. The radiation pattern you get from a "nominal" fission airburst looks more or less like this:

The on-site dose rate contours for the Buster-Jangle "Easy" shot (31 kilotons), in rads per hour. Notice that barely any radiation extends further than 1,100 yards from ground zero, and even that is at a very low level (2 rads/hr). Source.

That's not zero radiation, but as you can see it is very, very local, and relatively limited. The radiation deposited covers about the same range as the acute effects of the bomb itself, as opposed to something that affects people miles downwind.2

What about very large nuclear weapons? The only obvious US test that fits the bill here is Redwing Cherokee, from 1956. This was the first US thermonuclear airdrop, and it had a total yield of 3.8 megatons, nothing to sniff at, with a fairly high percentage of it (at least 50%) derived from fission. But, sure enough, there appears to have been basically no fallout pattern as a result. A survey meter some 100 miles from ground zero picked up a two-hour peak of 0.25 millirems per hour some 10 hours later, which is really nothing to worry about. The final report on the test series concluded that Cherokee produced "no fallout of military significance" (all the more impressive given how "dirty" many of the other tests in that series were). Again, not truly zero radiation, but pretty close to it, especially given the megatonnage involved.3

Redwing Cherokee: quite a big boom, but almost no fallout.

The case of the surface burst is really quite different. When the fireball touches the ground, it ends up mixing the fission products with dirt and debris. (Or, in the case of testing in the Marshall Islands, coral.) The dirt and debris break up into fine particles, but relatively heavy ones. These heavier particles fall out of the cloud very quickly, starting at about an hour after detonation and continuing for the next 96 hours or so. And as they fall out, they both carry the nasty fission products and have induced radioactivity of their own. This is the fallout we're used to from the big H-bomb tests in the Pacific (multi-megaton surface bursts on coral atolls were just about the worst possible combination for fallout) and even the smaller surface bursts in Nevada.

The other thing the new model helped me appreciate is exactly how much the fission fraction matters. The fission fraction is the proportion of the total yield that is derived from fission, as opposed to fusion. Fission is the only reaction that produces highly radioactive byproducts. Fusion reactions produce neutrons, which are a definite short-term threat, but not so much a long-term concern. Obviously all "atomic" or pure-fission bombs have a fission fraction of 100%, but for thermonuclear weapons it can vary quite a bit. I've talked about this in a recent post, so I won't go into detail here, but will just emphasize how unintuitive I found it that the 50 Mt Tsar Bomba, had it been a surface burst, would have produced much less fallout than the 15 Mt Castle Bravo shot, because the latter derived some 67% of its energy from fission while the former derived only 3%. Playing with the NUKEMAP makes this fairly clear:

Fallout comparisons

The darkest orange here corresponds to 1,000 rads/hr (a deadly dose); the next-darkest orange is 100 rads/hr (an unsafe dose); the next-lighter orange is 10 rads/hr (ill-advised); the lightest yellow is 1 rad/hr (not such a big deal). So the 50 Mt Tsar Bomba is entirely within the "unsafe" range, as compared to the large "deadly" areas of the other two. Background location chosen only for scale!
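The arithmetic behind that comparison is simple enough to write out. The fission fractions below are the ones cited above; everything else is just multiplication.

```python
# Fission yield, not total yield, is what drives the fallout contours.
# Yields and fission fractions are the figures cited in the text.
weapons = {
    "Tsar Bomba (as tested)": (50_000, 0.03),   # 50 Mt total, ~3% fission
    "Castle Bravo":           (15_000, 0.67),   # 15 Mt total, ~67% fission
}

for name, (total_kt, fission_fraction) in weapons.items():
    fission_kt = total_kt * fission_fraction
    print(f"{name}: {fission_kt / 1000:.1f} Mt of fission yield")

# Roughly 1.5 Mt of fission products for the Tsar Bomba versus roughly 10 Mt
# for Castle Bravo: the "smaller" bomb is far dirtier.
```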

All of this has real relevance for understanding nuclear war. Weapons that are designed to flatten cities, perhaps surprisingly, don't pose as much of a long-term fallout hazard as you might expect. The reason is that the ideal burst height for such a weapon is usually set to maximize the 10 psi pressure radius, and that height is always fairly far above the ground. (The maximum radius for a given pressure is somewhat unintuitive because it depends on how the blast wave reflects off the ground, so it doesn't follow a straightforward curve.) Bad for the people in the cities themselves, to be sure, but not such a problem for those downwind.
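A rough way to see why is cube-root scaling: blast ranges, and the burst heights that maximize them, grow roughly with the cube root of the yield. The 1 kiloton reference height below is an assumed placeholder for illustration, not a value read off the height-of-burst charts, and the fireball formula is the same rough approximation used in the earlier sketch.

```python
# Cube-root scaling sketch. REF_HOB_10PSI_M is an assumed 1 kt reference value
# for illustration only; the point is that the optimal burst height grows with
# yield and keeps the fireball off the ground for typical city-targeting yields.

REF_HOB_10PSI_M = 200.0   # assumed optimum burst height for 10 psi at 1 kt (placeholder)

def optimum_hob_m(yield_kt, ref_hob_1kt_m=REF_HOB_10PSI_M):
    return ref_hob_1kt_m * yield_kt ** (1.0 / 3.0)

def fireball_radius_m(yield_kt):
    return 60.0 * yield_kt ** 0.4    # same rough approximation as in the earlier sketch

for y in (100, 500, 1000):           # kilotons
    print(f"{y} kt: optimum HOB ~{optimum_hob_m(y):.0f} m, fireball radius ~{fireball_radius_m(y):.0f} m")

# With these placeholder numbers the fireball radius stays well below the
# optimum burst height at every yield, i.e. it never touches the ground, which
# is why a blast-optimized airburst produces little local fallout.
```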

But weapons that are designed to destroy command bunkers, or missiles in silos, are the worst for the surrounding civilian populations. This is because such weapons are designed to penetrate the ground, and the fireballs necessarily come into contact with the dirt and debris. As a result, they kick up the worst sort of fallout that can stretch many hundreds of miles downwind.

So it's a damned-if-you-do, damned-if-you-don't situation when it comes to nuclear targeting. If you try to do the humane thing by targeting only counterforce targets, you end up producing the worst sort of long-range, long-term radioactive hazard. The only way to avoid that is to target cities, which isn't exactly humane either. (And, of course, the idealized terrorist nuclear weapon manages to combine the worst aspects of both: targeting civilians and kicking up a lot of fallout, for lack of a better delivery vehicle.)

A rather wonderful 1970s fallout exposure diagram. Source.

And it is worth noting: fallout mitigation is one of those areas where Civil Defense is worth paying attention to. You can't avoid all contamination by staying in a fallout shelter for a few days, but you can avoid the worst, most acute aspects of it. This is what the Department of Homeland Security has been trying to convince people of with regard to a possible terrorist nuclear weapon. They estimate that hundreds of thousands of lives could be saved in such an event if people understood fallout better and acted on that understanding. But the level of actual compliance with such recommendations (stay put, don't flee immediately) seems like it would be rather low to me.
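The reason a few days in a shelter buys you so much is the steep decay of fallout radiation, conventionally approximated as t^-1.2 (the "7:10 rule" of the Civil Defense literature: dose rates drop roughly tenfold for every sevenfold increase in time since detonation). A quick sketch, assuming a 1,000 rads/hr reference dose rate at one hour after detonation:

```python
# The standard t**-1.2 decay approximation. The 1,000 rads/hr reference value
# at H+1 is an assumed worst-contour figure for illustration.

def dose_rate(reference_rate_at_1h, hours_after_detonation):
    """Approximate dose rate (same units as the reference) at a given time after detonation."""
    return reference_rate_at_1h * hours_after_detonation ** -1.2

R1 = 1000.0                    # assumed rads/hr at H+1
for t in (1, 7, 49, 343):      # 1 hour, 7 hours, ~2 days, ~2 weeks
    print(f"H+{t} hours: roughly {dose_rate(R1, t):.1f} rads/hr")

# Roughly 1,000 -> 97 -> 9 -> 1 rads/hr: most of the acute hazard is gone
# within the first couple of days, which is what sheltering is meant to ride out.
```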

In some sense, this made me feel even worse about fallout than I had before. Prior to playing around with the details, I'd assumed that fallout was just a regular result of such weapons. But now I see it more as underscoring the damnable irony of the bomb: that all of the choices it offers up to you are bad ones.

Notes
  1. Blasts low enough to form a stem do suck up some dirt into the cloud, but it happens later in the detonation when the fission products have cooled and condensed a bit, and so doesn't matter as much.
  2. Underwater surface bursts, like Crossroads Baker, have their own characteristics, because the water seems to cause the fallout to come down almost immediately. So the distances are not too different from the airburst pattern here — that is, very local — but the contours are much, much more radioactive.
  3. Why didn't they test more of these big bombs as airdrops, then? Because their priority was on the experimentation and instrumentation, not the fallout. Airbursts were more logistically tricky, in other words, and were harder to get data from. Chew on that one a bit...