Posts Tagged ‘Nuclear fallout’


Mapping the US nuclear war plan for 1956

Monday, May 9th, 2016

A few months back, the National Security Archive made national headlines when they released a 1956 US target list they had obtained under the Freedom of Information Act. The target list outlined over a thousand Strategic Air Command nuclear targets in the Soviet Union, Eastern Bloc, the People's Republic of China, and North Korea. The Archive had posted a small graphic of the ones in Eastern Europe, but hadn't digitized the full list. Several weeks ago, the people at the Future of Life Institute did just this, digitizing the complete dataset — no small task, given that these were spread over several hundred non-OCR-able pages of smudgy, 60-year-old government documents.1

A sampling of the 1956 target list obtained by the National Security Archive. The digits encode latitude and longitude points, among other bits of information.

I recently attended a conference that the FLI put on regarding nuclear war. FLI was co-founded by the MIT physicist Max Tegmark and his wife Meia (among a few others), both of whom I was glad to spend some time with, as they are intelligent people with interesting histories. They are interested in promoting work that decreases existential threats to the human race, which they see as possibly including things like nuclear war and nuclear winter, but also unhampered artificial intelligence, climate change, and the possible negative futures of biotechnology. These are all, of course, controversial topics (not always controversial among the same groups of people, to be sure). They're an interesting group, and they are stirring up some interesting discussions, which I think is an unambiguously positive thing even if you don't agree that all of these things are equally realistic threats, or threats on the same level.2

The FLI’s digitized version of the target list. Click the image to view their interactive version.

The target list, mapped out as the FLI did above, is already pretty impressive. While I was at the conference, I got the idea that it wouldn’t be that hard to reconfigure a few parts of the NUKEMAP code to allow me to import huge target lists in the right format. NUKEMAP already supports the targeting of multiple nukes (the feature is a little cryptic — you create a detonation, then click “launch multiple,” then move the cursor and can then create another one, and repeat as necessary), but it didn’t have any automatic way of importing a large number of settings. Once I had done that, I then thought, what would it look like if I used realistic weather data to determine the fallout patterns from surface bursts? It only took a little bit of further work to write a script that can poll OpenWeatherMap’s public API and grab real-time wind speed and direction data for any given set of coordinates.3 This renders quite an impressive image, though to do this for some 1,154 targets requires a lot of RAM (about 1.5 GB) and a fast computer. So it’s not something one necessarily wants to do all the time.
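
A minimal sketch of such a polling script, assuming OpenWeatherMap's documented current-weather endpoint (the API key is a placeholder and error handling is omitted; `parse_wind` just pulls out the documented `wind.speed` and `wind.deg` fields):

```python
import json
from urllib.request import urlopen

# Hypothetical sketch: poll OpenWeatherMap's current-weather endpoint
# for wind data at a target's coordinates. "YOUR_API_KEY" is a placeholder.
API_URL = ("https://api.openweathermap.org/data/2.5/weather"
           "?lat={lat}&lon={lon}&appid={key}")

def parse_wind(payload):
    """Extract wind speed (m/s) and direction (degrees, meteorological)
    from an OpenWeatherMap current-weather JSON response."""
    wind = payload.get("wind", {})
    return wind.get("speed"), wind.get("deg")

def fetch_wind(lat, lon, key):
    """Fetch and parse the wind data for one coordinate pair."""
    with urlopen(API_URL.format(lat=lat, lon=lon, key=key)) as resp:
        return parse_wind(json.load(resp))

# Example response fragment, in the shape OpenWeatherMap documents:
sample = {"wind": {"speed": 4.1, "deg": 80}}
```

Running `fetch_wind` over all 1,154 coordinate pairs (with some rate limiting) would produce the per-target wind inputs the fallout model needs.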

I have captured the results as a series of interactive screenshots, to save you (and your web browser) the trouble of trying to render these yourself. You can see how changing the yield dramatically changes the fallout (assuming surface bursts, of course). The interactive viewer is available by clicking the image below, or this link.4

Screenshot of my interactive viewer for the nuclear war plan. Click to view.

I also sampled weather data from a few days in a row, to see what differences it made from a practical standpoint. It is remarkable how much wind speed and direction can vary from day to day. In some of these “simulations,” Copenhagen, Denmark, avoids fallout. In others, it does not. Under some weather conditions (and yield selections), northern Japan gets some fallout from an attack on the Soviet-controlled Kuril Islands; in others, it does not. The NUKEMAP’s fallout estimator is, of course, a very simplified model, but even with that you can get a sense of how much difference a shift in the winds can make.

Having done that, I started to wonder: what would the casualties of such an attack look like? I don’t have population density data of the relevant areas from 1956 that has sufficient granularity to be used with my normal NUKEMAP casualty estimating script, but I figured that even the present-day population figures would be interesting. If you try to query the casualty database with over a thousand targets it just says “no,” so I wrote another script that would query it target-by-target and tally the results.
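
A sketch of that target-by-target tallying approach. Here `estimate_casualties` is a hypothetical stand-in for whatever per-target query the casualty database actually answers:

```python
# Hedged sketch: instead of one giant query, ask the casualty database
# about each target individually and sum the results. The per-target
# query function is an assumption, injected as a parameter.

def tally(targets, estimate_casualties):
    """targets: iterable of (lat, lon) pairs.
    estimate_casualties: callable (lat, lon) -> (injuries, fatalities)."""
    total_injuries = total_fatalities = 0
    for lat, lon in targets:
        inj, fat = estimate_casualties(lat, lon)
        total_injuries += inj
        total_fatalities += fat
    return total_injuries, total_fatalities
```

The same loop structure works whether the estimator is a local function or a rate-limited web request.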

The results were a bit staggering. I mean, I assumed it would be a large number. But they are really large numbers. Some of this is because the casualty script is double-counting “victims” when they are inside the relevant blast areas of multiple detonations. At the moment, there’s no easy way around that (even for a small number of detonations, keeping track of who is already “dead” would require a lot of time and processing power, and to do it on the scale of a thousand is just not possible with the way it is set up currently).
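
The double-counting can be illustrated with a toy example: a person inside the blast radii of two detonations is tallied once per detonation by the simple approach, while a deduplicated count (the expensive thing to do at the scale of a thousand detonations) tallies them once. The coordinates and radii below are arbitrary:

```python
import math

# Toy illustration of double-counting with two overlapping blast circles.
def inside(person, ground_zero, radius):
    px, py = person
    gx, gy = ground_zero
    return math.hypot(px - gx, py - gy) <= radius

people = [(0, 0), (3, 0), (6, 0)]                # arbitrary positions
detonations = [((1, 0), 3.0), ((5, 0), 3.0)]     # (ground zero, blast radius)

# Naive per-detonation tally, as the current script effectively does:
naive = sum(1 for gz, r in detonations for p in people if inside(p, gz, r))

# Deduplicated count, tracking who is already "dead":
dedup = len({p for p in people for gz, r in detonations if inside(p, gz, r)})
```

Here the middle person falls inside both circles, so the naive tally comes out one higher than the deduplicated one.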

An example of an area where a lot of “double-counting” is taking place — St. Petersburg. The circles show various pressure rings for 1 Mt weapons, which are used by NUKEMAP to calculate casualties. Maybe just a little overkill…

On the other hand, the casualty estimate does not take into account fallout-related casualties, or the long-term casualties caused by the destruction of so much infrastructure. The target list also doesn’t tell us how many targets were, in fact, targeted redundantly with multiple weapons — the idea that it might have been “one nuke, one target” is definitely an incorrect one. Even before World War II had completely ended, US planners for nuclear war against the Soviet Union understood that not every bomb would make it to a target, and so planned for multiple weapons to be targeted on each. So “double-killing” those people in some of these locations is probably not so wrong. It likely isn’t all that crazy to think of these numbers as back-of-the-envelope estimates for what would result if you waged this kind of attack today (which is not to imply that the US would necessarily do such a thing). But I don’t want anyone to think I am implying any kind of real certainty here. I would, in fact, be dubious of anyone, at any time, implying a lot of certainty about these kinds of things, because we (fortunately) lack very much first-hand experience with this kind of “data,” outside of the results at Hiroshima and Nagasaki, which were in many ways particular to their time and place.

Casualty figures, of course, require making assumptions about the size of the nuclear weapons used, as well as the fuzing settings (airbursts generate far less downwind fallout in comparison to surface bursts, but they can greatly increase the casualties for people in civilian structures). For 1956, there would have been a “mix” of yields and types of weapons. We don’t have data on that to my knowledge. As a simplifying assumption, I just ran the casualty calculation with a number of yields, and with both surface-burst and airburst options (the latter optimized to maximize the range of the 5 psi blast area). For the sake of space and avoiding the appearance of false precision, I have rounded them to the nearest million below:

             surface burst            airburst
          injuries  fatalities   injuries  fatalities
10 Mt       259        239         517        304
5 Mt        210        171         412        230
1 Mt        120         70         239        111
500 kt       89         46         185         77
100 kt       39         16          94         30
50 kt        25         10          66         19

At first I thought some of these numbers just seemed fantastical. Russia today only has a population of 140 million or so. How could we get up to numbers so high? Some of this is, again, because of double-counting, especially with the very big bombs — a 10 Mt bomb on Moscow kills 5.5 million people and injures 4 million, by NUKEMAP’s estimate, which combined is 70% of the 13 million people in the area of the 1 psi blast radius of such a weapon. (If that seems high, remember that a 10 Mt bomb goes well outside the city of Moscow itself — the Greater Moscow Metro Region is about 16 million people total.) Since a large number of nukes were targeted around Moscow, that’s a lot of double counting, especially when you use them with such high-yield weapons.

So the very-big numbers I would take with a very hefty grain of salt. NUKEMAP’s casualty estimator really isn’t meant for guessing multiple, overlapping damage areas. At best, it attempts to give back-of-the-envelope estimates for single detonations. Separately, the US arsenal at the time was around 10,000 megatons worth of destructive power. So they obviously couldn’t have been (and wouldn’t have been) all multi-megaton monsters. But, all the same, I don’t think it’s at all improbable that the multi-megaton monsters that were in the arsenal would have been targeted at heavily populated regions, like Moscow. Especially given the fact that, again, there would have been multiple nukes aimed at each target.

I also thought it would be interesting to take the casualties and break them apart by region. Here’s where I found some really startling results, using a 1 Megaton (1,000 kiloton) airburst as my “model” detonation, again in millions:

                      injuries  fatalities
Soviet Union             111        55
Warsaw Pact               23        10
China + North Korea      104        46
Total                    239       111

To make this point more clearly: 820 of the 1,154 targets were inside the Soviet Union proper. They are responsible for 48% of the casualties in the above scenario. Non-Soviet countries in the Warsaw Pact (Eastern Europe, more or less) were responsible for “only” 188 of the targets, and 9% of the casualties. China and North Korea had only 146 of the targets, but were accountable for 43% of the casualties. Which is to say, each “detonation” in the USSR produced around 203,000 casualties on average, each one in Eastern Europe around 176,000, and each one in Asia over 1 million. That’s kind of bananas.
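
The per-target averages can be double-checked with a few lines of arithmetic, using the rounded per-region figures (in millions) from the table above, so these come out only approximately:

```python
# Rough check of the per-target casualty averages. Injury/fatality
# figures are in millions, rounded, so the results are approximate.
regions = {
    "Soviet Union":        {"targets": 820, "injuries": 111, "fatalities": 55},
    "Warsaw Pact":         {"targets": 188, "injuries": 23,  "fatalities": 10},
    "China + North Korea": {"targets": 146, "injuries": 104, "fatalities": 46},
}

per_target = {
    name: (r["injuries"] + r["fatalities"]) * 1_000_000 / r["targets"]
    for name, r in regions.items()
}
# per_target comes out near 202,000 for the USSR, 176,000 for the
# Warsaw Pact, and a little over 1,000,000 for China + North Korea.
```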

Now, these use modern (2011) population density figures, not those of 1956. But it’s still a pretty striking result. Why would this be? Partially because the Asian targets seem to be primarily in large cities. Many of the Soviet targets, by contrast, are of pretty isolated areas — remote military airfields in some cases — that only kill a few hundred people. It would make for a very interesting study to really get into the “weeds” of this target plan, and to sort out — systematically — what exactly was being targeted in each location, as best as we can. If we did that, we’d possibly be able to guess at whether an airburst or a surface burst was called for, and potentially even be able to judge target priorities, though the “bomb-as-you-go” method of attack used in the 1950s probably means that even low-priority targets would get nuked early on if they were on a path to a higher-priority one.

Total megatonnage of the US nuclear stockpile — nearly 10 gigatons by 1956, climbing to a peak of over 20 gigatons in 1959. Source: US Department of Energy

What does this exercise tell us? Two things, in my mind. One, this 1956 target list is pretty nuts, especially given the high-yield characteristics of the US nuclear stockpile in 1956. This strikes me as going a bit beyond mere deterrence, the consequence of letting military planners have just a little bit too much freedom in determining what absolutely had to have a nuclear weapon placed on it.

The second is to reiterate how amazing it is that this got declassified in the first place. When I had heard about it originally, I was pretty surprised. The US government usually considered target information to be pretty classified, even when it is kind of obvious (we target Russian nuclear missile silos? You don’t say…). The reason, of course, is that if you can go very closely over a target list, you can “debug” the mind of the nuclear strategist who made it — what they thought was important, what they knew, and what they would do about their knowledge. Though times have changed a lot since 1956, a lot of those assumptions are probably still at least partially valid today, so they tend to keep that sort of thing under wraps. These NUKEMAP “experiments” are quick and cheap approaches to making sense of this new information, and as the creator of the NUKEMAP, let me say that I think “quick and cheap” is meant as a compliment. To analyze something quickly and cheaply is to spark new ideas quickly and cheaply, and you can always subject your new ideas to more careful analytical scrutiny once you’ve had them. I hope that someone in the future will give this target data some real careful attention, because I have no doubt that it still contains many insights and surprises.

  1. Because there has been some confusion about what this list is, I want to clarify a bit here. It is a “Weapons Requirements Study,” which is to say, it’s the way in which the US Air Force Strategic Air Command said, “here are all the things we might want to nuke, if we could.” The might and if we could parts are important, because they are what make this different from an actual war plan, which is to say, “what we would actually do in the event of a nuclear war.” The might means that not necessarily all of these targets would have been nuked in any given war situation, but indicates the sorts of things that they considered to be valid targets. The if we could means that this would require more weapons than they could afford to use at the time. In 1956, the US stockpile contained “only” 3,692 warheads. This target list is meant to imply that it needed to be bigger, that is, that by 1959 they would want more weapons to be produced. So by 1959 they had 12,298 weapons — more than three times as many. Why so many weapons for the same number of targets? Because, as noted in the post above, the idea of one-nuke, one-target isn’t how they planned it. Anyway, the long and short of it is, this isn’t exactly the same thing as a war plan, much less for 1956. It may over-count, but it also probably under-counts (because it ignores tactical use, targets of opportunity, the overkill that would occur when targets were multiple-targeted, etc.). But it does give you a flavor of the war planning that was going on, and is probably closer to that than any other document that has been released for this time. As for how that would affect what would have happened in 1956, it’s hard to say, but this is in line with many of the other things we know about nuclear war planning at that time, so I think it is a fair illustration. []
  2. I think my students were probably the most happy that FLI digitized all of this target data because if they hadn’t, I was going to force my undergrads who take my data visualization course to do it in the name of a practical example of what “crowdsourcing” can mean. []
  3. In some cases, OpenWeatherMap did not have information about some of the coordinates. In such cases, the script averaged the missing point from several surrounding points, weighting them by distance. The results it gives in doing this seem plausible enough. For each time I ran it, there were only about two or three missing pieces of data. []
  4. For those who want to look at the dataset themselves, the CSV file that the visualization uses is available here. []
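
The distance-weighted gap-filling described in footnote 3 might be sketched like this. The exact weighting the script used is not specified, so simple inverse-distance weighting over the wind vector is assumed here:

```python
import math

# Assumed sketch of the gap-filling in footnote 3: when the weather API
# has no data for a coordinate, average the wind vector from nearby
# points, weighted by inverse distance (the weighting is an assumption).
def idw_wind(target, known):
    """target: (lat, lon) needing data.
    known: list of ((lat, lon), (u, v)) wind vectors at nearby points."""
    num_u = num_v = denom = 0.0
    for (lat, lon), (u, v) in known:
        d = math.hypot(target[0] - lat, target[1] - lon)
        if d == 0:
            return (u, v)      # exact match, no interpolation needed
        w = 1.0 / d
        num_u += w * u
        num_v += w * v
        denom += w
    return (num_u / denom, num_v / denom)
```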

Castle Bravo at 60

Friday, February 28th, 2014

Tomorrow, March 1, 2014, is the 60th anniversary of the Castle Bravo nuclear test. I’ve written about it several times before, but I figured a discussion of why Bravo matters was always welcome. Bravo was the first test of a deliverable hydrogen bomb by the United States, proving that you could not only make nuclear weapons that had explosive yields a thousand times more powerful than the Hiroshima bomb, but that you could make them in small-enough packages that they could fit onto airplanes. It was what truly inaugurated the megaton age (more so than the first H-bomb test, Ivy Mike, which was explosively large but still in a bulky, experimental form). As a technical demonstration it would be historically important even if nothing else had happened.

One of the early Castle Bravo fallout contours showing accumulated doses. Source.

But nobody says something like that unless other things — terrible things — did happen. Two things went wrong. The first is that the bomb was even more explosive than the scientists thought it was going to be. Instead of 6 megatons of yield, it produced 15 megatons, an error of 250%, which matters when you are talking about millions of tons of TNT. The technical error, in retrospect, reveals how tentative their knowledge still was: the bomb contained two isotopes of lithium (lithium-6 and lithium-7) in the fusion component of the design, and the designers assumed only the lithium-6 would be reactive. They were wrong: under neutron bombardment, the lithium-7 bred additional tritium, adding substantially to the yield. The second problem is that the wind changed. Instead of carrying the copious radioactive fallout that such a weapon would produce over the open ocean, where it would be relatively harmless, it instead carried it over inhabited atolls in the Marshall Islands. This necessitated evacuations and long-term health monitoring, and produced terrible long-term health outcomes for many of the people on those islands.

If it had just been natives who were exposed, the Atomic Energy Commission might have been able to keep things hushed up for awhile — but it wasn’t. A Japanese fishing boat, ironically named the Fortunate Dragon, drifted into the fallout plume as well and returned home sick and with a cargo of radioactive tuna. One of the fishermen later died (whether that was because of the fallout exposure or because of the treatment regime is apparently still a controversial point). It became the subject of a major diplomatic incident between Japan and the United States — Japan resented once again having the distinction of having been irradiated by the United States — and this meant that Bravo became extremely public. Suddenly the United States was, for the first time, admitting it had the capability to make multi-megaton weapons. Suddenly it was having to release information about long-distance, long-term contamination. Suddenly fallout was in the public mind — and its popular culture manifestations (Godzilla, On the Beach) soon followed.

Map showing points (X) where contaminated fish were caught or where the sea was found to be unusually radioactive, following the Castle Bravo nuclear test. This sort of thing gets public attention.

But it’s not just the public who started thinking about fallout differently. The Atomic Energy Commission wasn’t new to the idea of fallout — they had measured the plume from the Trinity test in 1945, and knew that ground bursts produced radioactive debris.

So you’d think that they’d have made lots of fallout studies prior to Castle. I had thought about producing some kind of map with all of the various fallout plumes through the 1950s superimposed on it, but it became harder than I thought — there are just a lot fewer fallout plumes prior to Bravo than you might expect. Why? Because prior to Bravo, they generally did not map downwind fallout plumes for shots in the Marshall Islands — they only mapped upwind plumes. So you get results like this for Ivy Mike, a very “dirty” 10.4 megaton explosion that did produce copious fallout, but you’d never know it from this map:

Fallout from the 1952 "Ivy Mike" shot of the first hydrogen bomb. Note that this is actually the "back" of the fallout plume (the wind was blowing it north over open sea), and they didn't have any kind of radiological monitoring set up to see how far it went. As a result, this makes it look far more local than it was in reality. This is from a report I had originally found in the Marshall Islands database.

To make it even more clear what you’re looking at here: the wind in this shot was blowing north — so most of the fallout went north. But they only mapped the fallout that went south, a tiny amount of the total fallout. So it looks much, much more contained than it was in reality. You want to shake these guys, retrospectively.

It’s not that they didn’t know that fallout went further downwind. They had mapped the Trinity test’s long-range fallout in some detail, and starting with Operation Buster (1951) they had started mapping downwind plumes for lots of tests that took place at the Nevada Test Site. But for ocean shots, they didn’t have their logistics together, because, you know, the ocean is big. Such is one of the terrible ironies of Bravo: we know its downwind fallout plume well because it went over (inhabited) land, and otherwise they probably wouldn’t have bothered measuring it.

The publicity given to Bravo meant that its fallout plume got wide, wide dissemination — unlike the Trinity test’s plume, unlike the other ones they were creating. In fact, as I mentioned before, there were a few “competing” drawings of the fallout cloud circulating internally, because fallout extrapolation is non-trivially difficult:

BRAVO fallout contours produced by the AFSWP, NRDL, and RAND Corp. Source.

But once these sorts of things were part of the public discourse, it was easy to start imposing them onto other contexts beyond islands in the Pacific Ocean. They were superimposed on the Eastern Seaboard, of course. They became a stock trope for talking about what nuclear war was going to do to the country if it happened. The term “fallout,” which was not used even by the government scientists as a noun until around 1948,1 suddenly took off in popular usage:

Google Ngram chart of the usage of the word “fallout” in English language books and periodicals. Source.

The significance of fallout is that it threatens and contaminates vast areas — far more vast than the areas immediately affected by the bombs themselves. It means that even a large-scale nuclear attack that tries to only threaten military sites is also going to do both short-term and long-term damage to civilian populations. (As if anyone really considered just attacking military sites, though; everything I have read suggests that this kind of counter-force strategy was never implemented by the US government even if it was talked about.)

It meant that there was little escaping the consequences of a large nuclear exchange. Sure, there are a few blank areas on maps like this one, but think of all the people, all the cities, all the industries that are within the blackened areas of the map:

Oak Ridge National Laboratory estimate of “accumulated 14-day fallout dose patterns from a hypothetical attack on the United States,” 1986. I would note that these are very high exposures and I’m a little skeptical of them, but in any case, it represents the kind of messages that were being given on this issue. Source.

Bravo inaugurated a new awareness of nuclear danger, and arguably, a new era of actual danger itself, when the weapons got big, radiologically “dirty,” and contaminating. Today they are much smaller, though still dirty and contaminating.

I can’t help but feel, though, that while transporting the Bravo-like fallout patterns to other countries is a good way to get a sense of their size and importance, it still misses something. I recently saw this video that Scott Carson posted to his Twitter account of a young Marshallese woman eloquently expressing her rage about the contamination of her homeland, at the fact that people were more concerned about the exposure of goats and pigs to nuclear effects than they were the islanders:

I’ve spent a lot of time looking at the reports of the long-term health effects on the Marshallese people. It is always presented as a cold, hard science — sometimes even as a “benefit” to the people exposed (hey, they got free health care for life). Here’s how the accident was initially discussed in a closed session of the Congressional Joint Committee on Atomic Energy, for example:

Chairman Cole: “I understand even after they [the natives of Rongelap] are taken back you plan to have medical people in attendance.”

Dr. Bugher: “I think we will have to have a continuing study program for an indefinite time.”

Rep. James Van Zandt: “The natives ought to benefit — they got a couple of good baths.”

Which is a pretty sick way to talk about an accident like this, even if all of the facts aren’t in yet. Even for a classified hearing.

What’s the legacy of Bravo, then? For most of us, it was a portent of dangers to come, a peek into the dark dealings that the arms race was developing. But for the people on those islands, it meant that “the Marshall Islands” would always be followed by “where the United States tested 67 nuclear weapons” and a terrible story about technical hubris, radioactive contamination, and long-term health problems. I imagine that people from these islands and people who grew up near Chernobyl probably have similar, terrible conversations.

A medical inspection of a Marshallese woman by an American doctor. “Project 4,” the biomedical effects program of Operation Castle, was initially planned to be concerned with “mainly neutron dosimetry with mice,” but after the accident an additional group, Project 4.1, was added to study the long-term exposure effects in human beings — the Marshallese. Image source.

I get why the people who made and tested the bombs did what they did, what their priorities were, what they thought hung in the balance. But I also get why people would find their actions a terrible thing. I have seen people say, in a flip way, that there were “necessary sacrifices” for the security that the bomb is supposed to have brought the world. That may be so — though I think one should consult the “sacrifices” in question before passing that judgment. But however one thinks of it, one must acknowledge that the costs were high.

  1. William R. Kennedy, Jr., “Fallout Forecasting—1945 through 1962,” LA-10605-MS (March 1986), on 5. []

What the NUKEMAP taught me about fallout

Friday, August 2nd, 2013

One of the most technically difficult aspects of the new NUKEMAP was the fallout generation code. I know that in practice it looks like just a bunch of not-too-complicated ellipses, but finding a fallout code that would provide what I considered to be necessary flexibility proved to be a very long search indeed. I had started working on it sometime in 2012, got frustrated, returned to it periodically, got frustrated again, and finally found the model I eventually used — Carl Miller’s Simplified Fallout Scaling System — only a few months ago.

The sorts of contours the Miller scaling model produces.

The fallout model used is what is known as a “scaling” model. This is in contrast with what Miller terms a “mathematical” model, which is a much more complicated beast. A scaling model lets you input only a few simple parameters (e.g. warhead yield, fission fraction, and wind speed), and the output is the kind of idealized contours seen in the NUKEMAP. This model, obviously, doesn’t capture the complexities of real life, but as a rough indication of the type of radioactive contamination expected, and over what kind of area, it has its uses. The mathematical model is the sort that requires much more complicated wind parameters (such as the various wind speeds and shears at different altitudes) and tries to do something that looks more “realistic.”

The mathematical models are harder to get ahold of (the government has a few of them, but they don’t release them to non-government types like me) and require more computational power (so instead of running in less than a second, they require several minutes even on a modern machine). If I had one, I would probably try to implement it, but I don’t totally regret using the scaling model. In terms of communicating both the general technical point about fallout, and in the fact that this is an idealized model, it does very well. I would prefer people to look at a model and have no illusions that it is, indeed, just a model, as opposed to some kind of simulation whose slickness might engender false confidence.
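
To make the “scaling” idea concrete, here is the general shape of such a model’s interface. The exponents and constants below are hypothetical placeholders for illustration only, not Miller’s actual coefficients, which are tabulated from test data:

```python
# Illustrative shape of a fallout *scaling* model: a handful of scalar
# inputs, idealized ellipse dimensions out. All constants and exponents
# here are made up for illustration -- not Miller's actual values.
def scaled_contour(yield_kt, fission_fraction, wind_speed_kmh, dose_rads):
    """Return (downwind extent, max width) in km of an idealized
    dose-rate contour for a surface burst."""
    fission_kt = yield_kt * fission_fraction
    # Downwind extent grows with fission yield and wind speed, and
    # shrinks as the dose threshold of the contour rises.
    downwind_km = 2.0 * (fission_kt ** 0.5) * (wind_speed_kmh / 24.0) \
                  / (dose_rads ** 0.4)
    max_width_km = downwind_km / 6.0   # idealized cigar-shaped ellipse
    return downwind_km, max_width_km
```

The point is the interface, not the numbers: a few scalars in, a clean idealized contour out, which is what lets such a model run in under a second in a browser.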

Fallout from a total nuclear exchange, in watercolors. From the Saturday Evening Post, March 23, 1963. Click to zoom.

Working on the fallout model, though, made me realize how little I really understood about nuclear fallout. I mean, my general understanding was still right, but I had a few subtle-but-important revelations that changed the way I thought about nuclear exchanges in general.

The most important one is that fallout is primarily a product of surface bursts. That is, the chief determinant of whether there is local fallout or not is whether the nuclear fireball touches the ground. Airbursts where the fireball doesn’t touch the ground don’t really produce fallout worth talking about — even if they are very large.

I read this in numerous fallout models and effects books and thought, can this be right? What’s the ground got to do with it? A whole lot, apparently. The nuclear fireball is full of highly-radioactive fission products. For airbursts, the cloud goes pretty much straight up and those particles are light enough and hot enough that they pretty much just hang out at the top of the cloud. By the time they start to cool and drag enough to “fall out” of the cloud, they have diffused themselves in the atmosphere and also decayed quite a bit.1 So they are basically not an issue for people on the ground — you end up with exposures in the tenths or hundredths of a rad, which isn’t exactly nothing but is pretty low. This is more or less what they found at Hiroshima and Nagasaki — there were a few places where fallout had deposited, but it was extremely limited and very low radiation, as you’d expect with those two airbursts.
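
The “decayed quite a bit” point can be made quantitative with the standard Way-Wigner approximation, under which the dose rate from mixed fission products falls off as t^-1.2. This is the source of the familiar “7-10 rule”: seven times the elapsed time, roughly one-tenth the dose rate:

```python
# Way-Wigner approximation for mixed fission products: dose rate
# decays as t^-1.2, with t in hours after detonation.
def dose_rate(r1, t_hours):
    """Dose rate at t_hours, given the reference rate r1 at H+1 hour."""
    return r1 * t_hours ** -1.2

# E.g., a field reading 1,000 rads/hr at H+1 hour reads roughly
# 100 rads/hr at H+7 hours, and roughly 10 rads/hr at H+49 hours.
```

So particles that linger aloft for many hours before depositing have already shed most of their radioactivity, which is part of why airburst fallout matters so much less.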

I thought this might be simplifying things a bit, so I looked up the fallout patterns for airbursts. And you know what? It seems to be correct. The radiation pattern you get from a “nominal” fission airburst looks more or less like this:

The on-side dose rate contours for the Buster-Jangle “Easy” shot (31 kilotons), in rads per hour. Notice that barely any radiation goes further than 1,100 yards from ground zero, and that even that is very low level (2 rads/hr). Source.

That’s not zero radiation, but as you can see it is very, very local, and relatively limited. The deposited radiation extends over about the same range as the acute effects of the bomb itself, as opposed to something that affects people miles downwind.2

What about very large nuclear weapons? The only obvious US test that fit the bill here was Redwing Cherokee, from 1956. This was the first thermonuclear airdrop by the USA, and it had a total yield of 3.8 megatons — nothing to sniff at, with a fairly high percentage of it (at least 50%) from fission. But, sure enough, there appears to have been basically no fallout pattern as a result. A survey meter some 100 miles from ground zero picked up a two-hour peak of .25 millirems per hour some 10 hours later, which is really nothing to worry about. The final report on the test series concluded that Cherokee produced “no fallout of military significance” (all the more impressive given how “dirty” many of the other tests in that series were). Again, not truly zero radiation, but pretty close to it, especially given the megatonnage involved.3


Redwing Cherokee: quite a big boom, but almost no fallout.

The case of the surface burst is really quite different. When the fireball touches the ground, it ends up mixing the fission products with dirt and debris. (Or, in the case of testing in the Marshall Islands, coral.) The dirt and debris break up into fine particles, but heavy ones. These heavier particles fall out of the cloud very quickly, starting at about an hour after detonation and continuing for the next 96 hours or so. And as they fall out, they both carry the nasty fission products and have other induced radioactivity as well. This is the fallout we’re used to from the big H-bomb tests in the Pacific (multi-megaton surface bursts on coral atolls were the worst possible combination for fallout) and even from the smaller surface bursts in Nevada.

The other thing the new model helped me appreciate more is exactly how much the fission fraction matters. The fission fraction is the amount of the total yield that is derived from fission, as opposed to fusion. Fission is the only reaction that produces highly-radioactive byproducts. Fusion reactions produce neutrons, which are a definite short-term threat, but not so much a long-term concern. Obviously all “atomic” or fission bombs have a fission fraction of 100%, but for thermonuclear weapons it can vary quite a bit. I’ve talked about this in a recent post, so I won’t go into detail here, except to emphasize something unintuitive: the 50 Mt Tsar Bomba, had it been a surface burst, would have produced much less fallout than the 15 Mt Castle Bravo shot, because the latter derived some 67% of its energy from fission while the former derived only 3%. Playing with the NUKEMAP makes this fairly clear:

Fallout comparisons

The darkest orange here corresponds to 1,000 rads/hr (a deadly dose); the slightly lighter orange is 100 rads/hr (an unsafe dose); the next lighter orange is 10 rads/hr (ill-advised); the lightest yellow is 1 rad/hr (not such a big deal). So the 50 Mt Tsar Bomba is entirely within the “unsafe” range, as compared to the large “deadly” areas of the other two. Background location chosen only for scale!
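The arithmetic behind that comparison is simple enough to sketch. The yields and fission fractions are the figures quoted above; treating local fallout as roughly proportional to the fission yield is the simplifying assumption:

```python
def fission_yield_mt(total_yield_mt, fission_fraction):
    # Local fallout scales (roughly) with the fission yield,
    # not with the total yield of the weapon.
    return total_yield_mt * fission_fraction

weapons = {
    "Tsar Bomba (as tested)": (50.0, 0.03),  # ~3% fission
    "Castle Bravo":           (15.0, 0.67),  # ~67% fission
}
for name, (total_mt, frac) in weapons.items():
    print(f"{name}: {fission_yield_mt(total_mt, frac):.2f} Mt from fission")
```

Despite being less than a third the total yield, Bravo ends up with several times the fission products of the Tsar Bomba, hence the far larger fallout footprint in the comparison above.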

All of this matters quite a bit for understanding nuclear war. Weapons that are designed to flatten cities, perhaps surprisingly, don’t really pose much of a long-term fallout hazard. The reason is that the ideal burst height for such a weapon is usually set to maximize the radius of the 10 psi pressure wave, and that height is always fairly far above the ground. (The maximum radius for a given pressure is somewhat unintuitive because it depends on how the blast wave reflects off the ground, so it doesn’t follow a straightforward curve.) Bad for the people in the cities themselves, to be sure, but not such a problem for those downwind.
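The reason those optimized burst heights sit so far above any fireball contact comes down to cube-root scaling: blast phenomena, including the optimizing height of burst, scale with the cube root of yield. A sketch, where the 700-foot reference height for a 1 kt burst is an illustrative assumption on my part (the real value depends on which overpressure you are optimizing for), not a figure from this post:

```python
def optimal_burst_height_ft(yield_kt, h_1kt_ft=700.0):
    """Cube-root scaling of the burst height that maximizes the
    radius of a chosen overpressure.

    h_1kt_ft is the optimizing height for a 1 kt reference burst,
    an illustrative assumption here.
    """
    return h_1kt_ft * yield_kt ** (1.0 / 3.0)

for y in [20, 500, 1000]:  # yields in kilotons
    print(f"{y:>4} kt -> ~{optimal_burst_height_ft(y):,.0f} ft")
```

With those illustrative numbers, a 1 Mt city-flattening airburst goes off well over a mile up, far above the point where the fireball could touch the ground, which is why such bursts produce so little local fallout.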

But weapons that are designed to destroy command bunkers, or missiles in silos, are the worst for the surrounding civilian populations. This is because such weapons are designed to penetrate the ground, and the fireballs necessarily come into contact with the dirt and debris. As a result, they kick up the worst sort of fallout that can stretch many hundreds of miles downwind.

So it’s sort of a damned-if-you-do, damned-if-you-don’t sort of situation when it comes to nuclear targeting. If you try to do the humane thing by only targeting counterforce targets, you end up producing the worst sort of long-range, long-term radioactive hazard. The only way to avoid that is to target cities — which isn’t exactly humane either. (And, of course, the idealized terrorist nuclear weapon manages to combine the worst aspects of both: targeting civilians and kicking up a lot of fallout, for lack of a better delivery vehicle.)

A rather wonderful 1970s fallout exposure diagram. Source.


And it is worth noting: fallout mitigation is one of those areas where Civil Defense is worth paying attention to. You can’t avoid all contamination by staying in a fallout shelter for a few days, but you can avoid the worst, most acute exposures. This is what the Department of Homeland Security has been trying to convince people of regarding a possible terrorist nuclear weapon: they estimate that hundreds of thousands of lives could be saved in such an event if people understood fallout better and acted on that understanding. But the level of actual compliance with such recommendations (stay put, don’t flee immediately) seems to me like it would be rather low.
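A back-of-the-envelope way to see why even a couple of days of sheltering matters so much: integrate the standard t^-1.2 decay approximation over the sheltering period. The H+1 field strength of 100 rads/hr and the protection factor of 40 (a figure often quoted for basement shelters) are illustrative assumptions, not numbers from this post:

```python
def accumulated_dose(r1_per_hr, t_start_hr, t_end_hr):
    # Integral of r1 * t**-1.2 dt from t_start to t_end, i.e.
    # 5 * r1 * (t_start**-0.2 - t_end**-0.2), under the usual
    # t**-1.2 decay approximation for mixed fission products.
    return 5.0 * r1_per_hr * (t_start_hr ** -0.2 - t_end_hr ** -0.2)

r1 = 100.0  # hypothetical H+1 dose rate in rads/hr
in_open = accumulated_dose(r1, 1.0, 49.0)  # two days in the open
sheltered = in_open / 40.0                 # same period behind PF 40
print(f"Two days in the open:  ~{in_open:.0f} rads")
print(f"Two days in a shelter: ~{sheltered:.0f} rads")
```

Under these assumptions the difference is between a dose in the hundreds of rads (dangerous) and one in the single digits, and because of the steep decay curve, the earliest hours are precisely the ones where being indoors counts most.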

In some sense, this made me feel even worse about fallout than I had before. Prior to playing around with the details, I’d assumed that fallout was just a regular result of such weapons. But now I see it more as underscoring the damnable irony of the bomb: that all of the choices it offers up to you are bad ones.

  1. Blasts low enough to form a stem do suck up some dirt into the cloud, but it happens later in the detonation when the fission products have cooled and condensed a bit, and so doesn’t matter as much. []
  2. Underwater surface bursts, like Crossroads Baker, have their own characteristics, because the water seems to cause the fallout to come down almost immediately. So the distances are not too different from the airburst pattern here — that is, very local — but the contours are much, much more radioactive. []
  3. Why didn’t they test more of these big bombs as airdrops, then? Because their priority was on the experimentation and instrumentation, not the fallout. Airbursts were more logistically tricky, in other words, and were harder to get data from. Chew on that one a bit… []

The new NUKEMAP is coming

Friday, July 12th, 2013

I’m excited to announce that, after a long development period, the new NUKEMAP will debut on Thursday, July 18th, 2013. There will be a launch event, hosted by the James Martin Center for Nonproliferation Studies of the Monterey Institute of International Studies in downtown Washington, DC, from 10-11:30 am, where I will talk about what it can do and why I’ve made it, and give a demonstration of how it works. Shortly after that, the whole thing will go live for the entire world.

Nukemap preview - fallout

Radioactive fallout dose contours from a 2.3 megaton surface burst centered on Washington, DC, assuming a 15 mph wind and 50% yield from fission. Colors correspond to 1, 10, 100, and 1,000 rads-per-hour at 1 hour. This detonation is modeled after the Soviet weapons in play during the Cuban Missile Crisis.

I don’t want to spill all of the beans early, but here’s a teaser. There is not just one new NUKEMAP. There are two new NUKEMAPs. One of them is a massive overhaul of the back-end of the old NUKEMAP, with much more flexible effects calculations and the ability to chart all sorts of other new phenomena — like radioactive fallout (finally!), casualty estimates, and the ability to specify airbursts versus ground bursts. All of these calculations are based on models developed by people working for the US government during the Cold War for use in government effects planning. So you will have a lot of data at your instant disposal, should you want it, but all within the smooth, easy-to-use NUKEMAP interface you know and love.

This has been a long time in development, and has involved me chasing down ancient government reports, learning how to interpret their equations, and converting them to Javascript and the Google Maps API. So you can imagine how “fun” (read: not fun) that was, and how Beautiful Mind my office and home got in the process. And as you’ve no doubt noticed in the last few weeks, doing obsessive, detailed, mathematical technical work in secret all week did not give me a lot of inspiration for historical blog posts! So I’ll be glad to move on from this, and to get it out in the light of day. (Because unlike the actual government planners, my work isn’t classified.)

Above is an image from the report which I used to develop the fallout model. Getting a readable copy of this involved digging up an original copy at the National Library of Medicine, because the versions available in government digital databases were too messed up to reliably read the equations. Some fun: none of this was set up for easy translation into a computer; it was designed to help you draw the contours by hand, which made translating it into Javascript all the more enjoyable. More fun: many of these old reports had at least one typo hidden in their equations that I had to ferret out. Well, perhaps that was for the best — I feel I truly grok what these equations are doing at this point, and have a lot more confidence in them than in the old NUKEMAP scaling models (which, by the way, actually produce radii not that different from the new equations, for all of their simplifications).

But the other NUKEMAP is something entirely new. Entirely different. Something, arguably, without as much historical precedent — because people today have more calculation and visualization power at their fingertips than ever before. It’s one thing for people to have the tools to map the bomb in two dimensions. There were, of course, even websites before the NUKEMAP that allowed you to do that to one degree or another. But I’ve found that, even as much as something like the NUKEMAP allows you to visualize the effects of the bomb on places you know, there was something still missing. People, myself included, were still having trouble wrapping their heads around what it would really look like for something like this to happen. And while thinking about ways to address this, I stumbled across a new approach. I’ll go into it more next week, but here’s a tiny teaser screenshot to give you a bit of an indication of what I’m getting at.

Nukemap preview

That’s the cloud from a 10 kiloton blast — the same yield as North Korea’s 2013 test, and the model the US government uses for a terrorist nuclear weapon — over midtown Manhattan, as viewed from New York harbor. It gives you a healthy respect for even a “small” nuclear weapon. And this is only part of what’s coming.

Much more next week. July 18th, 2013 — two days after the 68th anniversary of the Trinity test — the new NUKEMAPs are coming. Tell your friends, and stay tuned.


Enough Fallout for Everyone

Friday, August 3rd, 2012

Nuclear fallout is an incredible thing. As if the initial, prompt effects of a nuclear bomb weren’t bad enough — take that and then spread a plume of radioactive contamination on top of it. The Castle BRAVO accident was the event that really brought this to the forefront of public attention. I mean, the initial effects of a 15-megaton explosion are pretty stunning in and of themselves:

But the fallout plume extended for hundreds of miles:

Why yes, you can get this on a coffee mug!

Superimposed on an unfamiliar atoll, it’s hard to get a sense of how long that plume is. Put it on the American Northeast, though, and it’s pretty, well, awesome, in the original sense of the word:

Of course, it’s all about which direction the wind blows, in the end.

And remember… that’s just a single bomb!

Of course, if you’re interested in the more diffuse amounts of radioactivity — more than just the stuff that you know is probably bad for you — the fallout maps get even more interesting. Here’s what the BRAVO fallout did over the next month or so after the detonation:1

Now, you can’t see the numbers there, but they aren’t high — it’s not the same as being immediately downwind of these things. They’re low numbers… but they’re non-zero. And one of the “special” things about nuclear contaminants is that you can track them for a very long time, and see exactly how one test — or accident — in a remote area is intimately connected to the entire rest of the planet.

And, in fact, nearly everyone born during the era of atmospheric nuclear testing had some tiny bits of fallout in their bones — you can even determine how old a set of teeth is, to a very high degree of accuracy, by measuring its fallout content. (And before you think atmospheric testing is a matter of ancient history, remember that France and China both tested atmospheric nuclear weapons long after the Limited Test Ban Treaty! The last atmospheric test, by China, was in 1980!)

The same sorts of maps are used to show the dispersion of radioactive byproducts of nuclear reactors when accidents occur. I find these things sort of hypnotizing. Here are four “frames” from a simulation run by Lawrence Livermore National Laboratory on their ARAC computer showing the dispersion of radioactivity after the Chernobyl accident in 1986:2

Chernobyl ARAC simulation, day 2

Chernobyl ARAC simulation, day 4

Chernobyl ARAC simulation, day 6

Chernobyl ARAC simulation, day 10

Pretty incredible, no? Now, the odds are that there are lots of other contaminants that, could we track them, would show similar worldwide effects. Nuclear material may not be unique in having global reach — though its concentrations of radioactivity are far higher than you’d find with anything else — but it may be unique in that you can always measure it.

Yesterday I saw a new set of plots predicting the dispersion of Caesium-137 after the Fukushima accident from 2011. These are just models, not based on measurements; and all models have their issues, as the modelers at the Centre d’Enseignement et de Recherche en Environnement Atmosphérique (CEREA) who produced these plots acknowledge.

Here is their map for Cs-137 deposition after Fukushima. I’m not sure what the numbers really mean, health-wise, but the long reach of the accident is dramatic:


Map of ground deposition of caesium-137 for the Fukushima-Daiichi accident, by Victor Winiarek, Marc Bocquet, Yelva Roustan, Camille Birman, and Pierre Tran at CEREA. (Source)

Compare with Chernobyl. (Warning: the scales of these two images are different, so the colors don’t map onto the same values. This is kind of annoying and makes it hard to compare them, though it illustrates well the local effects of Chernobyl as compared to Fukushima.)


Map of ground deposition of caesium-137 for the Chernobyl accident, by Victor Winiarek, Marc Bocquet, Yelva Roustan, Camille Birman, and Pierre Tran at CEREA. (Source)

Lastly, they have an amazing animated map showing the plume as it expands across the Pacific. It’s about 5MB in size, and a Flash SWF, so I’m just going to link to it here. But you must check it out — it’s hypnotic, strangely beautiful, and disturbing. Here is a very stop-motion GIF version derived from their map, just to give you an incentive to see the real thing, which is much more impressive:

Fukushima-Daiichi activity in the air (caesium-137, ground level) (animated)

There’s plenty of fallout for everyone — more than enough to go around. No need to be stingy. And nearly seven decades into the nuclear age, there’s a little bit of fallout in everyone, too.

Update: The CEREA site seems to be struggling a bit. Here’s a locally-hosted version of the full animation. I’ll remove this when CEREA gets up and running again…

  1. Image from “Nature of Radioactive Fall-Out and Its Effects on Man, Part 1,” Hearings of the Joint Committee on Atomic Energy, Special Joint Subcommittee on Radiation (May 27-29 and June 3, 1957), on 169. []
  2. These images are courtesy of the DOE Digital Archive. []