Posts Tagged ‘Nuclear fallout’

Meditations

Castle Bravo at 60

Friday, February 28th, 2014

Tomorrow, March 1, 2014, is the 60th anniversary of the Castle Bravo nuclear test. I’ve written about it several times before, but I figured a discussion of why Bravo matters was always welcome. Bravo was the first test of a deliverable hydrogen bomb by the United States, proving not only that you could make nuclear weapons with explosive yields a thousand times more powerful than the Hiroshima bomb, but that you could make them in small enough packages to fit onto airplanes. It was what truly inaugurated the megaton age (more so than the first H-bomb test, Ivy Mike, which was explosively large but still in a bulky, experimental form). As a technical demonstration it would be historically important even if nothing else had happened.


One of the early Castle Bravo fallout contours showing accumulated doses. Source.

But nobody says something like that unless other things — terrible things — did happen. Two things went wrong. The first is that the bomb was even more explosive than the scientists thought it would be. Instead of 6 megatons of yield, it produced 15 megatons, an error of 250%, which matters when you are talking about millions of tons of TNT. The technical error, in retrospect, reveals how tentative their knowledge still was: the fusion component of the design contained two isotopes of lithium, and the designers assumed that only one of them, lithium-6, would be reactive. They were wrong; the supposedly inert lithium-7 bred tritium under neutron bombardment as well. The second problem is that the wind changed. Instead of carrying the copious radioactive fallout that such a weapon would produce over the open ocean, where it would be relatively harmless, it carried it over inhabited atolls in the Marshall Islands. This necessitated evacuation and long-term health monitoring, and produced terrible long-term health outcomes for many of the people on those islands.

If it had just been natives who were exposed, the Atomic Energy Commission might have been able to keep things hushed up for awhile — but it wasn’t. A Japanese fishing boat, ironically named the Fortunate Dragon, drifted into the fallout plume as well and returned home sick and with a cargo of radioactive tuna. One of the fishermen later died (whether that was because of the fallout exposure or because of the treatment regime is apparently still a controversial point). The affair became a major diplomatic incident between Japan and the United States; Japan resented once again having the distinction of having been irradiated by the United States, and the controversy meant that Bravo became extremely public. Suddenly the United States was, for the first time, admitting it had the capability to make multi-megaton weapons. Suddenly it was having to release information about long-distance, long-term contamination. Suddenly fallout was in the public mind — and its popular culture manifestations (Godzilla, On the Beach) soon followed.


Map showing points (X) where contaminated fish were caught or where the sea was found to be unusually radioactive, following the Castle Bravo nuclear test. This sort of thing gets public attention.

But it’s not just the public who started thinking about fallout differently. The Atomic Energy Commission wasn’t new to the idea of fallout — they had measured the plume from the Trinity test in 1945, and knew that ground bursts produced radioactive debris.

So you’d think that they’d have made lots of fallout studies prior to Castle. I had thought about producing some kind of map with all of the various fallout plumes through the 1950s superimposed on it, but it proved harder than I expected — there are just a lot fewer mapped fallout plumes prior to Bravo than you might think. Why? Because prior to Bravo, they generally did not map downwind fallout plumes for shots in the Marshall Islands — they only mapped upwind plumes. So you get results like this for Ivy Mike, a very “dirty” 10.4 megaton explosion that did produce copious fallout, but you’d never know it from this map:

Fallout from the 1952 "Ivy Mike" shot of the first hydrogen bomb. Note that this is actually the "back" of the fallout plume (the wind was blowing it north over open sea), and they didn't have any kind of radiological monitoring set up to see how far it went. As a result, this makes it look far more local than it was in reality. This is from a report I had originally found in the Marshall Islands database.

To make it even more clear what you’re looking at here: the wind in this shot was blowing north — so most of the fallout went north. But they only mapped the fallout that went south, a tiny amount of the total fallout. So it looks much, much more contained than it was in reality. You want to shake these guys, retrospectively.

It’s not that they didn’t know that fallout went further downwind. They had mapped the Trinity test’s long-range fallout in some detail, and starting with Operation Buster (1951) they had begun mapping downwind plumes for many of the tests that took place at the Nevada Test Site. But for ocean shots, they didn’t get their logistics together, because, you know, the ocean is big. Such is one of the terrible ironies of Bravo: we know its downwind fallout plume well because it went over (inhabited) land, and otherwise they probably wouldn’t have bothered measuring it.

The publicity given to Bravo meant that its fallout plume got wide, wide dissemination — unlike the Trinity test’s plume, unlike the other ones they were creating. In fact, as I mentioned before, there were a few “competing” drawings of the fallout cloud circulating internally, because fallout extrapolation is non-trivially difficult:

BRAVO fallout contours produced by the AFSWP, NRDL, and RAND Corp. Source.

But once these sorts of things were part of the public discourse, it was easy to start imposing them onto other contexts beyond islands in the Pacific Ocean. They were superimposed on the Eastern Seaboard, of course. They became a stock trope for talking about what nuclear war was going to do to the country if it happened. The term “fallout,” which was not used even by the government scientists as a noun until around 1948,1 suddenly took off in popular usage:


Google Ngram chart of the usage of the word “fallout” in English language books and periodicals. Source.

The significance of fallout is that it threatens and contaminates vast areas — far more vast than the areas immediately affected by the bombs themselves. It means that even a large-scale nuclear attack that tries to only threaten military sites is also going to do both short-term and long-term damage to civilian populations. (As if anyone really considered just attacking military sites, though; everything I have read suggests that this kind of counter-force strategy was never implemented by the US government even if it was talked about.)

It meant that there was little escaping the consequences of a large nuclear exchange. Sure, there are a few blank areas on maps like this one, but think of all the people, all the cities, all the industries that are within the blackened areas of the map:


Oak Ridge National Laboratory estimate of “accumulated 14-day fallout dose patterns from a hypothetical attack on the United States,” 1986. I would note that these are very high exposures and I’m a little skeptical of them, but in any case, it represents the kind of messages that were being given on this issue. Source.

Bravo inaugurated a new awareness of nuclear danger, and arguably, a new era of actual danger itself, when the weapons got big, radiologically “dirty,” and contaminating. Today they are much smaller, though still dirty and contaminating.

I can’t help but feel, though, that while transposing Bravo-like fallout patterns onto other countries is a good way to get a sense of their size and importance, it still misses something. I recently saw this video that Scott Carson posted to his Twitter account of a young Marshallese woman eloquently expressing her rage about the contamination of her homeland, and at the fact that people were more concerned about the exposure of goats and pigs to nuclear effects than they were about the islanders:

I’ve spent a lot of time looking at the reports of the long-term health effects on the Marshallese people. It is always presented as a cold, hard science — sometimes even as a “benefit” to the people exposed (hey, they got free health care for life). Here’s how the accident was initially discussed in a closed session of the Congressional Joint Committee on Atomic Energy, for example:

Chairman Cole: “I understand even after they [the natives of Rongelap] are taken back you plan to have medical people in attendance.”

Dr. Bugher: “I think we will have to have a continuing study program for an indefinite time.”

Rep. James Van Zandt: “The natives ought to benefit — they got a couple of good baths.”

Which is a pretty sick way to talk about an accident like this, even if all of the facts weren’t in yet. Even for a classified hearing.

What’s the legacy of Bravo, then? For most of us, it was a portent of dangers to come, a peek into the dark dealings that the arms race was developing. But for the people on those islands, it meant that “the Marshall Islands” would always be followed by “where the United States tested 67 nuclear weapons” and a terrible story about technical hubris, radioactive contamination, and long-term health problems. I imagine that people from these islands and people who grew up near Chernobyl probably have similar, terrible conversations.


A medical inspection of a Marshallese woman by an American doctor. “Project 4,” the biomedical effects program of Operation Castle was initially planned to be concerned with “mainly neutron dosimetry with mice” but after the accident an additional group, Project 4.1, was added to study the long-term exposure effects in human beings — the Marshallese. Image source.

I get why the people who made and tested the bombs did what they did, what their priorities were, what they thought hung in the balance. But I also get why people would find their actions a terrible thing. I have seen people say, in a flip way, that there were “necessary sacrifices” for the security that the bomb is supposed to have brought the world. That may be so — though I think one should consult the “sacrifices” in question before passing that judgment. But however one thinks of it, one must acknowledge that the costs were high.

Notes
  1. William R. Kennedy, Jr., “Fallout Forecasting—1945 through 1962,” LA-10605-MS (March 1986), on 5.
Meditations | Visions

What the NUKEMAP taught me about fallout

Friday, August 2nd, 2013

One of the most technically difficult aspects of the new NUKEMAP was the fallout generation code. I know that in practice it looks like just a bunch of not-too-complicated ellipses, but finding a fallout code that would provide what I considered to be necessary flexibility proved to be a very long search indeed. I had started working on it sometime in 2012, got frustrated, returned to it periodically, got frustrated again, and finally found the model I eventually used — Carl Miller’s Simplified Fallout Scaling System — only a few months ago.


The sorts of contours the Miller scaling model produces.

The fallout model used is what is known as a “scaling” model. This is in contrast with what Miller terms a “mathematical” model, which is a much more complicated beast. A scaling model lets you input only a few simple parameters (e.g. warhead yield, fission fraction, and wind speed), and the output is the kind of idealized contours seen in the NUKEMAP. This model, obviously, doesn’t capture the complexities of real life, but as a rough indication of the type of radioactive contamination expected, and over what kind of area, it has its uses. The mathematical model is the sort that requires much more complicated wind parameters (such as the various wind speeds and shears at different altitudes) and tries to produce something that looks more “realistic.”
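To make the contrast concrete, here is a purely illustrative sketch of what a scaling model’s interface looks like. This is not Miller’s actual system; the constants and exponents below are invented placeholders, meant only to show how a handful of simple inputs map to one idealized downwind ellipse:

```python
def toy_fallout_contour(yield_kt, fission_fraction, wind_mph):
    """Illustrative scaling-model interface only. These are NOT Miller's
    equations: the constants and exponents are invented placeholders."""
    fission_kt = yield_kt * fission_fraction  # only the fission yield drives fallout
    # Hypothetical scaling: downwind reach grows with fission yield and
    # gets stretched out by the wind.
    downwind_miles = 5.0 * fission_kt ** (1 / 3) * (wind_mph / 15.0)
    max_width_miles = downwind_miles / 5.0  # a long, thin idealized ellipse
    return {"downwind_miles": downwind_miles, "max_width_miles": max_width_miles}

# E.g., a 1-megaton surface burst, 50% fission, in a 15 mph wind:
contour = toy_fallout_contour(yield_kt=1000, fission_fraction=0.5, wind_mph=15)
```

A real scaling system replaces those placeholder numbers with empirically fitted curves, one per dose-rate contour; a mathematical model would instead ingest the full wind profile at many altitudes.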

The mathematical models are harder to get ahold of (the government has a few of them, but they don’t release them to non-government types like me) and require more computational power (so instead of running in less than a second, they require several minutes even on a modern machine). If I had one, I would probably try to implement it, but I don’t totally regret using the scaling model. In terms of communicating both the general technical point about fallout, and in the fact that this is an idealized model, it does very well. I would prefer people to look at a model and have no illusions that it is, indeed, just a model, as opposed to some kind of simulation whose slickness might engender false confidence.

Fallout from a total nuclear exchange, in watercolors. From the Saturday Evening Post, March 23, 1963.

Working on the fallout model, though, made me realize how little I really understood about nuclear fallout. I mean, my general understanding was still right, but I had a few subtle-but-important revelations that changed the way I thought about nuclear exchanges in general.

The most important one is that fallout is primarily a product of surface bursts. That is, the chief determinant of whether there is local fallout or not is whether the nuclear fireball touches the ground. Airbursts where the fireball doesn’t touch the ground don’t really produce fallout worth talking about — even if they are very large.

I read this in numerous fallout models and effects books and thought, can this be right? What’s the ground got to do with it? A whole lot, apparently. The nuclear fireball is full of highly-radioactive fission products. For airbursts, the cloud goes pretty much straight up, and those particles are light enough and hot enough that they pretty much just hang out at the top of the cloud. By the time they start to cool and drag enough to “fall out” of the cloud, they have diffused themselves in the atmosphere and also decayed quite a bit.1 So they are basically not an issue for people on the ground — you end up with exposures in the tenths or hundredths of a rad, which isn’t exactly nothing but is pretty low. This is more or less what they found at Hiroshima and Nagasaki — there were a few places where fallout had deposited, but it was extremely limited and very low radiation, as you’d expect with those two airbursts.

I thought this might be simplifying things a bit, so I looked up the fallout patterns for airbursts. And you know what? It seems to be correct. The radiation pattern you get from a “nominal” fission airburst looks more or less like this:


The on-side dose rate contours for the Buster-Jangle “Easy” shot (31 kilotons), in rads per hour. Notice that barely any radiation goes further than 1,100 yards from ground zero, and that even that is very low level (2 rads/hr). Source.

That’s not zero radiation, but as you can see it is very, very local, and relatively limited. The radiation is deposited over about the same range as the acute effects of the bomb itself, as opposed to something that affects people miles downwind.2
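The airburst-versus-surface-burst rule of thumb from the last few paragraphs amounts to a single comparison: does the fireball reach the ground? A minimal sketch, with the caveat that the fireball-radius formula below uses an invented constant (real fireball-size curves are empirical):

```python
def expects_local_fallout(yield_kt, burst_height_ft):
    """True if the fireball is expected to touch the ground, the chief
    determinant of local fallout. The radius formula is a placeholder
    with an invented constant; only the comparison itself is the point."""
    fireball_radius_ft = 200.0 * yield_kt ** 0.4  # hypothetical scaling
    return fireball_radius_ft >= burst_height_ft

# A surface burst always qualifies; a sufficiently high airburst never does.
surface = expects_local_fallout(31, 0)        # e.g. a ground-level 31 kt shot
high_air = expects_local_fallout(31, 50_000)  # the same bomb, burst very high
```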

What about very large nuclear weapons? The only obvious US test that fit the bill here was Redwing Cherokee, from 1956. This was the first thermonuclear airdrop by the USA, and it had a total yield of 3.8 megatons — nothing to sniff at, and a fairly high percentage of it (at least 50%) from fission. But, sure enough, there appears to have been basically no fallout pattern as a result. A survey meter some 100 miles from ground zero picked up a two-hour peak of .25 millirems per hour some 10 hours later — which is really nothing to worry about. The final report on the test series concluded that Cherokee produced “no fallout of military significance” (all the more impressive given how “dirty” many of the other tests in that series were). Again, not truly zero radiation, but pretty close to it, and all the more impressive given the megatonnage involved.3


Redwing Cherokee: quite a big boom, but almost no fallout.

The case of the surface burst is really quite different. When the fireball touches the ground, it ends up mixing the fission products with dirt and debris. (Or, in the case of testing in the Marshall Islands, coral.) The dirt and debris breaks into fine chunks, but it is heavy. These heavier particles fall out of the cloud very quickly, starting at about an hour after detonation and then continuing for the next 96 hours or so. And as they fall out, they both carry the nasty fission products and have other induced radioactivity as well. This is the fallout we’re used to from the big H-bomb tests in the Pacific (multi-megaton surface bursts on coral atolls were the worst combination possible for fallout) and even the smaller surface bursts in Nevada.

The other thing the new model helped me appreciate more is exactly how much the fission fraction matters. The fission fraction is the amount of the total yield that is derived from fission, as opposed to fusion. Fission is the only reaction that produces highly-radioactive byproducts. Fusion reactions produce neutrons, which are a definite short-term threat, but not so much a long-term concern. Obviously all “atomic” or fission bombs have a fission fraction of 100%, but for thermonuclear weapons it can vary quite a bit. I’ve talked about this in a recent post, so I won’t go into detail here, but just emphasize that it was unintuitive to me that the 50 Mt Tsar Bomba, had it been a surface burst, would have produced much less fallout than the 15 Mt Castle Bravo shot, because the latter had some 67% of its energy derived from fission while the former had only 3%. Playing with the NUKEMAP makes this fairly clear:

Fallout comparisons

The darkest orange here corresponds to 1,000 rads/hr (a deadly dose); the next darkest orange is 100 rads/hr (an unsafe dose); the next lighter orange is 10 rads/hr (ill-advised); the lightest yellow is 1 rad/hr (not such a big deal). So the 50 Mt Tsar Bomba is entirely within the “unsafe” range, as compared to the large “deadly” areas of the other two. Background location chosen only for scale!
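The arithmetic behind the Tsar Bomba/Bravo comparison, using the fission fractions quoted above, is easy to check:

```python
# Fallout tracks the fission yield, not the total yield. Figures from the text:
# Tsar Bomba at 50 Mt with ~3% fission, Castle Bravo at 15 Mt with ~67% fission.
designs = {
    "Tsar Bomba": (50.0, 0.03),
    "Castle Bravo": (15.0, 0.67),
}
fission_yields_mt = {
    name: total_mt * fission_fraction
    for name, (total_mt, fission_fraction) in designs.items()
}
# Tsar Bomba: 1.5 Mt of fission yield; Castle Bravo: about 10 Mt. The bomb
# with under a third of the total yield produces roughly seven times the
# radioactive fission products.
```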

All of this is quite important for understanding nuclear war. Weapons that are designed to flatten cities, perhaps surprisingly, don’t really pose as much of a long-term fallout hazard. The reason for this is that the ideal burst height for such a weapon is usually set to maximize the 10 psi pressure radius, and that is always fairly high above the ground. (The maximum radius for a given pressure is somewhat unintuitive because it relies on how the blast wave will be reflected off the ground, so it doesn’t produce a straightforward curve.) Bad for the people in the cities themselves, to be sure, but not such a problem for those downwind.

But weapons that are designed to destroy command bunkers, or missiles in silos, are the worst for the surrounding civilian populations. This is because such weapons are designed to penetrate the ground, and the fireballs necessarily come into contact with the dirt and debris. As a result, they kick up the worst sort of fallout that can stretch many hundreds of miles downwind.

So it’s sort of a damned-if-you-do, damned-if-you-don’t sort of situation when it comes to nuclear targeting. If you try to do the humane thing by only targeting counterforce targets, you end up producing the worst sort of long-range, long-term radioactive hazard. The only way to avoid that is to target cities — which isn’t exactly humane either. (And, of course, the idealized terrorist nuclear weapon manages to combine the worst aspects of both: targeting civilians and kicking up a lot of fallout, for lack of a better delivery vehicle.)


A rather wonderful 1970s fallout exposure diagram. Source.

And it is worth noting: fallout mitigation is one of those areas where Civil Defense is worth paying attention to. You can’t avoid all contamination by staying in a fallout shelter for a few days, but you can avoid the worst, most acute aspects of it. This is what the Department of Homeland Security has been trying to convince people of, regarding a possible terrorist nuclear weapon. They estimate that hundreds of thousands of lives could be saved in such an event, if people understood fallout better and acted upon it. But the level of actual compliance with such recommendations (stay put, don’t flee immediately) seems to me like it would be rather low.

In some sense, this made me feel even worse about fallout than I had before. Prior to playing around with the details, I’d assumed that fallout was just a regular result of such weapons. But now I see it more as underscoring the damnable irony of the bomb: that all of the choices it offers up to you are bad ones.

Notes
  1. Blasts low enough to form a stem do suck up some dirt into the cloud, but it happens later in the detonation, when the fission products have cooled and condensed a bit, and so doesn’t matter as much.
  2. Underwater surface bursts, like Crossroads Baker, have their own characteristics, because the water seems to cause the fallout to come down almost immediately. So the distances are not too different from the airburst pattern here — that is, very local — but the contours are much, much more radioactive.
  3. Why didn’t they test more of these big bombs as airdrops, then? Because their priority was on the experimentation and instrumentation, not the fallout. Airbursts were more logistically tricky, in other words, and were harder to get data from. Chew on that one a bit…
News and Notes | Visions

The new NUKEMAP is coming

Friday, July 12th, 2013

I’m excited to announce that, after a long development period, the new NUKEMAP is going to debut on Thursday, July 18th, 2013. There will be an event to launch it, hosted by the James Martin Center for Nonproliferation Studies of the Monterey Institute of International Studies in downtown Washington, DC, from 10-11:30 am, where I will talk about what it can do and why I’ve done it, and give a demonstration of how it works. Shortly after that, the whole thing will go live for the entire world.

Nukemap preview - fallout

Radioactive fallout dose contours from a 2.3 megaton surface burst centered on Washington, DC, assuming a 15 mph wind and 50% yield from fission. Colors correspond to 1, 10, 100, and 1,000 rads-per-hour at 1 hour. This detonation is modeled after the Soviet weapons in play during the Cuban Missile Crisis.

I don’t want to spill all of the beans early, but here’s a teaser. There is not just one new NUKEMAP. There are two new NUKEMAPs. One of them is a massive overhaul of the back-end of the old NUKEMAP, with much more flexible effects calculations and the ability to chart all sorts of new phenomena — like radioactive fallout (finally!), casualty estimates, and the ability to specify airbursts versus ground bursts. All of these calculations are based on models developed by people working for the US government during the Cold War for use in government effects planning. So you will have a lot of data at your instant disposal, should you want it, but all within the smooth, easy-to-use NUKEMAP interface you know and love.

This has been a long time in development, and has involved me chasing down ancient government reports, learning how to interpret their equations, and converting them to Javascript and the Google Maps API. So you can imagine how “fun” (read: not fun) that was, and how Beautiful Mind my office and home got in the process. And as you’ve no doubt noticed in the last few weeks, doing obsessive, detailed, mathematical technical work in secret all week did not give me a lot of inspiration for historical blog posts! So I’ll be glad to move on from this, and to get it out in the light of day. (Because unlike the actual government planners, my work isn’t classified.)

Above is an image from the report which I used to develop the fallout model. Getting a readable copy of this involved digging up an original copy at the National Library of Medicine, because the versions available in government digital databases were too messed up to reliably read the equations. Some fun: none of this was set up for easy translation into a computer, because nobody had computers in the 1960s. So it was designed to help you draw these contours by hand, which made translating them into Javascript all the more enjoyable. More fun: many of these old reports had at least one typo hidden in their equations that I had to ferret out. Well, perhaps that was for the best — I feel I truly grok what these equations are doing at this point, and I have a lot more confidence in them than in the old NUKEMAP scaling models (which, by the way, actually produce radii not that different from the new equations, for all of their simplifications).

But the other NUKEMAP is something entirely new. Entirely different. Something, arguably, without as much historical precedent — because people today have more calculation and visualization power at their fingertips than ever before. It’s one thing for people to have the tools to map the bomb in two dimensions. There were, of course, even websites before the NUKEMAP that allowed you to do that to one degree or another. But I’ve found that, even as much as something like the NUKEMAP allows you to visualize the effects of the bomb on places you know, there was something still missing. People, myself included, were still having trouble wrapping their heads around what it would really look like for something like this to happen. And while thinking about ways to address this, I stumbled across a new approach. I’ll go into it more next week, but here’s a tiny teaser screenshot to give you a bit of an indication of what I’m getting at.

Nukemap preview

That’s the cloud from a 10 kiloton blast — the same yield as North Korea’s 2013 test, and the model the US government uses for a terrorist nuclear weapon — on mid-town Manhattan, as viewed from New York harbor. It gives you a healthy respect for even a “small” nuclear weapon. And this is only part of what’s coming.

Much more next week. July 18th, 2013 — two days after the 68th anniversary of the Trinity test — the new NUKEMAPs are coming. Tell your friends, and stay tuned.

Visions

Enough Fallout for Everyone

Friday, August 3rd, 2012

Nuclear fallout is an incredible thing. As if the initial, prompt effects of a nuclear bomb weren’t bad enough — take that and then spread out a plume of radioactive contamination. The Castle BRAVO accident was the event that really brought this to the public forefront. I mean, the initial effects of a 15 megaton explosion are pretty stunning in and of themselves:

But the fallout plume extended for hundreds of miles:

Why yes, you can get this on a coffee mug!

Superimposed on an unfamiliar atoll, it’s hard to get a sense of how long that plume is. Put it on the American Northeast, though, and it’s pretty, well, awesome, in the original sense of the word:

Of course, it’s all about which direction the wind blows, in the end.

And remember… that’s just a single bomb!

Of course, if you’re interested in the more diffuse amounts of radioactivity — more than just the stuff that you know is probably bad for you — the fallout maps get even more interesting. Here’s what the BRAVO fallout did over the next month or so after the detonation:1

Now, you can’t see the numbers there, but they aren’t high — it’s not the same as being immediately downwind of these things. They’re low numbers… but they’re non-zero. But one of the “special” things about nuclear contaminants is that you can track them for a very long time, and see exactly how one test — or accident — in a remote area is intimately connected to the entire rest of the planet. 

And, in fact, nearly everyone born during the era of atmospheric nuclear testing had some tiny bits of fallout in their bones — you can even use it to determine, to a very high degree of accuracy, how old a set of teeth is, by measuring their fallout content. (And before you think atmospheric testing is a matter of ancient history, remember that France and China both tested atmospheric nuclear weapons long after the Limited Test Ban Treaty! The last atmospheric test, by China, was in 1980!)

The same sorts of maps are used to show the dispersion of radioactive byproducts of nuclear reactors when accidents occur. I find these things sort of hypnotizing. Here are four “frames” from a simulation run by Lawrence Livermore National Laboratory on their ARAC computer showing the dispersion of radioactivity after the Chernobyl accident in 1986:2

Chernobyl ARAC simulation, day 2

Chernobyl ARAC simulation, day 4

Chernobyl ARAC simulation, day 6

Chernobyl ARAC simulation, day 10

Pretty incredible, no? Now, the odds are that there are lots of other contaminants that, could we track them, would show similar worldwide effects. Nuclear may not be unique in the fact that it has global reach — though the concentrations of radioactivity are far higher than you’d find anywhere else — but it may be unique in that you can always measure it.

Yesterday I saw a new set of plots predicting the dispersion of Caesium-137 after the Fukushima accident from 2011. These are just models, not based on measurements; and all models have their issues, as the modelers at the Centre d’Enseignement et de Recherche en Environnement Atmosphérique (CEREA) who produced these plots acknowledge.

Here is their map for Cs-137 deposition after Fukushima. I’m not sure what the numbers really mean, health-wise, but the long reach of the accident is dramatic:


Map of ground deposition of caesium-137 for the Fukushima-Daichii accident by Victor Winiarek, Marc Bocquet, Yelva Roustan, Camille Birman, and Pierre Tran at CEREA. (Source)

Compare with Chernobyl. (Warning: the scales of these two images are different, so the colors don’t map onto the same values. This is kind of annoying and makes it hard to compare them, though it illustrates well the local effects of Chernobyl as compared to Fukushima.)


Map of ground deposition of caesium-137 for the Chernobyl accident, by Victor Winiarek, Marc Bocquet, Yelva Roustan, Camille Birman, and Pierre Tran at CEREA. (Source)

Lastly, they have an amazing animated map showing the plume as it expands across the Pacific. It’s about 5MB in size, and a Flash SWF, so I’m just going to link to it here. But you must check it out — it’s hypnotic, strangely beautiful, and disturbing. Here is a choppy, stop-motion GIF version derived from their map, just to give you an incentive to see the real thing, which is much more impressive:

Fukushima-Daichii activity in the air (caesium-137, ground level) (animated)

There’s plenty of fallout for everyone — more than enough to go around. No need to be stingy. And nearly seven decades into the nuclear age, there’s a little bit of fallout in everyone, too.

Update: The CEREA site seems to be struggling a bit. Here’s a locally-hosted version of the full animation. I’ll remove this when CEREA gets up and running again…

Notes
  1. Image from “Nature of Radioactive Fall-Out and Its Effects on Man, Part 1,” Hearings of the Joint Committee on Atomic Energy, Special Joint Subcommittee on Radiation (May 27-29 and June 3, 1957), on 169. []
  2. These images are courtesy of the DOE Digital Archive. []
Redactions

Bethe on SUNSHINE and Fallout (1954)

Wednesday, June 27th, 2012

Project SUNSHINE definitely takes the prize for “most intentionally-misleading title of a government program.” The goal of SUNSHINE (co-sponsored by the Atomic Energy Commission and RAND) was to figure out what the impact of radioactive fallout from American nuclear testing was on the world population. The initial study was started in 1953, and involved checking biological material for the radioactive fission product Strontium-90, with an attempt to correlate Sr-90 levels with various known nuclear test series. Not exactly what you think of when you hear the term “sunshine,” eh?

It actually gets much creepier than just the confusing name. The “biological material” they were studying was, well, dead organic matter. What kind of organic matter, specifically? The dataset for a “pre-pilot” study on Strontium-90 intake was a real witches’ brew:

  • “Wisconsin cheese (1 month old)”
  • “clam shells (Long Island)”
  • “Wisconsin cat bone”
  • “Montana cat (6 months, fed on milk from free-range cows)”
  • “stillborn, full term baby (Chicago)”
  • “rib from a Harvard man” 

Pardon me while I count my ribs… and cats… and… well… yuck. You can’t make this stuff up. Well, I can’t, anyway. Here’s your creepy meeting transcript of the week, from the planning of SUNSHINE: “Dr. Libby commented on the difficulty of obtaining human samples, and suggested that stillborn babies, which are often turned over to the physician for disposal, might be a practical source.”1

As an aside to an aside, in the full study, they did use samples from corpses — corpses of children seemed of particular interest — in getting their data. It’s a bit gory to read through their data sets as they describe the Sr-90 they found in the ribs or vertebrae of the dead. US scientist Shields Warren in particular seemed to have quite a lot of access to the bones of young children through the Cancer Research Institute in Boston, Massachusetts. Not a job I’d envy.2

Anyway — the document I wanted to share had nothing to do with the sample sources, but I got a little distracted while poking around in the SUNSHINE literature, and couldn’t not pass that on.

Hans Bethe and W.F. Libby

The letter in question comes from 1954, after SUNSHINE had been completed. It’s a request from December 1954 from the well-coiffed Hans Bethe to the aforementioned Willard F. Libby, the physical chemist best known as the inventor of radiocarbon dating (for which he would win a Nobel Prize, in 1960), and in 1954 one of the five Commissioners of the AEC.3 In the letter, Bethe is arguing in favor of SUNSHINE’s declassification — and his justifications are not necessarily what you might expect.4

Click to view PDF (yes, it’s in color!)

Bethe started out by noting that even in the summer of 1953, when SUNSHINE was being finished up, they (it seems that Bethe and Libby were both there) thought that it would “be highly desirable to declassify a large part of project SUNSHINE.” Bethe thought the matter had gotten rather urgent:

I still feel the same way about this, and I think the arguments for declassification have become far stronger than they were in 1953. There is real unrest both in this country and abroad concerning the long-range as well as short-range radioactivity, and it would, in my opinion, greatly allay the fears of the public if the truth were published.

There’s the kicker: Bethe was convinced that SUNSHINE would show that fallout from testing wasn’t as big a problem as people thought it was. Releasing SUNSHINE wouldn’t be a matter of propaganda (and holding it back wasn’t a matter of covering it up), in Bethe’s mind — it would simply be getting the facts out.

And why might people suddenly be getting concerned about nuclear fallout?

Map showing points (X) where contaminated fish were caught or where the sea was found to be excessively radioactive, following the Castle Bravo nuclear test.

No doubt because of all of the attention that the Castle BRAVO nuclear test had gotten with respect to high amounts of fallout finding their way into all sorts of biological systems far from the source — like the radioactive tuna that was caught for weeks afterwards in the waters off Japan.

Bethe understood, though, that the classification reasons holding back the publication of SUNSHINE were non-trivial. SUNSHINE studied the deposition of fission products following testing, and to make much sense of that, you had to know the fission yields of the tests. If you knew the fission yields, you’d know quite a lot about American nuclear weapons — especially if you knew the fission yield of the Ivy MIKE test, the first H-bomb.

Why? Because knowing the fission component of the first H-bomb test would possibly give away all sorts of information about the Teller-Ulam design. Multi-stage H-bombs have a reasonably large fission trigger that ignites the fusion fuel, which in turn induces more fission in a “natural” uranium tamper. In the case of MIKE, 77% of the total 10.4 megaton yield came from the final fission stage. Knowing that would be a good hint as to the composition of the American H-bombs, and was not something they wanted to share with the USSR.
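The arithmetic implied by those declassified figures is simple enough to sketch (the numbers are the ones quoted above; the breakdown into fission and fusion components is just my own back-of-the-envelope illustration):

```python
# Back-of-the-envelope arithmetic from the figures quoted above,
# not from Bethe's letter itself.
mike_total_mt = 10.4          # total yield of Ivy MIKE, in megatons
mike_fission_fraction = 0.77  # fraction from fission, per the declassified figure

mike_fission_mt = mike_total_mt * mike_fission_fraction
mike_fusion_mt = mike_total_mt - mike_fission_mt

print(f"MIKE fission yield: {mike_fission_mt:.1f} Mt")  # ~8.0 Mt
print(f"MIKE fusion yield:  {mike_fusion_mt:.1f} Mt")   # ~2.4 Mt
```

So roughly 8 megatons of MIKE came from fission — which is exactly the sort of disparity a weapons designer could read things into.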

But Bethe thought you could get around this:

I believe the story of SUNSHINE could be published without giving away any information about our H-bombs: it is merely necessary to put the permissible accumulated yield in terms of fission yield rather than total yield.

In other words, if you just talked of fission yield — and didn’t give the total yield — you wouldn’t be able to figure out how much of the yield was not fission, and thus the high disparity (which would be a big red flag for a weapons designer) would be hidden.

Bethe also thought that they should publish the fallout data from the H-bomb tests (likely including those from the CASTLE series). Bethe didn’t think that information would give away any design information, but it was clear that others were suspicious. Bethe put the question to a test: he asked Philip Morrison to try and figure out how an H-bomb worked from just published stories about the Castle BRAVO fallout accident.

A youngish Philip Morrison, courtesy of the Emilio Segrè Visual Archives.

Morrison at that point had no access to classified information. He had been part of the Manhattan Project, and so knew quite a bit about fission weapons, but had been cut out of the classified world by the time the H-bomb had come along. (More on Morrison’s security clearance another time — lots of interesting stories there.)

Morrison’s conclusions (oddly titled “FISSION ENERGY IN IVY,” even though it was about BRAVO) are attached to Bethe’s letter. In many ways it is an analysis typical of a somewhat cocky physicist: things are described as “easy,” conclusions follow “clearly,” and everything is stated as if it is pretty obvious and pretty straightforward. Morrison concludes that the total fission yield of BRAVO (again, misidentified as IVY) was between 0.2 Mt and 0.6 Mt, and that most of the fission must have come from the fission primary that started the reactions. In reality, 10 Mt of the 15 Mt total yield was from fission, which is why it was such a “dirty” shot.
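Just how far off Morrison was is worth putting into numbers (again, a sketch of my own using the figures quoted above; the “factor” framing is mine, not Morrison’s or Bethe’s):

```python
# Comparing Morrison's unclassified estimate with the actual figures
# quoted above. Illustrative arithmetic only.
bravo_total_mt = 15.0    # actual total yield of Castle BRAVO, megatons
bravo_fission_mt = 10.0  # actual fission component
morrison_high_mt = 0.6   # top of Morrison's estimated range (0.2-0.6 Mt)

fission_fraction = bravo_fission_mt / bravo_total_mt
underestimate_factor = bravo_fission_mt / morrison_high_mt

print(f"Actual fission fraction: {fission_fraction:.0%}")            # 67%
print(f"Morrison low by a factor of at least {underestimate_factor:.0f}")  # ~17
```

Even taking the most generous end of his range, Morrison’s estimate of the fission yield was low by more than an order of magnitude.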

Bethe took this as evidence that indeed, looking at just the fallout alone, you couldn’t figure out how much of the explosion was from fission yield, and thus the design information was safe: “As Morrison’s report shows, it seems to be easy to draw entirely wrong conclusions from the fall-out data.”

Why Morrison got this wrong is a little mysterious to me. Ralph Lapp had managed to conclude, more or less correctly, that there was a third “dirty” fission stage, and had popularized the idea enough that it trickled into Life magazine in December 1955. But Bethe thought Morrison’s analysis was more or less sound, given his lack of detailed information. It’s a weird thing to conclude, based on one study, that some piece of information is fundamentally unknowable, when you already know what the piece of information is.

Life magazine, 1955: not quite right, not entirely wrong.

Speaking of speculating based on missing information, part of Bethe’s letter is redacted, for reasons I do not know. His conclusion makes it pretty clear it has to do with this absolute vs. fission yield/fallout issue, though.

Bethe concludes: “I believe it would greatly improve international feeling about our Pacific tests if we were to publish the correct story of SUNSHINE and of fall-out.”

Libby would come around to Bethe’s position and push for declassification. In Libby’s mind, like Bethe’s, SUNSHINE showed that the world wasn’t going to become mutated just because of a little testing in the Pacific. Furthermore, he also came to believe that you could shut down a lot of the anti-nuclear testing demands by just showing people that you were paying close attention to this sort of thing — by the time of Operation Redwing (1956), he felt that this sort of disclosure had already made the international community more friendly to US testing.

It wasn’t until 1956 that the declassification eventually occurred, however, and even then, a lot of things were removed. (The “Amended*” in the RAND report cover page above is because it was “Amended to remove classified data; otherwise the report remains unchanged and represents the 1953 estimate of the fallout problem.”) Of course, by that point it was clear that the Soviets had already figured out how to make an H-bomb work.


Also! I will be giving a talk this Friday at the annual meeting of the Society for Historians of American Foreign Relations (SHAFR) in Hartford, CT. Just putting that out there.

Notes
  1. Minutes of the 36th Meeting of the General Advisory Committee to the U.S. Atomic Energy Commission (17, 18, and 19 August 1953), copy in the OHP Marshall Islands Document Collection. []
  2. E.g. E.A. Martell, “Strontium-90 Concentration Data for Biological Materials, Soils, Waters and Air Filters,” Project Sunshine Bulletin No. 12, [AECU-3297(Rev.)], (1 August 1956); human bone data listings start on page 29. []
  3. Libby was also the husband of Leona Woods, which I didn’t realize. Woods was the only woman who had a role in the development of CP-1, the first nuclear reactor, and stands out quite conspicuously in the Met Lab photographs. []
  4. Citation: Hans Bethe to W.F. Libby (17 December 1954), copy in Nuclear Testing Archive, Las Vegas, NV, document NV0032161. []