Liminal 1946: A Year in Flux

Friday, November 8th, 2013

There are lots of important and exciting years that people like to talk about when it comes to the history of nuclear weapons. 1945 obviously gets pride of place, being the year of the first nuclear explosion ever (Trinity), the first and only uses of the weapons in war (Hiroshima and Nagasaki), and the end of World War II (and thus the beginning of the postwar world). 1962 gets brought up because of the Cuban Missile Crisis. 1983 has been making a resurgence in our nuclear consciousness, thanks to lots of renewed interest in the Able Archer war scare. All of these dates are, of course, super important.

Washington Post - January 1, 1946

But one of my favorite historical years is 1946. It’s easy to overlook — while there are some important individual events that happen, none of them are as cataclysmic as some of the events of the aforementioned years, or even some of the other important big years. But, as I was reminded last week while going through some of the papers of David Lilienthal and Bernard Baruch that were in the Princeton University archives, 1946 was something special in and of itself. It is not the big events that define 1946, but the fact that it was a liminal year, a transition period between two orders. For policymakers in the United States, 1946 was when the question of “what will the country’s attitude towards the bomb be?” was still completely up for grabs, but over the course of the year, things became more set in stone.

1946 was a brief period when anything seemed possible. When nothing had yet calcified. The postwar situation was still fluid, and the American approach towards the bomb still unclear.

Part of the reason for this is that things went a little off the rails in 1945. The bombs were dropped, the war had ended, people were pretty happy about all of that. General Groves et al. assumed that Congress would basically take their recommendations for how the bomb should be regarded in the postwar (by passing the May-Johnson Bill, which military lawyers, with help from Vannevar Bush and James Conant, drafted in the final weeks of World War II). At first, it looked like this was going to happen — after all, didn’t Groves “succeed” during the war? But in the waning months of 1945, this consensus rapidly deteriorated. The atomic scientists on the Manhattan Project who had been dissatisfied with the Army turned out to make a formidable lobby, and they found allies amongst a number of Senators. Most important of these was first-term Senator Brien McMahon, who quickly saw an opportunity to jump into the limelight by making atomic energy his issue. By the end of the year, not only had Congressional support for the Army’s bill collapsed, but even Truman had withdrawn his support for it. In its place, McMahon proposed a bill that looked like something the scientists would have written — a much freer, less secret, civilian-run plan for atomic energy.

So what happened in 1946? Let’s just jot off a few of the big things I have in mind.

January: The United Nations meets for the first time. Kind of a big deal. The UN Atomic Energy Commission is created to sort out questions about the future of nuclear technology on a global scale. Hearings on the McMahon Bill continue in Congress through February.

Igor Gouzenko (masked) promoting a novel in 1954. The mask was to help him maintain his anonymity, but you have to admit it adds a wonderfully surreal and theatrical aspect to the whole thing.

February: The first Soviet atomic spy ring is made public when General Groves leaks information about Igor Gouzenko to the press. Groves wasn’t himself too concerned about it — it was only a Canadian spy ring, and Groves had compartmentalized the Canadians out of anything he considered really important — but it served the nice purpose of dashing the anti-secrecy lobby onto the rocks.

Also in February, George F. Kennan sends his famous “Long Telegram” from Moscow, arguing that the Soviet Union sees itself in essential, permanent conflict with the West and is not likely to liberalize anytime soon. Kennan argues that containment of the USSR through “strong resistance” is the only viable course for the United States.

March: The Manhattan Engineer District’s Declassification Organization starts full operation. Groves had asked the top Manhattan Project scientists to come up with the first declassification rules in November 1945, when he realized that Congress wasn’t going to be passing legislation as soon as he expected. They came up with the first declassification procedures and the first declassification guides, inaugurating the first systematic approach to deciding what was secret and what was not.

Lilienthal's own copy of the mass-market edition of the Acheson-Lilienthal Report, from the Princeton University Archives.

Lilienthal’s own copy of the mass-market edition of the Acheson-Lilienthal Report, from the Princeton University Archives.

March: The Acheson-Lilienthal Report is completed and submitted, in secret, to the State Department. It is quickly leaked and then followed up by an official publication from the State Department. Created by a sub-committee of advisors, headed by TVA Chairman David Lilienthal and with technical advice provided by J. Robert Oppenheimer, the Acheson-Lilienthal Report argued that the only way to a safe world was through “international control” of atomic energy. The scheme they propose is that the United Nations create an organization (the Atomic Development Authority) that would be granted full control over world uranium stocks and would have the ability to inspect all facilities that used uranium in significant quantities. Peaceful applications of atomic energy would be permitted, but making nuclear weapons would not be. If one thought of it as the Nuclear Non-Proliferation Treaty, except without any authorized possession of nuclear weapons, one would not be too far off the mark. Of note is that it is an approach to controlling the bomb that is explicitly not about secrecy, but about physical control of materials. It is not loved by Truman and his more hawkish advisors (e.g. Secretary of State Byrnes), but because of its leak and subsequent publication under a State Department header, it is understood to be “the” position of the United States government on the issue.

April: The McMahon Act gets substantial modifications while in committee, including the creation of a Military Liaison Committee (giving the military an official position in the running of the Atomic Energy Commission) and the introduction of a draconian secrecy provision (the “restricted data” concept that this blog takes its name from).

June: The Senate passes the McMahon Act. The House starts to debate it. Several changes are made to the House version of the bill — notably all employees with access to “restricted data” must now be investigated by the FBI and the penalty for misuse or espionage of “restricted data” is increased to death or life imprisonment. Both of these features were kept in the final version submitted to the President for signature in July.

June: Bernard Baruch, Truman’s appointee to head the US delegation of the UN Atomic Energy Commission, presents a modified form of the Acheson-Lilienthal Report to the UNAEC, dubbed the Baruch Plan. Some of the modifications are substantial, and are deeply resented by people like Oppenheimer who see them as torpedoing the plan. The Baruch Plan, for example, considered the question of what to do about violations of the agreement something that needed to be hashed out explicitly and well in advance. It also argued that the United States would not destroy its (still tiny) nuclear stockpile until the Soviet Union had proven it was not trying to build a bomb of their own. It was explicit about the need for full inspections of the USSR — a difficulty in an explicitly closed society — and stripped the UN Security Council of veto power when it came to enforcing violations of the treaty. The Soviets were, perhaps unsurprisingly, resistant to all of these measures. Andrei Gromyko proposes a counter-plan which, like the Baruch Plan, prohibits the manufacture and use of atomic weaponry. However, it requires full and immediate disarmament by the United States before anything else would go into effect, and excludes any international role in inspection or enforcement: states would self-regulate on this front.

Shot “Baker” of Operation Crossroads — one of the more famous mushroom clouds of all time. Note that the mushroom cloud itself is not the wide cloud you see there (which is a brief condensation cloud caused by it being an underwater detonation), but is the more bulbous cloud you see peeking out of the top of that cloud. You can see the battleships used for target practice near the base of the cloud. The dark mark on the right side of the stem may be an upturned USS Arkansas.

July: The first postwar nuclear test series, Operation Crossroads, begins at Bikini Atoll in the Marshall Islands. Now this is a curious event. Ostensibly the United States was in favor of getting rid of nuclear weapons, and in fact had not yet finalized its domestic legislation about the bomb. But at the same time, it planned to set off three of them, to see their effects on naval vessels. (In the end, they decided to set off only two.) The bombs were themselves still secret, of course, but it was decided that this event should be open to the world and its press. Even the Soviets were invited! As one contemporary report summed up:

The unique nature of the operation was inherent not only in its huge size — the huge numbers of participating personnel, and the huge amounts of test equipment and number of instruments involved — it was inherent also in the tremendous glare of publicity to which the tests were exposed, and above all the extraordinary fact that the weapons whose performance was exposed to this publicity were still classified, secret, weapons, which had never even been seen except by a few men in the inner circles of the Manhattan District and by those who had assisted in the three previous atomic bomb detonations. It has been truly said that the operation was “the most observed, most photographed, most talked-of scientific test ever conducted.” Paradoxically, it may also be said that it was the most publicly advertised secret test ever conducted.1

August: Truman signs the McMahon Act into law, and it becomes the Atomic Energy Act of 1946. It stipulates that a five-person Atomic Energy Commission will run all of the nation’s domestic atomic energy affairs, and while half of the law retains the “free and open” approach of the early McMahon Act, the other half has a very conservative and restrictive flavor to it, promising death and imprisonment to anyone who betrays atomic secrets. The paradox was deliberate, McMahon explained at the time: hashing out policy between those two extremes would produce rational discussion. Right. Did I mention he was a first-term Senator? The Atomic Energy Commission would take over from the Manhattan Engineer District starting in 1947.

A meeting of the UN Atomic Energy Commission in October 1946. At front left, speaking, is Andrei Gromyko. Bernard Baruch is the white-haired man sitting at the table at right behind the “U.S.A” plaque. At far top-right of the photo is a pensive J. Robert Oppenheimer. Two people above Baruch, in the very back, is a bored-looking General Groves. Directly below Groves is Manhattan Project scientist Richard Tolman. British physicist James Chadwick sits directly behind the U.K. representative at the table.

September: Baruch tells Truman that international control of atomic energy seems nowhere in sight. The Soviet situation has soured dramatically over the course of the year. The Soviets’ international control plan, the Gromyko Plan, requires full faith in Stalin’s willingness to self-regulate. Stalin, for his part, is not willing to sign a pledge of disarmament and inspection while the United States is continuing to build nuclear weapons. It is clear to Baruch, and even to more liberal-minded observers like Oppenheimer, that the Soviets are probably not going to play ball on any of this, not only because it would require them to forswear a potentially important weapon, but because any true plan would require them to become a much more open society.

October: Truman appoints David Lilienthal as the Chairman of the Atomic Energy Commission. Lilienthal is enthusiastic about the job — a New Deal technocrat, he thinks that he can use his position to set up a fairly liberal approach to nuclear technology in the United States. He is quickly confronted by the fact that the atomic empire established by the Manhattan Engineer District has decayed appreciably in the year after the end of the war, and that he has powerful enemies in Congress and in the military. His confirmation hearings start in early 1947, and are exceptionally acrimonious. I love Lilienthal as an historical figure, because he is an idealist who really wants to accomplish good things, but ends up doing almost the opposite of what he set out to do. To me this says a lot about the human condition.

November: The US Atomic Energy Commission meets for the first time in Oak Ridge, Tennessee. They adopt the declassification system of the Manhattan District, among other administrative matters.

December: Meredith Gardner, a cryptanalyst for the US Army Signal Intelligence Service, achieves a major breakthrough in decrypting wartime Soviet cables. A cable from 1944 contains a list of scientists working at Los Alamos — indications of a serious breach in wartime atomic security, potentially much worse than the Canadian spy ring. This information is kept extremely secret, however, as this work becomes a major component in the VENONA project, which (years later) leads to the discovery of Klaus Fuchs, Julius Rosenberg, and many other Soviet spies.

On Christmas Day, 1946, the Soviet Union’s first experimental reactor, F-1, goes critical for the first time.

The Soviet F-1 reactor, in 2009. It remains operational today — the longest-lived nuclear reactor by far.

No single event on that list stands out as on par with Hiroshima, the Cuban Missile Crisis, or even the Berlin Crisis. But taken together, I think, the list makes a strong argument for the importance of 1946. When one reads the documents from this period, one gets this sense of a world in flux. On the one hand, you have people who are hoping that the re-ordering of the world after World War II will present an enormous opportunity for creating a more peaceful existence. The ideas of world government, of the banning of nuclear weapons, of openness and prosperity, seem seriously on the table. And not just by members of the liberal elite, mind you: even US Army Generals were supporting these kinds of positions! And yet, as the year wore on, the hopes began to fade. Harsher analysis began to prevail. Even the most optimistic observers started to see that the problems of the old order weren’t going away anytime soon, that no amount of good faith was going to get Stalin to play ball. Which is, I should say, not to put all of the onus on the Soviets, as intractable as they were, and as awful as Stalin was. One can imagine a Cold War that was less tense, less explicitly antagonistic, less dangerous, even with the limitations that the existence of a ruler like Stalin imposed. But some of the more hopeful things seem, with reflection, like pure fantasy. This is Stalin we’re talking about, after all. Roosevelt might have been able to sweet talk him for a while, but even that had its limits.

We now know, of course, that the Soviet Union was furiously trying to build its own atomic arsenal in secret during this entire period. We also know that the US military was explicitly expecting to rely on atomic weapons in any future conflict, in order to offset the massive Soviet conventional advantage that existed at the time. We know that there was extensive Soviet espionage in the US government and its atomic program, although not as extensive as fantasists like McCarthy thought. We also know, through hard experience, that questions of treaty violations and inspections didn’t go away over time — if anything, I think, the experience of the Nuclear Non-Proliferation Treaty has shown that many of Baruch’s controversial changes to the Acheson-Lilienthal Report were pretty astute, and quickly got to the center of the political difficulties that all arms control efforts present.

As an historian, I love periods of flux and of change. (As an individual, I know that living in “interesting times” can be pretty stressful!) I love looking at where old orders break down, and new orders emerge. The immediate postwar is one such period — where ideas were earnestly discussed that seemed utterly impossible only a few years later. Such periods provide little windows into “what might have been,” alternative futures and possibilities that never happened, while also reminding us of the forces that bent things to the path they eventually went on.

Notes
  1. Manhattan District History, Book VIII, Los Alamos Project (Y) – Volume 3, Auxiliary Activities, Chapter 8, Operation Crossroads (n.d., ca. 1946).

Castle Bravo revisited

Friday, June 21st, 2013

No single nuclear weapons test did more to establish the grim realities of the thermonuclear age than Castle BRAVO. Detonated on March 1, 1954, it was the highest-yield test in the United States’ highest-yield nuclear test series, exploding with a force of 15 million tons of TNT. It was also the greatest single radiological disaster in American history.

Castle BRAVO, 3.5 seconds after detonation, photo taken from a distance of 75 nautical miles from ground zero, from an altitude of 12,500 feet. From DTRIAC SR-12-001.

Among BRAVO’s salient points:

  • It was the first “dry fuel” hydrogen bomb test by the United States, validating that lithium-deuteride would work fine as a fusion fuel and making thermonuclear weapons relatively easy to deploy.
  • It had a maximum predicted yield of only 6 megatons — at 15 megatons, it turned out to be two and a half times as explosive as expected.
  • And, of course, it became famous for raining nuclear fallout down on inhabited islands over a hundred miles downwind, and exposing a crew of Japanese fishermen to fatal levels of radiation.
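Just to make the arithmetic behind those bullet points explicit, here is a quick sanity check using only the figures quoted in this post (the 10-of-15-megaton fission split is Chuck Hansen’s number, cited in the notes below):

```python
# Quick sanity check of the BRAVO yield figures discussed in this post.
predicted_max_mt = 6.0    # maximum predicted yield, in megatons
actual_mt = 15.0          # actual yield, in megatons
fission_mt = 10.0         # fission yield, per Chuck Hansen (note 1)

overshoot = actual_mt / predicted_max_mt   # how far past the maximum prediction
percent_over = (overshoot - 1.0) * 100.0   # expressed as "X% more than expected"
fission_share = fission_mt / actual_mt     # fraction of total yield from fission

print(f"{overshoot:.1f}x the maximum prediction ({percent_over:.0f}% more)")
# → 2.5x the maximum prediction (150% more)
print(f"fission share: {fission_share:.0%}")
# → fission share: 67%
```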

It was this latter event that made BRAVO famous — so famous that the United States had to admit publicly it had a hydrogen bomb. And accidentally exposing the Japanese fish supply to radiation, less than a decade after Hiroshima and Nagasaki, has a way of making the Japanese people understandably upset. So the shot led to some almost frank discussion about what fallout meant — that being out of the direct line of fire wasn’t actually good enough.

Animation showing the progression of BRAVO’s fallout exposure, at 1, 3, 6, 12, and 18 hours. Original source.

I say “almost frank” because there was some distinct lack of frankness about it. Lewis Strauss, the secrecy-prone AEC Chairman at the time and an all-around awful guy, gave some rather misleading statements about the reasons for the accident and its probable effects on the exposed native populations. His goal was reassurance, not truth. But, as with so many things in the nuclear age, the narrative got out of his control pretty quickly, and the fear of fallout was intensified whether he wanted it to be or not.

We now know that the Marshallese suffered quite a lot of long-term harm from the exposures, and that the contaminated areas were contaminated for a lot longer than the AEC guessed they would be. Some of this discrepancy comes from honest ignorance — the AEC didn’t know what they didn’t know about fallout. But a lot of it also came from a willingness to appear on top of the situation, when the AEC was anything but.

“Jabwe, the Rongelap health practitioner, assists Nurse Lt. M. Smith and Dr. Lt. J. S. Thompson, during a medical examination on Kwajalein, 11 March 1954.” From DTRIAC SR-12-001.

I’ve been interested in BRAVO lately because I’ve been interested in fallout. It’s no secret that I’ve been working on a big new NUKEMAP update (I expect it to go live in a month or so) and that fallout is but one of the new amazing features that I’m adding. It’s been a long time coming, since I had originally wanted to add a fallout model a year ago, but it turned out to be a non-trivial thing to implement. It’s not hard to throw up a few scaled curves, but coming up with a model that satisfies the aesthetic needs of the general NUKEMAP user base (that is, the people who want it to look impressive but aren’t interested in the details) and also has enough technical chops so that the informed don’t just immediately dismiss it (because I care about you, too!) involved digging up some rather ancient fallout models from the Cold War (even going out to the National Library of Medicine to get one rare one in its original paper format) and converting them all to Javascript so they can run in modern web browsers. But I’m happy to say that as of yesterday, I’ve finally come up with something that I’m pleased with, and so I can now clean up my Beautiful Mind-style filing system from my office and living room.

The most famous version of BRAVO’s total-dose exposure contours, from Glasstone and Dolan. It looks great on a mug, by the way.

I was recently sent a PDF of a January 2013 report by the Defense Threat Reduction Information Analysis Center (DTRIAC) that looked back on the history of BRAVO. It doesn’t seem to be easily available online (though it is unclassified), so I’ve posted it here: “Castle Bravo: Fifty Years of Legend and Lore (DTRIAC SR-12-001).” I haven’t had time to read the whole thing, but skipping around has been rewarding — it takes a close look at the questions of fallout prediction, contamination, and several “myths” that have circulated since 1954. It notes that the above fallout contour plot, for example, was originally created by the USAF Air Research and Development Command (ARDC), and that “it is unfortunate that this illustration has been so widely distributed, since it is incorrect.” The plume, they explain, actually under-represents the extent of the fallout — the worst of the fallout went further and wider than in the above diagram.

You can get a sense of the variation by looking at some of the other plots created of the BRAVO plume:

BRAVO fallout contours produced by the Armed Forces Special Weapons Project, Naval Radiological Defense Laboratory, and the RAND Corporation. Source. Click image to enlarge.

The AFSWP diagram on the left is relatively long and narrow; the NRDL one in the middle is fat and horrible. The RAND one at the right is something of a compromise. All three, though, show the fallout going further than the ARDC model — some 50-100 miles further. On the open ocean that doesn’t matter so much, but apply that to a densely populated part of the world and that’s pretty significant!

DTRIAC SR-12-001 is also kind of amazing in that it has a lot of photographs of BRAVO and the Castle series that I’d never seen before, some of which you’ll see around this post. One of my favorites is this one, of Don Ehlher (from Los Alamos) and Herbert York (from Livermore) in General Clarkson’s briefing room on March 17, 1954, with little mockups of the devices that were tested in Operation Castle:

Ehler and York - Operation Castle devices

There’s nothing classified there — the shapes of the various devices have long been declassified — but it’s still kind of amazing to see all of their bombs on the table, as it were. They look like thermoses full of coffee. (The thing at far left might be a cup of coffee, for all that I can tell — unfortunately the image quality is not great.)

It also has quite a lot of discussion of several persistent issues regarding the exposure of the Japanese crew and the Marshallese natives. I didn’t see anything especially new here, other than the suggestion that the fatality from the Fortunate Dragon fishing boat might have been at least partially because of the very aggressive-but-ineffective treatment regime prescribed by the Japanese physicians, which apparently included the very dubious procedure of repeatedly drawing his blood and then re-injecting it into muscle tissue. I don’t know enough of the details to know what to think of that, but at least they do a fairly good job of debunking the notion that BRAVO’s contamination of the Marshallese was deliberate. I’ve seen that floating around, even in some fairly serious forums and publications, and it’s just not supported by real evidence.

Castle BRAVO, 62 seconds after detonation. “This image was take[n] at a distance of 50 [nautical miles] north GZ from an altitude of 10,000 feet. The lines running upward to the left of the stem and below the fireball are smoke trails from small rockets. At this time the cloud stem was about 4 mi in diameter.” From DTRIAC SR-12-001.

One thing that I hadn’t appreciated as well before is that BRAVO is pretty much a worst-case scenario from a radiological point of view. It was a very high-yield weapon that was very “dirty” right out of the box: 10 of its 15 megatons (67%) were from fission.1

It was detonated as a surface burst, which automatically means quite a significant fallout problem. Nuclear weapons that detonate so that their fireball does not come into contact with the ground release “militarily insignificant” amounts of fallout, even if their yields are very high. (They are not necessarily “humanly insignificant” amounts, but they are far, far, far less than surface bursts — it is not a subtle difference.2)

But even worse, it was a surface burst in a coral reef, which is just a really, really bad idea. Detonating nuclear weapons on a desert floor, like in Nevada, still presents significant fallout issues. But a coral reef is really an awful place to set them off, and not just because coral reefs are awesome and shouldn’t be blown up. They are an ideal medium for creating and spreading contamination: they break apart with no resistance, but do so in big enough chunks that they rapidly fall back to Earth. Particle size is a big deal when it comes to fallout; small particles go up with the fireball and stay aloft long enough to lose most of their radioactive energy and diffuse into the atmosphere, while heavy particles fall right back down again pretty quickly, en masse. So blowing up and irradiating something like coral is just the worst possible thing.3

Castle BRAVO, 16 minutes after detonation, seen from a distance of 50 nautical miles, at an altitude of 10,000 feet. From DTRIAC SR-12-001.

Note that the famous 50 Mt “Tsar Bomba” lacked a final fission stage and so only 3% of its total yield — 1.5 Mt — was from fission. So despite the fact that the Tsar Bomba was 3.3 times more explosive than Castle Bravo, it had almost 7 times fewer fission products. And its fireball never touched the ground (in fact, it was reflected upwards by its own shock wave, which is kind of amazing to watch), so it was a very “clean” shot radiologically. The “full-sized,” 100 Mt Tsar Bomba would have been 52% fission — a very dirty bomb indeed.
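That comparison reduces to simple arithmetic, and it is worth spelling out, using only the yields given in the text:

```python
# The Tsar Bomba vs. BRAVO comparison from the text, in numbers.
bravo_total, bravo_fission = 15.0, 10.0   # megatons; 67% fission
tsar_total = 50.0                         # megatons
tsar_fission = tsar_total * 0.03          # only 3% of yield from fission = 1.5 Mt

yield_ratio = tsar_total / bravo_total         # total explosive power ratio
fission_ratio = bravo_fission / tsar_fission   # "almost 7 times fewer" fission products

print(f"Tsar Bomba / BRAVO total yield: {yield_ratio:.1f}x")
# → Tsar Bomba / BRAVO total yield: 3.3x
print(f"BRAVO / Tsar Bomba fission yield: {fission_ratio:.1f}x")
# → BRAVO / Tsar Bomba fission yield: 6.7x
```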

In the end, what I’ve come to take away from BRAVO is that it actually was a mistake even more colossal than one might have originally thought. It was a tremendously bad idea from a human health standpoint, and turned into a public relations disaster that the Atomic Energy Commission never really could kick. 

In retrospect the entire “event” seems to have been utterly avoidable as a radiological disaster, even with all of the uncertainties about yield and weather. It’s cliché to talk about nuclear weapons in terms of playing with “forces of nature beyond our comprehension,” but I’ve come to feel that BRAVO is a cautionary tale about hubris and incompetence in the nuclear age — scientists setting off a weapon whose size they did not know, whose effects they did not correctly forecast, whose legacy will not soon be outlived.


Bonus! I have a multimedia piece just up on the website of the Bulletin of the Atomic Scientists: “The faces that made the Bomb.” Who told the FBI that Charlotte Serber might be a Communist? Why does the film footage of the “Trinity” test jump at the end? Why did the “Trinity” test director say, “Now we are all sons of bitches”? You’ll learn that and much more…

Notes
  1. Chuck Hansen, Swords of Armageddon, IV-299.
  2. The difference is about three orders of magnitude, judging by shots like Redwing CHEROKEE. That’s still a few rads, but the difference between 1,000 rad/hr and 1 rad/hr is pretty significant.
  3. Couldn’t they have foreseen this? In theory, yes — they had already blown up a high-yield, “dirty” fission hydrogen bomb on a coral reef in 1952, the MIKE test. But somewhere along the line a number of AEC planners got their wires crossed, because a lot of them thought that MIKE had produced very little fallout, when in fact it also produced a lot of very similar contamination. Unlike BRAVO’s, however, MIKE’s fallout blew out over open sea. The only radiation monitoring seems to have been done on the islands, and so they don’t seem to have ever drawn up one of those cigar-shaped plumes for it. See e.g. the discussion here on page 51.

The Problem of Redaction

Friday, April 12th, 2013

Redaction is one of those practices we take for granted, but it is actually pretty strange if you think about it. I mean, who would imagine that the state would say, “well, all of this is totally safe for public consumption, except for a part right here, which is too awful to be legally visible, so I’ll just blot out that part. Maybe I’ll do it in black, maybe in white, maybe I’ll add DELETED in big bold letters, just so I know that you saw that I deleted it.”

From Hans Bethe’s “Memorandum on the History of the Thermonuclear Program” (1952), which features some really provocative DELETED stamps. A minimally-redacted version assembled from many differently redacted copies by Chuck Hansen is available here.

From a security perspective, it’s actually rather generous. The redactor is often giving us the context of the secret, the length of the material kept from us (a word? a sentence? a paragraph? a page?), and helpfully drawing our eye to the parts of the document that still contain juicy bits. The Onion’s spoof from a few years back, “CIA Realizes It’s Been Using Black Highlighters All These Years,” is only slightly off from the real truth. Blacking something out is only a step away from highlighting its importance, and the void makes us curious. In fact, learning what was actually in there can be quite anticlimactic, much like learning how a magician does a trick (“the guy in the audience is in on it”).

And, of course, the way the US declassification system is set up virtually guarantees that multiple, differently-redacted copies of documents will eventually exist. Carbon copies of the same documents exist in multiple agencies; each agency can be separately petitioned for copies of its files, and each will hand them to its own reviewers, who will consult their guides and try to interpret them. There’s very little centralization, and lots of individual discretion in interpreting the guides.

The National Security Archive recently posted an Electronic Briefing Book that was very critical of this approach. In their case, they pointed out that a given paragraph in a once-secret document that was deemed by the redactor to be completely safe in 2001 was in 2003 deemed secret again, and then, in 2007, reaffirmed safe, and then, in 2012, again secret. “There often seems little logic to redaction decisions, which depend on the whim of the individual reviewer, with no appreciation of either the passage of time or the interests of history and accountability,” writes Michael Dobbs.

This sort of thing happens all the time, of course. In the National Security Archive’s Chuck Hansen papers there are bundles of little stapled “books” he would create of multiple, differently-redacted copies of the same document. They are a fun thing to browse through, viewing four different versions of the same page, each somewhat differently hacked up.

A page from a 1951 meeting transcript of the General Advisory Committee, from the Hansen files. Animated to show how he staples three different copies together. Some documents contain five or more separate versions of each page. For closer inspections of the page, click here.

In the case of Hansen’s papers, these differences came about because he was filing Freedom of Information Act requests (or looking at the results of others’ requests) over extended periods of time to different agencies. The passage of time is important, because guides change in the meantime (usually towards making things less secret; “reclassification” is tricky). And the multiple sites mean you are getting completely different redactors looking at it, often with different priorities or expertise.

Two different redactors, working with the exact same guides, can come up with very different interpretations. This is arguably inherent to any kind of classifying system, not just one for security classifications. (Taxonomy is a vicious profession.) The guides that I have seen (all historical ones, of course) are basically lists of statements and classifications. The statements vary quite a bit: sometimes they are precise technical facts, referencing specific numbers; sometimes they are broad programmatic facts, covering entire fields of study; sometimes they are just information about meetings that have been held. There aren’t any items that, from a distance, resemble flies, but it’s not too far off from Borges’ mythical encyclopedia.

The statements try to be clear, but if you imagine applying them to a real-life document, you can see where lots of individual discretion would come into the picture. Is fact X implied by sentence Y? Is it derivable, if paired with sentence Z? And so on. And there’s a deeper problem, too: if two redactors identify the same fact as being classified, how much of the surrounding context do they also snip out with it? Even a stray preposition can give away information, like whether the classified word is singular or plural. What starts as an apparently straightforward exercise in cutting out secrets quickly becomes a strange deconstructionist enterprise.

One of my favorite examples of differently redacted documents came to me through two Freedom of Information Act requests to the same agency at about the same time. Basically, two different people (I presume) at the Department of Energy looked at this document from 1970, and this was the result:

1970 AEC declassification guide redactions

In one, the top excerpt is deemed declassified and the bottom classified. In the other, the reverse. Put them together, and you have it all.  (While I’m at it, I’ll also just add that a lot of classified technical data looks more or less like the above: completely opaque if you aren’t a specialist. That doesn’t mean it isn’t important to somebody, of course. It is one of the reasons I am resistant to any calls for “common sense” classification, because I think we are well beyond the “common” here.) In this case, the irony is double, because what they’re de/classifying are excerpts from classification guides… very meta, no?1

What’s going on here? Did the redactors really interpret their guidelines in exactly the opposite ways? Or are both of these borderline cases where discretion was required? Or was it just an accident? Any of these could be plausible explanations, though I suspect they are each borderline cases and their juxtaposition is just a coincidence. I don’t actually see this as a symptom of dysfunction, though. I see it as a natural result of the kind of declassification system we have. It’s the function, not the dysfunction — it’s just that the function is set up to have these kinds of results.

The idea that you can slot all knowledge into neat little categories that perfectly overlap with our security concerns is already a problematic one, as Peter Galison has argued. Galison’s argument is that security classification systems assume that knowledge is “atomic,” which is to say, comes in discrete bundles that can be disconnected from other knowledge (read “atomic” like “atomic theory” and not “atomic bomb”). The study of knowledge (either from first principles or historically) shows exactly the opposite — knowledge is constituted by sending out lots of little tendrils to other bits of knowledge, and knowledge of the natural world is necessarily interconnected. If you know a little bit about one thing you often know a little bit about everything similar to it.

For this archive copy of a 1947 meeting of the General Advisory Committee, all of the raw numbers were cut out with X-Acto knives. Somewhere, one hopes, is an un-mutilated version. In some cases, numbers like these were initially omitted in drawing up the original documents, and a separate sheet of numbers would be kept in a safe, to be produced only when necessary.

This is a good philosophical point, one that arguably is a lot stronger for scientific facts than many others (the number of initiators, for example, is a lot less easily connected to other facts than is, say, the chemistry of plutonium), but I would just add that layered on top of this is the practical problem of trying to get multiple human beings to agree on the implementations of these classifications. That is, the classifications are already problematic, and now you’re trying to get people to interpret them uniformly? Impossible… unless you opt for maximum conservatism and a minimum of discretion. Which isn’t what anybody is calling for.

In theory, you can read the classification history of a document from all of its messy stamps and scribblings. They aren’t just for show; they tell you what it’s been through, and how to regard it now.

Declassification can be arbitrary, or at least appear arbitrary to those of us locked outside of the process. (It is one of the symptoms of secrecy that the logic of the redactor is itself usually secret.) But to me, the real sin of our current system is the lack of resources put towards it, which makes the whole thing run slow and leads to huge backlogs. When the system is running at a swift pace, you can at least know what it is they’re holding back from you, compare it to other sources, file appeals, draw attention to it, and so on. When it takes years to start processing requests (as is the case with the National Archives, in my experience; it varies a lot by agency), much less actually declassify them, there is a real impediment to research and public knowledge. I’d rather declassification be arbitrary and fast than conservative and slow.

That individual redactors, interpreting the guidelines according to the standards they are told to use, come up with different results doesn’t bother me as much. There is going to be a certain amount of error in any large system, especially one that deals with borderline cases and allows individual discretion. Sometimes you win, sometimes you lose, but it’s being able to play the game in the first place that matters the most to me.

Notes
  1. The document is a discussion of instances in which classification guidelines are based on strict numerical limits, as opposed to general concepts. Citation is: Murray L. Nash to Theos Thomson (3 November 1970), “AEC Classification Guidance Based on Numerical Limits,” part of SECY-625, Department of Energy Archives, RG 326, Collection 6 Secretariat, Box 7832, Folder 6, “O&M 7 Laser Classification Panel.” The top was received as the response to a FOIA request I made in 2008, the bottom to another one in 2010. Both requests related to declassification decisions about inertial confinement fusion; the memo in question was part of information given to a panel of scientists regarding creating new fusion classification guidelines. []
Visions

The story behind the IAEA’s atomic logo

Friday, January 11th, 2013

Since my post last week was such a bummer, I thought I’d do something a little more fun and trivial today.

The International Atomic Energy Agency (IAEA) has, without much competition, the coolest logo of any part of the UN.1 Heck, I’ll go so far as to say that they have the coolest logo of any atomic-energy organization in history. I mean, check this thing out:

IAEA flag

It’s not only an atom, it’s an atom with style. It’s got a classic late-1950s/early-1960s asymmetrical, jaunty swagger. Those electrons are swinging, baby! This is an atom for love, not war, if you dig what I’m saying. An atom that knows how to have fun, even when it’s doing serious business, like investigating your nuclear program. The James Bond of atoms.

The staid seal of the US Atomic Energy Commission cannot really compete for hipness, though it gets major nostalgia points and I love it dearly. The emblem of the Atomic Energy Organization of Iran is arguably the only real runner-up — check out that minimalism! Most worldwide atomic energy organizations/commissions have variously tacky rip-offs of the original AEC logo. The UK’s Atomic Energy Authority gets props for having the least cool emblem of any atomic energy agency, and also the least obviously atomic (the little sun at the top, and the Latin, somewhat give it away). On the other hand, it’s the only one that looks like it would be perfectly at home inside a Lewis Carroll book.

For a while, I’ve been kind of obsessed with finding out who drew this thing. It looks remarkably similar to the aesthetic adopted by the Swiss designer Erik Nitsche, who did a lot of other groovy atomic posters for General Dynamics. This poster of Nitsche’s from 1955 has similarly jaunty orbitals, though I don’t think they’re meant to be electrons:

But upon further investigation, I’ve not found any evidence that Nitsche was involved, sadly. In fact, all signs point to an anonymous staffer in the US State Department, but the story is a bit more fun than just that.

The IAEA was founded in 1957 as the UN organization that would try to enact the “Atoms for Peace” plans of the Eisenhower administration. It wasn’t yet the nuclear watchdog organization; that came later, with the Nuclear Non-Proliferation Treaty. Its first head was W. Sterling Cole, a former US Congressman who had been a member of the Joint Committee on Atomic Energy. From pretty much the very beginning, the IAEA started using a little atomic logo on its letterhead:

IAEA letterhead, July 1957

The first instance of this I’ve found is the above, dated July 1957 (though the document was published in August), which is the same time as the IAEA came into being, more or less. By October 1957 they were using a white-on-black approach but it was basically the same thing. An internal IAEA history chalks its creation up to someone within the US State Department or the US Atomic Energy Commission — which is another way of saying, they have no real idea, except that it seems to have come from America. Fair enough, I suppose, though looking at the Atomic Energy Commission’s own stab at an “Atoms for Peace” logo, I find it really unlikely that they had anything to do with it:

It’s a pretty different aesthetic: that staid AEC atom (perfectly symmetrical), plus a dog’s breakfast of other generic symbols (microscope! medicine! a gear! wheat!). It’s a lousy emblem — it’s messy, it’s generic, and it has finicky details that wouldn’t reproduce well at a small size, which means that it always looks too big.

Anyway, the first IAEA logo, as you can see, was a somewhat informal thing — it’s not as stylized, and its lines aren’t very confident, but the essence of the final emblem is there, hidden within it. It doesn’t have little dots for electrons, and the asymmetry looks only somewhat intentional. They used this up until 1958 without anybody raising any fuss, and without formally adopting it.

But at some point in 1958, someone with just enough education to be dangerous noticed that their little peaceful atom had three electrons. What element has three electrons, typically? Lithium. What’s lithium most associated with, in the area of atomic energy? Hydrogen bombs. Lithium deuteride is the main fusion fuel in hydrogen bombs. When the lithium nucleus absorbs a neutron, it turns into tritium and helium. Tritium and deuterium readily fuse. It’s a nice little reaction — if you’re a weapon designer. If you’re an “Atoms for Peace” agency, it’s a little more problematic. So someone — again, nobody seems to know who — flipped out about this. An easy fix was proposed: add another electron! Then it’s no longer lithium… it’s beryllium. Let’s all collectively ignore the fact that beryllium is also used in nuclear weapons, and is also fiendishly toxic, to boot. If they’d just added one more orbital, it would have made boron, which is a neutron absorber that keeps you from getting irradiated — a little more on target, but nobody asked me.
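
For readers who want the offending physics spelled out, the reaction chain described above can be written as follows (these are standard textbook values, not something from the original documents):

```latex
% Lithium-6 absorbs a neutron, breeding tritium:
{}^{6}\mathrm{Li} + n \;\rightarrow\; {}^{3}\mathrm{H} + {}^{4}\mathrm{He} + 4.8\,\mathrm{MeV}
% The bred tritium then fuses with the deuterium:
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He} + n + 17.6\,\mathrm{MeV}
```

Which is exactly why three electrons looked so awkward on a “peaceful” atom.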

You can see the extra orbital somewhat crudely added to the original emblem in this backdrop at the Second Annual General Conference of the IAEA, from 1958:

They’ve also added little dots for the electrons. The version above is wildly, problematically asymmetrical — the orbitals don’t intersect well in the upper-left corner at all.

Once they started messing with it, though, things got a little out of control. Why stop with just another electron? Now here’s the part where I can clearly see an American governmental influence… they started mucking it up. To quote from that IAEA internal history I referenced before:

Once the process of altering the emblem had started, further suggestions were made and soon a design evolved in which the central circle had been expanded into a global map of the world and five of the eight loops formed by the ellipses contained respectively: a dove of peace with an olive branch; a factory with smoking chimneys and surcharged with a train of three gear wheels; a microscope; two spears of grain; and finally a caduceus, to symbolise respectively the peaceful, industrial, research, agricultural and medicinal uses of atomic energy.2

If that isn’t the most god-awful design-by-committee creation, what is? Another fun fact: they made it gold.

I’d love to show you a version of that one, but I can’t find a copy of it. It sounds wonderfully awful. The helpful folks at the IAEA archive have been unable to track down a drawing of it — at least, not with as much energy as they are willing to expend on such a folly, which is understandably limited. I’ve gone over every image I could find from the time period looking for a picture of it. No dice. But, just to have some fun, here is a “creative interpretation” of the above, which I have myself drawn up for you:

IAEA 1958 logo (artist's interpretation)

Ah, but they didn’t stop there. Cole, the IAEA Director General, apparently enjoyed this enough that he had the new emblem printed in gold on a blue flag, and put it up above the United Nations flag outside of the Third General Conference of the IAEA in 1959.

Apparently in UN-world, this was seen as a major scandal. A representative of the UN Secretary General, Dag Hammarskjöld, saw it, flipped out, and had it immediately removed. And it was never seen again. 

Shortly after this flap, it was decided that perhaps they ought to have a formal procedure for adopting an emblem. They rolled back all of the modifications except the extra orbital, cleaned up the layout a bit, and added a set of olive leaves to match the UN flag. On April 1, 1960, this finalized emblem was adopted by the IAEA Board of Governors, in a document that the IAEA archives folks were willing to dig up for me and post online:

INFCIRC/19 - The Agency's Emblem and Seal

As with all things, we’re left with the final product and generally no indication that there was a process to get to it. But, as with all things, there is a process: there is a history. Emblems don’t just pop out of nowhere fully formed, just as institutions, organizations, and policies always have to follow a messy path when coming into existence. The emblem, aside from being a piece of natty graphic design, is one of those typical organizational by-products. Somebody started drawing it, not knowing what it was, and they’ll continue drawing it forever just because… with a slight detour to make it especially ugly after having found a conceptual problem in their original, ad hoc, implementation.

Anyone who has dabbled in graphic design will also recognize this process. You start with half an idea, one imbued with a germ of goodness inside it, somewhere. You try to elaborate on the idea, inevitably making things worse temporarily. Finally, scaling things back, you return to the original, and find that beautiful thing that was hiding in it all along, just out of sight. The snazzy, modern emblem wasn’t achieved on the first go-round — it had to pass through design hell before its potential for good could really emerge.

UPDATE (9/2014): The ugly logo has been located! Read all about it here.

Notes
  1. Technically the IAEA is autonomous from the UN though under its aegis. []
  2. Paul C. Szasz, “The Law and Practices of the International Atomic Energy Agency,” [IAEA] Legal Series No. 7 (Vienna: International Atomic Energy Agency, 1970), 1001-1002. []
Redactions

In Search of a Bigger Boom

Wednesday, September 12th, 2012

The scientist Edward Teller, according to one account, kept a blackboard in his office at Los Alamos during World War II with a list of hypothetical nuclear weapons on it. The last item on his list was the largest one he could imagine. The method of “delivery” — weapon-designer jargon for how you get your bomb from here to there, the target — was listed as “Backyard.” As the scientist who related this anecdote explained, “since that particular design would probably kill everyone on Earth, there was no use carting it anywhere.”1

Edward Teller looking particularly Strangelovian. Via the Emilio Segrè Visual Archives, John Wheeler collection.

Teller was an inventive, creative person when it came to imagining new and previously unheard-of weapons. Not all of his ideas panned out, of course, but he rarely let that stop his enthusiasms for them. He was seemingly always in search of a bigger boom. During the Manhattan Project, he quickly tired of working on the “regular” atomic bomb — it just seemed too easy, a problem of engineering, not physics. From as early as 1942 he became obsessed with the idea of a Super bomb — the hydrogen bomb — a weapon of theoretically endless power.

(One side-effect of this at Los Alamos is that Teller passed much of his assigned work on the atomic bomb off to a subordinate: Klaus Fuchs.)

It took over a decade for the hydrogen bomb to come into existence. The reasons for the delay were technical as well as interpersonal. In short, though, Teller’s initial plan — a bomb where you could just ignite an arbitrarily long candle of fusion fuel — wouldn’t work, but it was hard to show that it wouldn’t work. Shortly after abandoning that idea more or less completely, Teller, spurred by Stan Ulam, came up with a new design.

The Teller-Ulam design allows you to link bombs to bombs to bombs. John Wheeler apparently dubbed this a “sausage” model, because of all of the links. Ted Taylor recounted that, from very early on, it was clear you could theoretically have “an infinite number” of sub-bombs connected to make one giant bomb.

A few selected frames from Chuck Hansen’s diagram about multi-stage hydrogen bombs, from his U.S. Nuclear Weapons: A Secret History. Drawing by Mike Wagnon.

The largest nuclear bomb ever detonated was the so-called “Tsar Bomba” of the Soviet Union. In 1961, it was exploded over the island of Novaya Zemlya, well within the Arctic Circle. It had an explosive force equivalent to 50 million tons of TNT (50 megatons). It was only detonated at half-power — the full-sized version would have been 100 megatons. It is thought to have been a three-stage bomb. By contrast, the largest US bomb ever detonated was at the Castle BRAVO test in 1954, with a 15 megaton yield. It was apparently “only” a two-stage bomb.

The dropping of the Tsar Bomba, 1961: an H-bomb the size of a school bus.

We usually talk about the Tsar Bomba as if it represented the absolute biggest boom ever contemplated, and a product of unique Soviet circumstances. We also talk about it as if its giant size was completely impractical. Both of these notions are somewhat misleading:

1. The initial estimate for the explosive force of the Super bomb being contemplated during World War II was one equivalent to 100 million tons of TNT. As James Conant wrote to Vannevar Bush in 1944:

It seems that the possibility of inciting a thermonuclear reaction involving heavy hydrogen is somewhat less now than appeared at first sight two years ago. I had an hour’s talk on this subject by the leading theoretical man at [Los Alamos]. The most hopeful procedure is to use tritium (the radioactive isotope of hydrogen made in a pile) as a sort of booster in the reaction, the fission bomb being used as the detonator and the reaction involving the atoms of liquid deuterium being the prime explosive. Such a gadget should produce an explosion equivalent to 100,000,000 tons of TNT.2

Teller was aiming for a Tsar Bomba from the very beginning. Whether they would have supported dropping such a weapon on Hiroshima, were it available, is something worth contemplating.

2. Both the US and the USSR looked into designing 100 megaton warheads that would fit onto ICBMs. The fact that the Tsar Bomba was so large doesn’t mean that such a design had to be so large. (Or that being large necessarily would keep it from being put on the tip of a giant missile.) Neither went forward with these.

A US MK 41 hydrogen bomb.

But remember that the original Tsar Bomba was actually tested at 50 megatons, which was bad enough, right? Both the US and the Soviet Union fielded warheads with maximum yields of 25 megatons. The US Mk-41, of which some 500 were produced, and the warheads of the Soviet SS-18 Mod 2 missiles were pretty big booms for everyday use. (The qualitative differences between a 50 megaton weapon and a 25 megaton weapon aren’t that large, because the effects are volumetric.)
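
That “volumetric” point can be made concrete with a standard rule of thumb (my own illustration, not a calculation from the original sources): blast-damage radius scales roughly with the cube root of yield, since the energy has to fill a volume of space.

```python
# Rule of thumb: blast-damage radius scales roughly as yield^(1/3),
# because the explosion's energy fills a volume. Illustrative only.
def radius_ratio(yield_a: float, yield_b: float) -> float:
    """How much larger the blast radius of yield_a is vs. yield_b (same units)."""
    return (yield_a / yield_b) ** (1.0 / 3.0)

# Doubling the yield from 25 MT to 50 MT buys only ~26% more radius:
print(round(radius_ratio(50, 25), 2))  # → 1.26
```

In other words, a 50 megaton bomb does not flatten anything close to twice the area of a 25 megaton bomb — part of why the very biggest designs offered diminishing returns.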

3. Far larger weapons were contemplated. By who else? Our friend Edward Teller.

In the summer of 1954, representatives from Los Alamos and the new Livermore lab met with the General Advisory Committee to the U.S. Atomic Energy Commission. Operation Castle had just been conducted and had proven two things: 1. very large (10-15 megaton or so), deliverable hydrogen bombs could be produced with dry fusion fuel; 2. Livermore still couldn’t design successful nuclear weapons.

Norris Bradbury, director of Los Alamos, gave the GAC a little rant on the US’s current “philosophy of weapon design.” The problem, Bradbury argued, was that the US had an attitude of “we don’t know what we want to do but want to be able to do anything.” This was, he felt, “no longer relevant or appropriate.” The answer would be to get very definite specifications as to exactly what kinds of weapons would be most useful for military purposes and to just mass produce a lot of them. He figured that the strategic end of the nuclear scale had been pretty much fleshed out — if you can routinely make easily deliverable warheads with a 3 megaton yield, what else do you need? All diversification, he argued, should be on the lower end of the spectrum: tactical nuclear weapons.

Edward Teller and Enrico Fermi, 1951. Courtesy of the Emilio Segrè Visual Archives.

When Teller met with the GAC, he also pushed for smaller bombs, but he thought there was still plenty of room on the high end of the scale. To be fair, Teller was probably feeling somewhat wounded: Livermore’s one H-bomb design at Castle had been a dud, and the AEC had cancelled another one of his designs that was based on the same principle. So he did what only Edward Teller could do: he tried to raise the ante, to be the bold idea man. Cancel my H-bomb? How about this: he proposed a 10,000 megaton design.

Which is to say, a 10 gigaton design. Which is to say, a bomb that would detonate with an explosive power some 670,000 times the bomb that was dropped on Hiroshima.3
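
The arithmetic behind that multiplier is easy enough to check, assuming (as the footnote below does) a 15 kiloton Hiroshima yield:

```python
# Comparing Teller's proposed 10,000 MT device to the Hiroshima bomb.
# Assumes a 15 kiloton Hiroshima yield, per the post's own footnote.
proposal_kt = 10_000 * 1_000   # 10,000 megatons, expressed in kilotons
hiroshima_kt = 15
print(round(proposal_kt / hiroshima_kt))  # → 666667
```

Hence the “some 670,000 times” figure, rounded generously.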

If he was trying to shock the GAC, it worked. From the minutes of the meeting:

Dr. Fisk said he felt the Committee could endorse [Livermore's] small weapon program. He was concerned, however, about Dr. Teller’s 10,000 MT gadget and wondered what fraction of the Laboratory’s effort was being expended on the [deleted]. Mr. Whitman had been shocked by the thought of a 10,000 MT; it would contaminate the earth.4

The “deleted” portion above is probably the names of two of the devices proposed — according to Chuck Hansen, these were GNOMON and SUNDIAL. Things that cast shadows.

The Chairman of the GAC at this time, I.I. Rabi, was no Teller fan (he is reported to have said that “it would have been a better world without Teller”), and no fan of big bombs just for the sake of them. His reaction to Teller’s 10 gigaton proposal?

Dr. Rabi’s reaction was that the talk about this device was an advertising stunt, and not to be taken too seriously.

Don’t listen to Teller, he’s just trying to rile you. Edward Teller: trolling the GAC. A 10,000 megaton weapon, by my estimation, would be powerful enough to set all of New England on fire. Or most of California. Or all of the UK and Ireland. Or all of France. Or all of Germany. Or both North and South Korea. And so on.

“Don’t Fence My Baby In.” Cartoon by Bill Mauldin, Chicago Sun-Times, 1963.

In 1949, Rabi had, along with Enrico Fermi, written up a Minority Annex to the GAC’s report recommending against the creation of the hydrogen bomb. The crux of their argument was thus:

Let it be clearly realized that this is a super weapon; it is in a totally different category from an atomic bomb. The reason for developing such super bombs would be to have the capacity to devastate a vast area with a single bomb. Its use would involve a decision to slaughter a vast number of civilians. We are alarmed as to the possible global effects of the radioactivity generated by the explosion of a few super bombs of conceivable magnitude. If super bombs will work at all, there is no inherent limit in the destructive power that may be attained with them. Therefore, a super bomb might become a weapon of genocide.

If that doesn’t apply to a 10,000 megaton bomb, what does it apply to?

Was Teller serious about the 10 gigaton design? I asked a scientist who worked with Teller back in the day and knew him well. His take: “I don’t doubt that Teller was serious about the 10,000 MT bomb. Until the next enthusiasm took over.” In this sense, perhaps Rabi was right: if we don’t encourage him, he’ll move on to something else. Like hydrogen bombs small enough to fit onto submarine-launched missiles, for example.

It’s hard not to wonder what motivates a man to make bigger and bigger and bigger bombs. Was it a genuine feeling that it would increase American or world security? Or was it just ambition? I’m inclined to see it as the latter, personally: a desire to push the envelope, to push for the bigger impact, the biggest boom — even into the territory of the dangerously absurd, the realm of self-parody.

Notes
  1. Robert Serber, The Los Alamos primer: The first lectures on how to build an atomic bomb (Berkeley: University of California Press, 1992), page 4, fn. 2. []
  2. Letter dated October 20, 1944 from James B. Conant to Vannevar Bush, Subject: Possibilities of a Super Bomb. Vannevar Bush-James B. Conant Files, Records of the Office of Scientific Research & Development, S-1, NARA, Record Group 227, folder 3. Quoted from Chuck Hansen, The swords of Armageddon: U.S. nuclear weapons development since 1945 (Sunnyvale, Calif.: Chukelea Publications, 1995), III-17. []
  3. Actually, if you take the Hiroshima yield to be 15 kilotons, it comes out to a nice round 666,666 times the strength of the Hiroshima bomb. But the precision there seemed arbitrary and the symbolism seemed distracting, so I’m relegating this to just a footnote. []
  4. Minutes of the Forty-First Meeting of the General Advisory Committee to the U.S. Atomic Energy Commission, July 12-15, 1954, on p. 55. []