

Would nukes have helped in Vietnam?

Friday, July 25th, 2014

That night I listened while a colonel explained the war in terms of protein. We were a nation of high-protein, meat-eating hunters, while the other guy just ate rice and a few grungy fish heads. We were going to club him to death with our meat; what could you say except, “Colonel, you’re insane”? … Doomsday celebs, technomaniac projectionists; chemicals, gases, lasers, sonic-electric ballbreakers that were still on the boards; and for back-up, deep in all their hearts, there were always the Nukes, they loved to remind you that we had some, “right here in-country.” Once I met a colonel who had a plan to shorten the war by dropping piranha into the paddies of the North. He was talking fish but his dreamy eyes were full of mega-death.1

So wrote Michael Herr in his masterful and classic book of Vietnam War journalism, Dispatches. I recently re-read Herr’s book, and this passage stuck out to me today more than it did when I first read it a decade ago. “There were always the Nukes…” is an attitude one sometimes sees expressed in other contexts as well: the idea that if it came to it, the USA could, of course, “glassify” any enemy it chose. The bomb, in this view, is the ultimate guarantor of security and strength. But of course Vietnam, among other conflicts, showed very clearly that being a nuclear state didn’t guarantee victory.2


Napalm in Vietnam. Source.

Would nukes have helped with the Vietnam War? It is a somewhat ghastly idea, to add more slaughter to an already terrible, bloody war, but worth contemplating if only to consider in very tangible terms what nuclear weapons can and can’t do. It was a question that was studied seriously at the time, too. In early 1967, a JASON committee consisting of Freeman Dyson, Robert Gomer, Steven Weinberg, and S. Courtney Wright wrote a 60-page report on “Tactical Nuclear Weapons in Southeast Asia,” considering what could and couldn’t be done with the bomb. The whole thing has been obtained (with redactions) under the Freedom of Information Act by the Nautilus Institute, which has put together a very nice website on the subject under the title “Essentially Annihilated.”3

The motivation for the report, according to Ann Finkbeiner, came from a few of the JASON consultants hearing off-hand comments from military men about the appeal of using a nuke or two:

“We were scared about the possible use in Vietnam,” said Robert Gomer, a chemist from the University of Chicago who was probably Jason’s first nonphysicist. During the 1966 spring meeting Freeman Dyson was “at some Jason party,” he said, and a former chairman of the Joint Chiefs of Staff who was also close to President Johnson “just remarked in an offhand way, ‘Well, it might be a good idea to throw in a nuke once in a while just to keep the other side guessing.’”4

Gomer took the initiative on the report, but it is Dyson’s name that is most closely associated with it, in part because he is (alphabetically) listed as the first author, and in part because Dyson is much more famous. Finkbeiner, who interviewed the authors of the report, says that it was not a report specifically requested by the military or government, and that it hewed closely to analytical/tactical questions as opposed to ethical ones.

Which is to say, as you probably have figured out, they set out to show from the start that tactical nuclear weapons would not be a good thing to introduce into the Vietnam War. So they weren’t exactly neutral on the question, but neutrality and objectivity are not the same thing.

1967 - Tactical Nuclear Weapons in Southeast Asia

The report is a fascinating read. It serves as a wonderful lens into how strategic thinking about tactical weapons worked at the time, because the authors, perhaps in an attempt to make sure it was taken seriously, couch all of their reasoning in the language of other, official studies on the issue. So it offers insights into the kinds of issues that were popping up in war-gaming scenarios, and assumptions that were apparently taken as valid about what a tactical nuclear weapon could and couldn’t do. And by deliberately avoiding any discussions of politics and morality (and with that, strategic nuclear weapons use), it does allow them to get into the nitty gritty of the tactical questions without getting overwhelmed by larger and often more nebulous debates about the propriety of nuclear arms.

The basic conclusions are pretty simple. The main one is that even if the US did use tactical nuclear weapons, and such use was entirely unilateral, it wouldn’t get very useful results. Tactical nuclear weapons were thought to be most useful against large massed troops or columns of armor, such as an invading Red Army moving into Western Europe. The problem is, that didn’t describe the situation in Vietnam very well at all, where the Viet Cong and North Vietnamese Army typically operated in smaller groups under forest cover. You could use nukes to destroy their bases, but you’d have to locate their bases first — and by the time you’ve done that, you could have just bombed them conventionally. In general, in a war like Vietnam, tactical nuclear weapons appeared to offer little advantage over conventional arms in most situations. The one special addition of the nukes — the fallout — was too difficult to predict and control, and fallout that would be a useful barrier to troops would necessarily become a problem for civilians as well.

There are some interesting numbers in the report. One is a citation of a conclusion from a RAND study that in a complex war environment, a tactical nuclear weapon is “on the average, equivalent to about 12 nonnuclear attack sorties.” The JASON authors conclude that if you wanted to do something like the Rolling Thunder campaign using nuclear weapons, under this rubric it would require 3,000 tactical nuclear weapons per year. They also note another war-gaming conclusion, that even in the presumed “Soviet” tactical nuclear weapons environment — large, massed troop and armor concentrations — “the average number of enemy casualties per strike was about 100.” This probably assumes that some strikes are outright misses while others are very effective, but that’s an impressively low number. The JASON authors note that this would be considerably lower in a Vietnam-style environment, because the ability to locate targets of interest would probably be much lower.
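The 3,000-weapons-per-year figure follows from simple arithmetic on the RAND equivalence. A minimal back-of-envelope sketch, where the annual sortie count (on the order of 36,000 attack sorties for a Rolling Thunder-scale campaign) is my inference from the two quoted numbers, not a figure taken directly from the report:

```python
# Back-of-envelope check of the JASON report's figure.
# Assumption (inferred, not from the report directly): a Rolling
# Thunder-scale campaign flies ~36,000 attack sorties per year.
annual_sorties = 36_000
sorties_per_nuke = 12  # RAND equivalence quoted in the report

nukes_per_year = annual_sorties // sorties_per_nuke
print(nukes_per_year)  # 3000
```

The point of running the numbers is how badly they scale: at roughly 100 casualties per strike, even thousands of tactical weapons buy little that conventional sorties could not.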

There are, they acknowledge, a few cases where specific uses of tactical nuclear weapons might be advantageous. Bridges, headquarters, and underground tunnel complexes could be more easily taken out with tactical nukes than conventional weapons. Such conclusions are somewhat underwhelming, and maybe that is the point: when you do figure out what good the weapons might do, it seems much less impressive than the fantasies.


Map of the Tet Offensive, 1968; the JASON authors would perhaps have us consider what this would have looked like if the North Vietnamese had been supplied tactical weapons from the Soviets or Chinese. Source.

The strongest argument they make against using the weapons, though, is not so much that they would be ineffective against the Vietnamese. Rather, it is that the weapons would be really effective against American troops in Vietnam:

If about 100 weapons of 10-KT yield each could be delivered from the base perimeters onto all 70 target areas in a coordinate strike, the U.S. fighting capability in Vietnam would be essentially annihilated. In the more likely contingency that only a few weapons could be delivered intermittently, U.S. casualties would still be extremely high and the degradation of U.S. capabilities would be considerable.

This is often the argument made today whenever the idea of using nuclear weapons — tactical or otherwise — rears its head. Since World War II, the US has had the strongest interest in not breaking the “nuclear taboo,” because once nukes start becoming normalized, the US usually stands to lose the most, or at least a lot. Massed troops, heavy armor, and fixed bases? That’s how we prefer to fight wars. Massive cities conveniently located on coasts? Check. Economy highly reliant on communications, transportation, and other infrastructure? Yeah. Which is probably one of the deep reasons that the US, for all of its unwillingness to commit to a no-first-use policy, has always managed to find a way, so far, to avoid using the tens of thousands of nuclear weapons it has produced in the years since Hiroshima and Nagasaki.

The report convincingly concludes:

The use of TNW [tactical nuclear weapons] in Southeast Asia would be highly damaging to the U.S. whether or not the use remains unilateral. The overall result of our study is to confirm the generally held opinion that the use of TNW in Southeast Asia would offer the U.S. no decisive military advantage if the use remained unilateral, and it would have strongly adverse military effects if the enemy were able to use TNW in reply. The military advantages of unilateral use are not overwhelming enough to ensure termination of the war, and they are therefore heavily outweighed by the disadvantages of eventual bilateral use.

When I teach students, I try to emphasize that there are some deep paradoxes at the core of nuclear weapons policies. Deterrence is a tricky-enough strategic issue, a mixture of military logic and raw fear. Tactical nuclear weapons add complicated wrinkles. Were they merely a means of making deterrence more credible, by showing the Soviets (and whomever else) that we were not willing to let the threat of nuclear annihilation become paralyzing? Or were they really intended to be military weapons that could be usefully employed, regarded as a sort of scaling up of conventional capabilities? In terms of their doctrine and literature, it isn’t clear: they are spoken of as both, in part because a stated willingness to use them is core to their deterrent value. (That is, if you are going to be convincing in your statements that you are willing to use them, you have to look like you are willing to use them, even if you don’t want to use them.)


How much of tactical nuclear weapons was just swagger? Above, the Davy Crockett weapons system, in full-swagger mode.

Thinking through, in a concrete way, what would happen if nuclear weapons were used, and what the long-term consequences would be (politically, tactically, environmentally, economically, etc.), is an important exercise, even if it is sometimes labeled as morbid. Too often, I think, we close our minds to the very possibility. But “thinking the unthinkable” is valuable — not because it will make us more willing to use them, but because it highlights the limitations of their use, and helps us come to grips with what the actual consequences would be.

So would nukes have been useful in the Vietnam War? I think the JASON authors do a good job of showing that the answer is, “almost certainly not very useful, and possibly completely disastrous.” And knowing, as we do now and they did not in 1967, how much of a long-term blot Vietnam would be on US domestic and foreign policy in the years that followed, consider how much of a danger it would have posed if we had started letting little nukes fly on top of everything else.

  1. Michael Herr, Dispatches (Vintage, 1991 [1977]), 60-61.
  2. Were they actually “right here in-country”? Apparently not, except on aircraft carriers nearby. Of course, moving them into the war theatre would likely not have been very difficult. Still, it is an interesting wrinkle to Herr’s account — the colonels’ bragging to the journalists, assuming it occurred, was in part just bravado.
  3. F. Dyson, R. Gomer, S. Weinberg, S.C. Wright, “Tactical Nuclear Weapons in Southeast Asia,” JASON Study S-266 (March 1967), originally posted online by the Nautilus Institute.
  4. Ann Finkbeiner, The Jasons: The Secret History of Science’s Postwar Elite (New York: Viking, 2006), 93.

Oppenheimer and the Gita

Friday, May 23rd, 2014

What was going through J. Robert Oppenheimer’s head when he saw the great fireball of the Trinity test looming above him? According to his brother, Frank, he only said, “it worked.” But most people know a more poetic account, one in which Oppenheimer says (or at least thinks) the following famous lines:

I remembered the line from the Hindu scripture, the Bhagavad-Gita; Vishnu is trying to persuade the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, “Now I am become Death, the destroyer of worlds.” I suppose we all thought that, one way or another.

This particular version, with a haggard Oppenheimer, was originally filmed for NBC’s 1965 The Decision to Drop the Bomb. I first saw it in Jon Else’s The Day After Trinity (1980), and thanks to YouTube it is now available pretty much anywhere at any time. There are other versions of the quote around — “shatterer of worlds” is a common variant — though it did not begin to circulate as part of Los Alamos lore until the late 1940s and especially the 1950s.

It’s a chilling delivery and an evocative quote. The problem is that most of the time it is invoked purely for its evocativeness, without any understanding of what it is actually supposed to mean. That’s what I want to talk about: what was Oppenheimer trying to say, presuming he was not just trying to be gnomic? What was he actually alluding to in the Gita?


An Indian greeting card for Diwali from 1998, celebrating India’s nuclear tests. Source.

I should say first that I’m no scholar of Hindu theology. Fortunately, many years back, James A. Hijiya of the University of Massachusetts Dartmouth wrote a wonderful article on “The Gita of J. Robert Oppenheimer” that covers all of this topic as well as one might ever want it to be covered.1 Everything I know about the Gita comes from Hijiya’s article — so read it if you want much more discussion of this than I have here. I am particularly fond of Hijiya’s opening line, that Oppenheimer’s paraphrase of the Gita is “one of the most-cited and least-interpreted quotations” of the atomic age.

Oppenheimer was not a Hindu. He was not much of anything, religiously — he was born into a fairly secular Jewish family, embraced the Ethical Culture of Felix Adler, and saw philosophy as more of a boon to his soul than any particular creed. He enjoyed the ideas of the Gita, but he was not religious about it. Hijiya thinks, however, that much can be understood about Oppenheimer’s life through the lens of the Gita as a philosophical and moral code, something necessary in part because Oppenheimer rarely discussed his own internal motivations and feelings about making the bomb. It helps explain, Hijiya argues, how a man who could utter so many public statements about the “sin” and “terror” and “inhumanity” of Hiroshima and Nagasaki could also have been the one who pushed for their use against Japan and who never, ever said that he actually regretted having built the bomb or recommending its use. It helps resolve one of the crucial contradictions, in other words, at the heart of the story of J. Robert Oppenheimer.


J. Robert Oppenheimer, from the Emilio Segrè Visual Archives.

It’s not clear when Oppenheimer was first exposed to the Gita. I have seen accounts, in oral histories, suggesting that he was spouting Gita lines even while he was a young graduate student studying in Europe. What is definitely known is that he didn’t start studying Sanskrit seriously until 1933, when, as a professor at Berkeley, he began studying with the renowned Sanskrit scholar Arthur W. Ryder. In letters, he wrote gushingly about the book to his brother, and much later he quoted from it at the service held at Los Alamos in April 1945 upon the death of President Roosevelt.

The story of the Gita is that of Arjuna, a human prince who has been summoned to a war between princely cousins. Arjuna doesn’t want to fight — not because he lacks courage, or skill, but because it is a war of succession, so his enemies are his own cousins, his friends, his teachers. Arjuna does not want to kill them. He confides in his charioteer, who turns out to be the god Krishna2 in a human form. The text of the Gita is mostly Krishna telling Arjuna why Arjuna must go to war, even if Arjuna does not want to do it.

Krishna’s argument hinges on three points: 1. Arjuna is a soldier, and so it is his job — his duty — to wage war; 2. It is Krishna’s job, not Arjuna’s, to determine Arjuna’s fate; 3. Arjuna must ultimately have faith in Krishna if he is going to preserve his soul.

Arjuna eventually starts to become convinced. He asks Krishna if he will show him his godlike, multi-armed form. Krishna obliges, showing Arjuna an incredible sight:


Krishna revealing himself to Arjuna. Source.

A thousand simultaneous suns
     Arising in the sky
Might equal that great radiance,
     With that great glory vie.

Arjuna is awestruck and spellbound:

Amazement entered him; his hair
     Rose up; he bowed his head;
He humbly lifted folded hands,
     And worshipped God. . . .

And then, in his most amazing and terrible form, Krishna tells Arjuna what he, Krishna, is there to do:

Death am I, and my present task
     Destruction.

Arjuna, suitably impressed and humbled, then agrees to join in the battle.

The above quotes are from Ryder’s translation of the Gita. You can see that Oppenheimer’s is not especially different from it, even if it is somewhat changed. Personally I find Ryder’s version of the last part more impressive — it is more poetic, more stark. Ryder’s translation, Hijiya explains, is a somewhat idiosyncratic but defensible one. What Ryder (and Oppenheimer) translate as “Death,” others have translated as “Time,” but Hijiya says that Ryder is not alone in calling attention to the fact that in this context the expanse of time was meant to be a deadly one.

If you would like to see the famous “death” verse in the original, it is chapter 11, verse 32 of the Gita, and looks like this:

Gita verse 11:32

This website (from which I got the above) translates it as:

Lord Krsna said: I am terrible time the destroyer of all beings in all worlds, engaged to destroy all beings in this world; of those heroic soldiers presently situated in the opposing army, even without you none will be spared.

While I find Ryder’s more poignant, the longer translation makes it extremely clear what Krishna has in mind. All will perish, eventually. In war, many will perish whether you participate or not. For Oppenheimer and the bomb, this may have seemed especially true. The cities of Hiroshima and Nagasaki (and others on the target list) were chosen not because they were necessarily the most important, but because they had so far been spared from firebombing. They were being actively preserved as atomic bomb targets. Had the bomb not been used or made, they probably would have been firebombed anyway. Even if the physicists had refused to make nuclear weapons, the death toll of World War II would hardly have been altered.


“A thousand simultaneous suns”: a long-exposure shot of the Trinity test.

So let’s step back and ask who Oppenheimer is meant to be in this situation. Oppenheimer is not Krishna/Vishnu, not the terrible god, not the “destroyer of worlds” — he is Arjuna, the human prince! He is the one who didn’t really want to kill his brothers, his fellow people. But he has been enjoined to battle by something bigger than himself — physics, fission, the atomic bomb, World War II, what have you — and only at the moment when it truly reveals its nature, the Trinity test, does he fully see why he, a man who hates war, is compelled to battle. It is the bomb that is here for destruction. Oppenheimer is merely the man who is witnessing it. 

Hijiya argues that Oppenheimer’s sense of Gita-inspired “duty” pervades his life and his government service. I’m not sure I am 100% convinced of that. It seems like a heavyweight philosophical solution to the relatively lightweight problem of a life of inconsistency. But it’s an interesting idea. It is perhaps a useful way to think about why Oppenheimer got involved with so many projects that he, at times, seemed ambivalent about. Though ambivalence seemed readily available in those days — nobody seems to be searching for deep scriptural/philosophical justifications for Kenneth Bainbridge’s less eloquent, but equally ambivalent post-Trinity quote: “Now we’re all sons-of-bitches.”


A rare color photograph of Oppenheimer from October 1945, with General Groves and University of California President Robert Sproul, at the Army-Navy “E” Award ceremony. Source.

One last issue that I find nagging me. We have no recording of Oppenheimer saying this except the 1965 one above. By this time, Oppenheimer is old, stripped of his security clearance, and dying of throat cancer. It is easy to see the clip as especially chilling in this light, given that it is being spoken by a fading man. How would it sound, though, if it were coming from a younger, more chipper Oppenheimer, the one we see in photographs from the immediate postwar period? Would it be able to preserve its gravity?

Either way, I think the actual context of the quote within the Gita is far deeper, far more interesting, than the popular understanding of it. It isn’t a case of the “father” of the bomb declaring himself “death, the destroyer of worlds” in a fit of grandiosity or hubris. Rather, it is him being awed by what is displayed before him, confronted with the spectacle of death itself unveiled, the world’s most impressive memento mori, and realizing how little and inconsequential he is as a result. Compelled by something cosmic and terrifying, Oppenheimer then reconciles himself to his duty as a prince of physics, and that duty is war.

  1. James A. Hijiya, “The Gita of J. Robert Oppenheimer,” Proceedings of the American Philosophical Society 144, no. 2 (June 2000), 123-167.
  2. Oppenheimer, in his 1965 interview, identifies the god as Vishnu, perhaps in error. Krishna is an avatar of Vishnu, however, so maybe it is technically correct along some line of thinking.

Accidents and the bomb

Friday, April 18th, 2014

When I first heard that Eric Schlosser, the investigative journalist, was writing a book on nuclear weapons accidents, I have to admit that I was pretty suspicious. I really enjoyed Fast Food Nation when it came out a decade ago. It was one of those books that never quite leaves you. The fact that the smell of McDonald’s French fries was deliberately engineered by food chemists to be maximally appealing, something I learned from Schlosser’s book, comes to mind whenever I smell any French fries. But nuclear weapons are not French fries. When writing about them, it is extremely easy to fall into either an exaggerated alarmism or a naïve acceptance of reassuring official accounts. In my own work, I’m always trying to sort out the truth of the matter, which is usually somewhere in between these two extremes.

Schlosser - Command and Control book

This is especially the case when talking about nuclear weapons accidents — the many times during the Cold War when nuclear weapons were subjected to potentially dangerous circumstances, such as being set on fire, being accidentally dropped from a bomber, going down with a crashing bomber, having the missile they were attached to explode, and so on. The alarmist accounts generally inflate the danger of the accidents achieving a nuclear yield; the official accounts usually dismiss such danger entirely. There are also often contradictory official accounts — sometimes even the people with clearances can’t agree on whether the weapons in question were “armed” (that is, had intact fissile pits in them), whether the chance of detonation was low or high, and so on. I’ve always been pretty wary about the topic myself for this reason. Sorting out the truth seemed like it would require a lot of work that I wasn’t interested in doing.

Well, I’m happy to report that in his new book, Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety, Schlosser has done that work. I reviewed the book recently for Physics Today. You can read my PT review here, but the long and short of it is that I was really, really impressed with the book. And I’m not easily impressed by most works of nuclear weapons history, popular or academic. I’m not surprised it was a finalist for the Pulitzer Prize, either.


Titan II silo complex. There’s a lot going on in one of these. This, and all of the other Titan II images in this post, are from Chuck Penson’s wonderful, beautiful Titan II Handbook.

What I ask of a new book is that it teach me something new — either novel facts or novel spins on things I already knew about. Schlosser’s book does both. He clearly did his homework, and it’s not really surprising it took him about a dissertation’s worth of time to write it. It’s not just a document dump of FOIA’d material, though. He really shines when contextualizing his new information, writing a very rich, synthetic history of nuclear weapons in the Cold War. So the new and the old are woven together in a really spectacular, unusually compelling fashion.

The book has two main threads. One is a very specific, moment-by-moment account of one accident. This is the so-called Damascus Accident, in which a Titan II missile in Damascus, Arkansas, exploded in its silo in 1980, resulting in one fatality. It’s not one of the “standard” accidents one hears about, like the 1961 Goldsboro bomb, the 1958 Tybee bomb, the 1968 Thule crash, or the 1966 Palomares accident. But Schlosser’s journalistic chops really come through here, as he tracked down a huge number of the people involved in the accident and used their memories, along with documentary records, to reconstruct exactly how one dropped socket — itself just an apparently innocuous, everyday sort of mistake — could lead to such explosive outcomes.

The other thread is a more historical one, looking at the history of nuclear weapons and in particular how the problem of command and control runs through it from the beginning. “Command and control” is one of those areas whose vastness I didn’t really appreciate until reading this book. Nominally it is just about making sure that you can use the weapons when you want to, but it also includes making sure that nobody is going to use the weapons when you don’t want them to, and that the weapons themselves aren’t going to do anything terrible accidentally. And this makes it mind-bogglingly complex. It gets into details about communication systems, weapons designs, delivery system designs, nuclear strategy, screening procedures, security procedures, accident avoidance, and so much more.


How do you service a Titan II? Very carefully. This is a RFHCO suit, required for being around the toxic fuel and oxidizer. Not the most comfortable of outfits. From Penson’s Titan II Handbook.

Schlosser weaves this all together wonderfully. I found very few statements, technical or otherwise, that struck me as genuine outright errors.1 Of course, there are places where there can be differences of interpretation, but there always are. This is pretty good for any book of this length and scope — there are many academic books that I’ve read that had more technical errors than this one.

What I found really wonderful, though, is that Schlosser also managed to give a compelling explanation for the contradictory official accident accounts that I mentioned before. It’s so simple that I don’t know why it never occurred to me before: the people concerned with nuclear weapons safety were not the same people who were in charge of the weapons. That is, the engineers at Sandia who were charged with nuclear safety and surety were institutionally quite remote from the Air Force people who handled the weapons. The Air Force brass believed the weapons were safe and that to suggest otherwise was just civilian hogwash. The engineers who got into the guts of the weapons knew that it was a more complicated story. And they didn’t communicate well — sometimes by design. After a while the Air Force stopped telling the Sandia engineers about all of the accidents, and so misinformation became rampant even within the classified system.


The fate of the world in a few punched holes. Penson: “Targeting information was stored on Mylar-backed punched paper tape. Though primitive by today’s standards, punched paper tape will retain data decades longer than magnetic tapes or CDs. This tape is somewhat worse for wear from 20 years of museum use, but probably would still work.”

We usually talk about nuclear weapons safety as a question of whether they are “one-point safe.” That is, would the weapon have a chance of producing a nuclear yield if one point on the chemical explosives surrounding the fission pit detonated inadvertently? Most of the time the answer is no, of course not. Implosion requires a very high degree of detonation symmetry — that’s why it’s hard to make work. So a one-point detonation of the explosive lenses will produce a fizzle, spreading plutonium or uranium like a “dirty bomb” but not producing a supercritical chain reaction.

But some of the time, the answer is, “well, maybe.” We usually think of implosions as complex affairs, but some weapons require only two-point implosion to begin with. So now you’re no longer talking about the possibility that one out of 36 explosive lenses will go off; you’re talking about one out of two. This isn’t to say that such weapons aren’t one-point safe, just to point out that weapons design isn’t limited to the sorts of things present in the first implosion weapons.

But even this doesn’t really get at the real problem. “One-point safe” is indeed an important part of the safety question, but not the only one. Consider, for example, what would happen if the firing signal were just a simple DC electrical current. Now imagine that during a fire, the solder on the firing circuit board melts and a short circuit forms between the batteries and the firing switch. Now the bomb is actually trying to set itself off, as if it had been deliberately dropped — and full implosion, with nuclear yield, is entirely possible.


The injector plate of a Titan II. I thought the somewhat abstract pattern of holes and corrosion on the recovered plate made for a beautiful image. The diagram at left shows you what you are looking at — this is where fuel and oxidizer would come together, propelling the missile.

How likely is this kind of electrically-activated nuke scenario? What the Sandia engineers discovered was that in some weapons it was really not implausible at all. Under the “abnormal environment” of a weapons accident (such as a crashing or burning B-52), all sorts of crazy things could happen with electronic circuits. And unless they were really carefully designed for the possibility of this kind of accident, they could arm themselves and fire themselves. Which is the kind of thing you’d expect an engineer who is deeply connected with the electrical technology of the bomb to conclude.

And of course, as Schlosser (and his engineer sources) point out — this kind of thing is only one small detail in the broad, broad question of nuclear safety. These systems are big, complex, and non-linear. And so much hinges on them working correctly.

The sociologist of science Donald MacKenzie has proposed (in a slightly different context — nuclear weapons accuracy, not safety) that a “certainty trough” exists with regards to complex questions of technological uncertainty. He draws it somewhat like this:2

MacKenzie's Certainty Trough

So this divides people into three groups. On the left are the people who actually build the technology and produce the knowledge about it. These people have reasonably high levels of uncertainty about the technology in question — they know the nuts and bolts of how it works, how it could go wrong, and which kinds of failure situations are unlikely. (I’ve added “confidence” as a label because I find it more straightforward than “uncertainty” at times.) In the middle, you have people who are completely committed to the technology in question. These people aren’t completely divorced from solid knowledge about it, but they are just consumers of knowledge. They look at the final data, but they don’t really know how the data was made (and all of the uncertainty that gets screened out to make the final version of the data). They have very low uncertainty, and so very high confidence in the technology. At far right you have the people who are either total outsiders, or people who are totally committed to another approach. These have the highest levels of uncertainty and the lowest levels of confidence.

So if we were mapping Schlosser’s actors onto these categories, we’d have the Sandia engineers and other weapons scientists on the far left. They know what can go wrong, they know the limits of their knowledge. They also know which accident situations are outlandish. In the middle we have the military brass and even the military handlers of the weapons. They are committed to the weapons. They have data saying the weapons are safe — but they don’t know how the data was made, or how it was filtered. They think the weapons are totally safe and that anyone who suggests otherwise is just ignorant or foolish. And lastly, at far right, we have total outsiders (the activists, perhaps, or sometimes even politicians), or people who really are looking to amplify the uncertainty for their own purposes.

Titan II Launch Control Center, with the facilities console at center. From Penson.

The disconnect between the far left group and the middle group is the one that disturbs me the most in Schlosser’s account. It also reflects what I’ve seen in online discussions of weapons accidents. People with a little bit of knowledge — e.g. they know about one-point safety, or they once handled nukes in the military — have very high confidence in the safety issues. But they don’t know enough to realize that under the hood, things are more complicated and have been, in the past at least, much more dangerous. Not, perhaps, as dangerous as some of the more alarmist, outsider, activist accounts have stressed. But dangerous enough to seriously concern people whose jobs it is to design the weapons — people who know about the nuts and bolts of them.

Anyway. Schlosser’s book is a great read, as well. Which it needs to be, because it is long. But it’s also segregable. Don’t care much about the details of the Damascus accident? You can skip those sections and still get a lot out of the book (even though the Damascus accident is really a perfect account of all of the little things that can go wrong with complex, non-linear systems). But part of that length is a copious amount of endnotes, which I applaud him and his publisher for including. For a book like this, you can’t skimp on the documentation, and Schlosser doesn’t. The only thing he did skimp on was illustration, which I — as a pretty visual guy — thought was too bad. So much of the Damascus story takes place inside of a Titan II silo, and while the inner flap of the cover did have a simplified illustration of one, I still felt like I didn’t really know what was happening where at times. (I wonder if this was a trade-off with the publisher in having so many notes and pages.)

Chuck Penson’s Titan II Handbook, and one of its several amazing fold-out diagrams. Adorable pupper (Lyndon) included for scale.

Fortunately, there is a solution for this. If it were up to me, every copy of Schlosser’s book would be accompanied by a copy of Chuck Penson’s Titan II Handbook: A civilian’s guide to the most powerful ICBM America ever built. Penson’s book is a richly illustrated history of this particular missile, and contains lots of detailed photographs and accounts of daily life on a Titan II base (such as those seen above). It’s utterly fascinating and it gives so much visual life to what Schlosser describes. It also includes giant fold-out diagrams of the missiles themselves — the printing quality is really impressive all around. It is full of fascinating technical details as well. For example, in the early days of the Titan II silos they had large motor-generators that constantly ran in case they needed to convert DC power into AC in the event of a failure of commercial power. Penson then notes that:

The motor-generator ran with a loud, monotonous high-pitched whine… noise in the [Launch Control Center] turned into a serious issue. Crew members complained of temporary hearing loss due not only to the incessant buzz of the motor-generator, but also to the constant drone of the air conditioners, fans and blowers in equipment. Eventually the Air Force covered the tile floor with carpeting, and acoustic batting was hung in the area of the stairway leading up to level 1 and down to level 3. … These changes made a tremendous improvement, but one that came too late for many of the crew, a significant number of whom now need hearing aids.

This kind of detail fits in perfectly with Schlosser’s approach to the facility, which itself seems strongly influenced by the sociologist Charles Perrow’s notion of “Normal Accidents.” That the devices in the facility would affect the hearing of the crew was certainly not something that anybody thought of ahead of time; it’s one of those little details that gets lost in the overall planning, but (at least for those who suffered the hearing loss) had real consequences. Ultimately this is the thesis of Schlosser’s book: that the infrastructure of nuclear command and control is much larger, much more complex, much more problematic than most people realize, and is one of those high-complexity, high-risk systems that human beings are notoriously pretty bad at managing.

If you’re the kind of person who geeks out on nuke history, both Schlosser’s and Penson’s books are must-reads, must-buys.

  1. The two biggest mistakes I noted, which I’ve told Schlosser about and may be fixed in the paperback, are that he misstates the size of the neutron initiator in the Fat Man bomb — he confuses the diameter for the radius — and he got the story of Szilard’s 1933 chain reaction work wrong, which lots of people do. Szilard’s patent is such a common source of misunderstanding even amongst scholars that I will be writing a blog post about it soon. Neither of these are terribly important to his argument or narrative. []
  2. Adapted from Donald MacKenzie, Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance (Cambridge, Mass.: MIT Press, 1990), figure 7.2. []

Kilotons per kilogram

Monday, December 23rd, 2013

Nuclear weapons can be made to have pretty much as much of a bang as one wants to make them, but with increased explosive yield comes an increased weapon weight. We always talk vaguely about being able to make H-bombs to arbitrarily high yields, but recently I’ve been mulling over this fact somewhat quantitatively. I gave a talk last month at the History of Science Society Meeting on US interest in 50-100 MT bombs around the time of the Limited Test Ban Treaty, and while working on this paper I got slightly obsessed with what is known as the yield-to-weight ratio.

Little Boy — a big bang compared to a conventional bomb, but still a very crude nuclear bomb.

What makes nuclear weapons impressive and terrible is that their default yield-to-weight ratio — that is, the amount of bang per mass, usually expressed in terms of kilotons per kilogram (kt/kg) — is much, much higher than that of conventional explosives. Take TNT for example. A ton of TNT weighs, well, a ton. By definition. So that’s 0.001 kilotons per 1,000 kilograms, or 0.000001 kt/kg. By comparison, even a crude weapon like the Little Boy bomb that was dropped on Hiroshima was about 15 kilotons in a 4,400 kg package: 0.003 kt/kg. That means that the Little Boy bomb had an energy density three orders of magnitude higher than a regular TNT bomb would. Now, TNT isn’t the be-all and end-all of conventional explosives, but no conventional explosive gets that much boom for its buck compared to a nuke.
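The ratio arithmetic here is simple enough to script as a sanity check. A minimal sketch in Python, using only the round figures quoted above:

```python
# Yield-to-weight ratio in kilotons per kilogram (kt/kg),
# using the round figures quoted in the text.

def kt_per_kg(yield_kt, weight_kg):
    """Return the yield-to-weight ratio in kt/kg."""
    return yield_kt / weight_kg

# One ton of TNT: 0.001 kt of yield in 1,000 kg of weight.
tnt = kt_per_kg(0.001, 1000.0)
# Little Boy: ~15 kt in a ~4,400 kg package.
little_boy = kt_per_kg(15.0, 4400.0)

print(f"TNT:         {tnt:.6f} kt/kg")   # 0.000001
print(f"Little Boy:  {little_boy:.6f} kt/kg")
print(f"Improvement: {little_boy / tnt:,.0f}x")
```

The improvement works out to a factor of a few thousand — the "three orders of magnitude" in the text.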

The Little Boy yield is much lower than the hypothetical energy density of uranium-235. Uranium-235 releases about 17 kilotons of energy for every kilogram that completely fissions (17 kt/kg). That means that less than a kilogram of uranium-235 fissioned in the Little Boy bomb to release its 15 kilotons of energy. Knowing that there was 64 kg of uranium in the bomb, that means that something like 1.3% of the uranium in the weapon actually underwent fission. So right off the bat, one could intuit that this is something that could probably be improved upon.

Fat Man — a lot better use of fissile material than Little Boy, but no more efficient in terms of yield-to-weight.

The Fat Man bomb made much better use of fissile material than Little Boy. Its yield wasn’t that much better (around 20 kilotons), but it managed to squeeze that (literally) out of only 6.2 kilograms of plutonium-239. Pu-239 releases around 19 kilotons per kilogram that completely fissions, so that means that around 17% of the Fat Man core (a little over 1 kg of plutonium) underwent fission. But the bomb itself still weighed 4,700 kg, making its yield-to-weight ratio a mere 0.004 kt/kg. Why, despite the improved efficiency and more advanced design of Fat Man, was the yield-to-weight ratio almost identical to Little Boy’s? Because in order to get that 1 kg of fissioning, it required a very heavy apparatus. The explosive lenses weighed something like 2,400 kilograms just by themselves. The depleted uranium tamper that held the core together and reflected neutrons added another 120 kilograms. The aluminum sphere that held the whole apparatus together weighed 520 kilograms. The ballistic case (a necessary thing for any actual weapon!) weighed another 1,400 kg or so. All of these things were necessary to make the bomb either work, or be a droppable bomb.
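The same back-of-the-envelope efficiency estimates can be run in a few lines. This is only a sketch using the approximate complete-fission energy densities given in the text (~17 kt/kg for U-235, ~19 kt/kg for Pu-239); the exact percentages shift with the yield figures one assumes:

```python
# Estimate how much of each core actually fissioned, given the stated
# yield, core mass, and complete-fission energy density.

def fission_estimate(yield_kt, core_kg, energy_density_kt_per_kg):
    """Return (kg fissioned, fraction of core fissioned)."""
    fissioned = yield_kt / energy_density_kt_per_kg
    return fissioned, fissioned / core_kg

lb_kg, lb_frac = fission_estimate(15.0, 64.0, 17.0)  # Little Boy (U-235)
fm_kg, fm_frac = fission_estimate(20.0, 6.2, 19.0)   # Fat Man (Pu-239)

print(f"Little Boy: {lb_kg:.2f} kg fissioned, {lb_frac:.1%} of core")
print(f"Fat Man:    {fm_kg:.2f} kg fissioned, {fm_frac:.1%} of core")
```

The result: under a kilogram fissioned in each bomb, but that kilogram was a few percent of the Little Boy core versus nearly a fifth of the much smaller Fat Man core.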

So it’s unsurprising to learn that improving yield-to-weight ratios was a high order of business in the postwar nuclear program. Thermonuclear fusion ups the ante quite a bit. Lithium-deuteride (LiD), the most common and usable fusion fuel, yields 50 kilotons for every kilogram that undergoes fusion — so fusion is nearly 3 times more energetic per unit weight than fission. So the more fusion you add to a weapon, the better the yield-to-weight ratio, except for the fact that all fusion weapons require a fission primary and usually also have very heavy tampers.

I took all of the reported American nuclear weapon weights and yields from Carey Sublette’s always-useful website, put them into the statistical analysis program R, and created this semi-crazy-looking graph of American yield-to-weight ratios:

Yield-to-weight ratios of US nuclear weapons

The horizontal (x) axis is the yield in kilotons (on a logarithmic scale), the vertical (y) axis is the weight in kilograms (also on a log scale). In choosing which of the weights and yields to use, I’ve always picked the lowest listed weights and the highest listed yields — because I’m interested in the optimal state of the art. The individual scatter points represent models of weapons. The size of each point represents how many of them were produced; the color of them represents when they were first deployed. Those with crosses over them are still in the stockpile. The diagonal lines indicate specific yield-to-weight ratio regions.
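Those diagonal lines fall out of the log-log axes: for a fixed ratio r, weight = yield / r, so log(weight) = log(yield) − log(r) — a straight line of slope one, offset by the ratio. A quick numerical check (not the original R analysis, just an illustration):

```python
import math

# On log-log axes, a constant yield-to-weight ratio r (kt/kg) is a straight
# diagonal: weight = yield / r, so log10(weight) = log10(yield) - log10(r).
r = 0.1  # kt/kg
for yield_kt in (1.0, 10.0, 100.0, 1000.0):
    weight_kg = yield_kt / r
    offset = math.log10(weight_kg) - math.log10(yield_kt)
    print(yield_kt, weight_kg, offset)  # offset ~ -log10(r) = 1.0 every time
```

Every point on the same ratio sits the same vertical distance from the yield = weight diagonal, which is why the ratio lines band the chart so neatly.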

A few points of interest here. You can see Little Boy (Mk-1), Fat Man (Mk-3), and the postwar Fat Man improvements (Mk-4 — same weight, bigger yield) at the upper left, between 0.01 kt/kg and 0.001 kt/kg. This is a nice benchmark for fairly inefficient fission weapons. At upper right, you can see the cluster of the first H-bomb designs (TX-16, EC-17, Mk-17, EC-24, Mk-24) — high yield (hence far to the right), but very heavy (hence very high). Again, a good benchmark for first generation high-yield thermonuclear weapons.

What a chart like this lets you do, then, is start to think in a really visual and somewhat quantitative way about the sophistication of late nuclear weapon designs. You can see quite readily, for example, that radical reductions in weight, like the sort required to make small tactical nuclear weapons, generally result in a real decrease in efficiency. Those are the weapons in the lower left corner, pretty much the only weapons in the Little Boy/Fat Man efficiency range (or worse). One can also see that there are a few general trends in design development over time if one looks at how the colors trend.

First there is a movement down and to the right (less weight, more yield — improved fission bombs); there is also a movement sharply up and to the right (high weight, very high yield — thermonuclear weapons) which then moves down and to the left again (high yield, lower weight — improved thermonuclear weapons). There is also the splinter of low-weight, low-yield tactical weapons as well that jots off to the lower left. In the middle-right is what appears to be a sophisticated “sweet spot,” the place where all US weapons currently in the stockpile end up, in the 0.1-3 kt/kg range, especially the 2-3 kt/kg range:

Yield-to-weight ratios -- trends

These are the bombs like the W-76 or the B-61 — bombs with “medium” yield warheads (100s rather than 1,000s of kilotons) in relatively low weight packages (100s rather than 1000s of kilograms). These are weapons that take advantage of the fact that they are expected to be relatively accurate (and thus don’t need to be in the multi-megaton range to have strategic implications), along with what are apparently sophisticated thermonuclear design tricks (like spherical secondaries) to squeeze a lot of energy out of what is a relatively small amount of material. Take the W-76 for example: it manages to get 100 kilotons of yield out of 164 kilograms. If we assume a 50/50 fission-to-fusion ratio, that means it manages to fully fission about 3 kilograms of fissionable material, and to fully fuse about 1 kilogram of fusionable material. And it takes only about 160 kg of other apparatus (and unfissioned or unfused material) to produce that result — which is just a little more than Shaquille O’Neal weighs.
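A rough sketch of where such numbers come from. The 50/50 fission-to-fusion split is an assumption, and the fission energy density used here (18 kt/kg, splitting the difference between the U-235 and Pu-239 figures given earlier) is likewise just a plausible round number:

```python
# Back-of-the-envelope breakdown of the W-76, assuming half its ~100 kt
# comes from fission and half from fusion. The 50/50 split is an assumption.
TOTAL_YIELD_KT = 100.0
WEIGHT_KG = 164.0
FISSION_KT_PER_KG = 18.0  # between U-235 (~17) and Pu-239 (~19)
FUSION_KT_PER_KG = 50.0   # lithium-deuteride

fission_kg = (TOTAL_YIELD_KT / 2) / FISSION_KT_PER_KG
fusion_kg = (TOTAL_YIELD_KT / 2) / FUSION_KT_PER_KG
other_kg = WEIGHT_KG - fission_kg - fusion_kg

print(f"~{fission_kg:.1f} kg fissioned, ~{fusion_kg:.1f} kg fused")
print(f"~{other_kg:.0f} kg of everything else")
print(f"Yield-to-weight: {TOTAL_YIELD_KT / WEIGHT_KG:.2f} kt/kg")
```

A few kilograms of material reacting, carried by roughly 160 kg of everything else.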

Such weapons aren’t the most efficient. Weapon designer Theodore Taylor wrote in 1987 that 6 kilotons per kilogram was pretty much the upper limit of what had ever been achieved.1 Only a handful of weapons got close to that. The most efficient weapon in the US stockpile was the Mk-41, a ridiculously high yield weapon (25 megatons) that made up for its weight with a lot of fusion energy.

The components of the B-61 nuclear weapon — the warhead is the bullet-shape in the mid-left. The B-61 was designed for flexibility, not miniaturization, but it’s still impressive that it could get 20X the Hiroshima bomb’s output out of that garbage-can sized warhead.

But given that high efficiency is tied to high yields — and relatively high weights — it’s clear that the innovations that allowed for the placing of warheads on MIRVed, submarine-launched platforms are still pretty impressive. The really magical range seems to be weapons in the hundreds-of-kilotons range (more than 100 kilotons but under a megaton) that weigh under 1,000 kilograms. Every one of those dates from after 1962, and probably involves the real breakthroughs in warhead design that were first used with the Operation Dominic test series (1962). This is the kind of strategic miniaturization that makes war planners happy.

What’s the payoff of thinking about these kinds of numbers? One is that it allows you to see where innovations have been made, even if you know nothing about how the weapon works. In other words, yield-to-weight ratios can provide a heuristic for making sense of nuclear design sophistication, comparing developments over time without caring about the guts of the weapon itself. It also allows you to make cross-national comparisons in the same fashion. The French nuclear arsenal apparently developed weapons in that same miniaturized yield-to-weight range of the United States by the 1970s — apparently with some help from the United States — and so we can probably assume that they know whatever the United States figured out about miniaturized H-bomb design in the 1960s.

The Tsar Bomba: a whole lot of boom, but a whole lot of weight. The US thought they could make the same amount of boom for half the weight.

Or, to take another tack, and returning to the initial impetus for me looking at this topic, we know that the famous “Tsar Bomba” of the Soviet Union weighed 27,000 kilograms and had a maximum yield of 100 Mt, giving it a yield-to-weight ratio of “only” 3.7 kilotons per kilogram. That’s pretty high, but not for a weapon that used so much fusion energy. It was clear to the Atomic Energy Commission that the Soviets had just scaled up a traditional H-bomb design and had not developed any new tricks. By contrast, the US was confident in 1961 that they could make a 100 Mt weapon that weighed around 13,600 kg (30,000 lb) — an impressive 7.35 kilotons per kilogram, something well above the 6 kt/kg achieved maximum. By 1962, after the Dominic series, they thought they might be able to pull off 50 Mt in only a 4,500 kg (10,000 lb) package — a kind of ridiculous 11 kt/kg ratio. (In this estimate, they noted that the weapon might have an impractically large diameter as a result, perhaps because the secondary was spherical as opposed to cylindrical.) So we can see, without really knowing much about what the US had in mind, that it was planning something very, very different from what the Soviets set off.
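The comparison in this paragraph, worked out numerically from the design figures quoted above (1 Mt = 1,000 kt):

```python
# Yield-to-weight ratios for the high-yield designs discussed above.
# (yield in kt, weight in kg; 1 Mt = 1,000 kt)
designs = {
    "Tsar Bomba (100 Mt design, 27,000 kg)":  (100_000, 27_000),
    "US 1961 study (100 Mt, 13,600 kg)":      (100_000, 13_600),
    "US 1962 post-Dominic (50 Mt, 4,500 kg)": (50_000, 4_500),
}
for name, (yield_kt, weight_kg) in designs.items():
    print(f"{name}: {yield_kt / weight_kg:.2f} kt/kg")
```

Roughly 3.7, 7.4, and 11 kt/kg — each step a substantial leap over the Soviet scaled-up design.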

It’s this black box approach that I find so interesting about these ratios. It’s a crude tool, to be sure, but a tool nonetheless. By looking at the broad trends, we get insights into the specifics, and peel back the veil just a tiny bit.

  1. Theodore B. Taylor, “Third Generation Nuclear Weapons,” Scientific American 256, No. 4 (April 1987), 30-39, on 34: “The yield-to-weight ratios of pure fission warheads have ranged from a low of about .0005 kiloton per kilogram to a high of about .1 kiloton per kilogram. [...] The overall yield-to-weight ratio of strategic thermonuclear warheads has been as high as about six kilotons per kilogram. Although the maximum theoretical ratios are 17 and 50 kilotons per kilogram respectively for fission and fusion reactions, the maximum yield-to-weight ratio for U.S. weapons has probably come close to the practical limit owing to various unavoidable inefficiencies in nuclear weapon design (primarily arising from the fact that it is impossible to keep the weapon from disintegrating before complete fission or fusion of the nuclear explosive has taken place.” []

Art, Destruction, Entropy

Friday, December 13th, 2013

Are nuclear explosions art? Anyone who has taken even a glance into modern and contemporary art knows that the official mantra might as well be “anything goes,” but I found myself wondering this while visiting the exhibition “Damage Control: Art and Destruction since 1950” that is currently at the Hirshhorn Museum. The first thing one sees upon entering is a juxtaposition of two very different sorts of “work.” On the right is a fairly long loop of EG&G footage of nuclear test explosions, broadcast in high definition over the entirety of a wall. On the left is a piano that has been destroyed with an axe. This, I thought, is at least a provocative way to start things off.

Edgerton, Germeshausen, and Grier (EG&G) was a contractor for the federal government during the Cold War, responsible for documenting nuclear test explosions. Quite a lot of the famous Cold War nuclear detonation footage was taken by EG&G. They are perhaps most famous for their “Rapatronic” photographs, the ultimate expression of MIT engineer Harold “Doc” Edgerton’s work of slowing down time through photography, but this was only a part of their overall contribution. The film they have at the Hirshhorn is something of an EG&G “greatest hits” reel from the 1950s, and its effect on the others in the audience was palpable. Adults and children alike were drawn to the blasts, displayed one after another without commentary or explanation.1 Their reactions didn’t strike me as one of disgust or horror, but of amazement and awe. Most of the footage was from the Nevada Test Site, so the bombs were generally just blowing up desert scrub, and occasionally houses constructed for effects testing.

The destroyed piano, by contrast, got reactions of shock and disgust. It was the remains of a piece of performance art conducted by Raphael Montañez Ortiz, one of several he’s done, apparently. My wife, a piano player and a nuclear historian, also found it disturbing. “If you know what goes into making a piano…,” she started to say. “But then again, if you know what goes into making a city…,” she caught herself. I overheard other people say similar things.

The difference in reactions isn’t too surprising — it’s a common theme that it is easy to appreciate the destruction of something at a human scale, difficult to appreciate it at the scale of a nuclear bomb. A lot of what I’ve spent time doing, with the NUKEMAP and my writing, is to try to understand, and to impart, the scale of a nuclear explosion. A lot of this has involved looking at the attempts of others, as well, from official Cold War visualizations made for secret committees to popular films, as they have tried to communicate this to their target audiences. The hardest thing is that our brains appear only to be wired for empathy at the individual level, and don’t readily apply it to large groups or large areas. The best work in these areas conveys both the broad scope of destruction, but then ties it into the personal. They individualize the experience of mass ruination.

And the EG&G footage isn’t trying to do that. It was data meant for very specific technical purposes. It was developed in order to further the US nuclear program, and defense against Soviet nuclear weapons. Which is why I somewhat question its inclusion, or, at least, its decontextualization. It is art only in the sense that it has aesthetics and it has been put into an art gallery. One can read into it whatever one wants, of course, but it wasn’t created to have deep meaning and depth in that sense. (Whether one cares about authorial intention, of course, is its own can of modern art worms.) Just as a small example of what I mean, Andy Warhol famously made a print of mushroom clouds for his own “disaster” series (a few of which, but not this print, were featured in the exhibit):

“Atomic Bomb,” Andy Warhol, 1965.

Now Warhol is a complicated character, but since he was explicitly an artist I think it is always fair game to talk about his possible intentions, the aesthetics of the piece, the deeper meanings, and so on. Warhol’s art has generally been interpreted to be about commercialization and commodification. The mushroom cloud in repetition becomes a statement about our culture and its fascination with mass destruction, perhaps. Coming in the mid-1960s, after the close-call terrors of the early years of the decade, it was perhaps too little, too late, but still, it has an ominous aesthetic appeal, perhaps now more than then.

Because I don’t think this image was widely circulated at the time, I doubt that Warhol knew that Berlyn Brixner, the Trinity test photographer, had made very similar sorts of images of the world’s first nuclear fireball at “Trinity”:

“TR-NN-11,” Berlyn Brixner, 1945.

Brixner appreciated the aesthetics and craft of his work, to be sure. But the above photograph is explicitly a piece of technical data. It is designed to show the Trinity fireball’s evolution over the 15-26 millisecond range. Warhol’s instrument of choice was the silkscreen printer; Brixner’s was the 10,000 fps “Fastax” camera. There’s a superficial similarity in their atomic repetition. You could make a statement by putting them next to each other — as I am doing here! — but properly understood, I think, they are quite different sorts of works.

Don’t get me wrong. Re-appropriating “non-art” into “art” has been a common move over much of the 20th century at the very least. But the problem for me is not that people shouldn’t appreciate the aesthetics of the “non-art.” It’s that focusing on the aesthetics makes it easy to lose sight of the context. (As well as the craft — Brixner’s work was exponentially more difficult to produce than Warhol’s!) The EG&G footage in the exhibit doesn’t explain much of how, or why, it was made. It seems to be asking the viewer to appreciate it solely on its aesthetic grounds. Which I think is the real problem. Many of the tests they show resulted in significant downwind fallout for the populations near the Nevada Test Site. Many of them involved the development of new, ever-more elaborate ways of mass destruction. Many of them were the product of years of top scientific manpower, untold riches, and a deep political context. To appreciate them as simply big, bright booms robs them of something — no matter how aesthetically beautiful those big, bright booms actually are. 

Gustav Metzger’s “auto-destructive” art.

What makes it more ironic is that the exhibit actually does give considerable context to some of the works that are explicitly “art.” You have to explain the context of Gustav Metzger’s “auto-destructive” art — it involves him filming himself painting on canvases with a strong acid, so the artwork destroys itself in the process. Without the context there, what is left is just a boring, not-very-comprehensible movie of a man destroying a blank canvas. But anyway.

In terms of the audience at the exhibit, which was fairly well-attended when I was there with my wife, the most interesting part was the handling of children. The Smithsonian museums are of course explicitly places that people take their children while visiting the city, so it’s no surprise that you probably find more of them at the Hirshhorn than you would at MOMA or other similar institutions. But children add a level of anxiety to an exhibit about destruction. They were wowed by the wall-o’-bombs but not, it seemed, by the piano. Parents seemed to let them wander free through most of it, but there were several films where I saw kids get yanked out by their parents once the parents realized the content was going to be disturbing. In one of these films, the “disturbing” content was of a variety that might have been hard for the children to directly understand — the famous film of the Hindenburg going up in flames, for example, where the violence was real but seen from enough of a distance to keep you from seeing actual injuries or bodies. The one I saw kids actually being removed from (by their parents, not the museum) was footage of the 2011 Vancouver riots. I wasn’t impressed too much with the footage itself (its content was interesting in a voyeuristic way, but there seemed to be nothing special about the filming or editing), but the immediacy of its violence was much more palpable than the violence-at-a-distance that one saw in most of the other such works. It’s a cliché to trot out that old quote attributed (probably wrongly) to Stalin that one death is a tragedy, a million is a statistic, but there’s something deeply true to it about how we perceive violence and pain.

Damage Control exhibit site

There are a lot of works in the exhibit. As one would expect, some hew to the theme very closely, some are a bit more tenuous. Overall, though, it was pretty interesting, and if you’re in town, you ought to check it out. The original comment my wife made about pianos and cities stuck with me as I looked at all of the various meditations on “destruction.” In it, I kept coming back to the second law of thermodynamics. On the face of it, it is a very clinical, statistical law: “the entropy of an isolated system never decreases.” It is actually quite profound, something that the 19th-century physicists who developed it knew. Entropy can be broadly understood as “disorder.” The second law of thermodynamics says, in essence, that without additional energy being put into it, everything eventually falls apart. It takes work to keep things “organized,” whether they are apartments, bodies, or cities.2 Ludwig Boltzmann, who helped formulate the law, stated gnomically in 1886 that:

The general struggle for existence of animate beings is not a struggle for raw materials – these, for organisms, are air, water and soil, all abundantly available – nor for energy, which exists in plenty in any body in the form of heat Q, but of a struggle for [negative] entropy, which becomes available through the transition of energy from the hot sun to the cold earth.

In other words, life itself is a struggle against entropy. Our bodies are constantly taking disordered parts of the world (heat energy, for example, and the remains of other living things) and using them to fuel the work of keeping us from falling apart.

But the other way to think about this law is that generally it is easier to take things apart than it is to keep them together. It is easier to convert a piano into a low-energy state (through an axe, or perhaps a fire) than it is to make a piano in the first place. It is easier to destroy a city than it is to make a city. The three-year effort of the half-a-million people on the Manhattan Project was substantial, to be sure, but still only a fraction of the work it took to make the cities of Hiroshima and Nagasaki, and all that they contained, biological and material, in the first place.

Of course, the speed at which entropy increases is often controllable. The universe will eventually wear out — but not for a long time. Human civilization will necessarily go extinct — but it doesn’t have to happen anytime soon. What hits home with the “Damage Control” exhibit is how we as a species have to work so hard to keep everything together, while simultaneously working so hard to find ways to make everything fall apart. And in this, perhaps, it is a success, even if I left with many niggling questions about the presentation of some of the works in particular.

  1. Various guys in the audience would occasionally try to give explanation to their loved ones, and they were generally incorrect, alas. “That must be at Alamogordo… That’s got to be an H-bomb…” no, no, no. Of course, I was there with my wife, and I was talking up my own little storm (though less loudly than the wrong guys), but at least I know my stuff for the most part… []
  2. The key, confusing part about the second law is the bit about the “isolated system.” It doesn’t say that entropy always increases. It says that in an isolated system — that is, a system with no energy being input into it — entropy always increases. For our planet, the Sun is the source of that input, and you can trace, through a long series of events, its own negative entropy to the Big Bang itself. []