Deterrence theory is one of those ideas that seems pretty easy at first glance but gets more deeply muddled on closer inspection. I have a bomb, you have a bomb, thus we won’t bomb each other, right? If only it were so easy.
The historian Spencer Weart has a great passage on what he calls the “insoluble paradox in deterrence theory” in his book, Nuclear Fear, which doesn’t seem to have been included in its recent reissue, The Rise of Nuclear Fear. It’s one of my favorite little bits of the book, though:
From the 1950s on the sharpest analysts left ambiguities, internal contradictions, and blind leaps of logic in their writings. Most writers changed their position from one year to the next and sometimes, it seemed, from one page to the next.
An example of the muddle was the failure of most writers to define clearly even the key term “deterrence.” Sometimes it meant, as the French translated the term, “dissuasion.” That meant arranging things so that enemies would deduce, like chess players, that they should not launch an attack because it was clear they would not win the game. Other times deterrence meant what the Russian translation frankly called “terrorization,” which did not address the intellect at all. Of course, military logic on the one hand or an appeal to raw fear on the other might well require different strategies and even different hardware. But most thinkers mixed the two approaches, evading refutation in one mode of thought by shifting indiscriminately to the other.
Ever since I read that, I have been taken with the idea that the French and the Russians translated this supposedly simple English word — “deterrence” — quite differently. For the French, it is dissuasion nucléaire, a high-minded, philosophe-style expression of modern rationality. The Russians often used ustrasheniye (устрашение), a word that evokes terror and horror and dread.
Where Weart sees a muddle of expression, though, I see an ambivalence of concept. Deterrence fully understood requires both of these meanings. It’s about rational actors, game theory, and logical persuasion — but the method of persuasion is threatening to burn everybody alive. It’s about nations being rationally terrified of each other’s capabilities.
This fundamental ambivalence of concept shoots through all of our cultural depictions of deterrence, as well. It’s not a surprise that most of the defense intellectuals depicted in books and films are simultaneously both of these things. Dr. Strangelove is of course the canonical, genre-defining case: coldly rational, but also completely psychotic.
There are lots of idealized representations of deterrence. Often it is talked about as if it were a “standoff” situation, with two pistoleros holding guns to one another’s heads. Visually these make sense: we’re talking about two superpowers, each ideally with a second-strike capability, so that if one attacks the other, the other has time to counter-attack. Ergo, nobody will attack — out of self-interest.
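The standoff logic can be sketched as a toy two-player game. The payoff numbers below are purely illustrative assumptions (not drawn from any actual strategic analysis): the point is only that, if retaliation is assured, attacking never improves a state’s position, so mutual restraint is the self-interested choice.

```python
# Toy model of the deterrence "standoff": two states, A and B, each
# chooses ATTACK or HOLD. The payoffs are illustrative assumptions:
# with an assured second-strike capability on both sides, any attack
# ends in mutual destruction, so a first strike gains nothing.

ATTACK, HOLD = 0, 1

# payoffs[a][b] = (payoff to A, payoff to B) when A plays a and B plays b.
payoffs = [
    [(-100, -100), (-100, -100)],  # A attacks: B's retaliation is assured
    [(-100, -100), (0, 0)],        # A holds: destruction only if B attacks
]

def best_response(my_payoffs):
    """Index of the move that maximizes my payoff, opponent's move fixed."""
    return max(range(len(my_payoffs)), key=lambda m: my_payoffs[m])

# Against an opponent who HOLDs, each side's best response is also HOLD:
a_best = best_response([payoffs[m][HOLD][0] for m in (ATTACK, HOLD)])
b_best = best_response([payoffs[HOLD][m][1] for m in (ATTACK, HOLD)])
print(a_best == HOLD and b_best == HOLD)  # prints True: restraint is stable
```

Of course, the model’s tidiness is exactly what the rest of this post complicates: it assumes a single rational decider on each side, symmetric capabilities, and payoffs that actually fall on the person choosing.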
But when we transpose the metaphor to reality, things get complicated. There are not just two people with guns. Each “person” is really an entire nation. The decision-making capabilities are not located in one brain or one set of sensory organs, but distributed over thousands of miles and thousands of human beings, each looking at the situation through quite different lenses (literally and figuratively). The guns may not themselves be evenly matched — one side may have the ability to strike faster, or more deadly, than the other. One side may be convinced they can “ride out” an attack better than the other. One side may have higher or lower confidence in their own capabilities, or the capabilities of their opponent. And so on.
Self-interest itself may not be evenly distributed. Was the U.S. President, or the Soviet Premier, personally threatened by nuclear war at all times in the Cold War? Do the people with their hands on the button have a personal stake in it? Do they have a bunker under a mountain to hide in, with their families? What is really being threatened in such a situation is an entire nation, but not necessarily the individual who has their hand on “the button.” In financial terms this runs the risk of being a “moral hazard” — the equivalent of gambling with someone else’s money. Of course, these people have loved ones, too, and not everyone can fit under that mountain.
The physicist and Nobel Prize winner Owen Chamberlain once proposed an improvement to deterrence based largely on reducing the possibility of a moral hazard, though he didn’t put it in these terms. He wrote to the president of the Federation of American Scientists in the mid-1980s with the following suggestion:
The idea I want to have looked over is this:
The 200 most important political and military persons in each superpower should be required to provide one family member who could act as a hostage by living inside the other superpower. Thus, every powerful politician or general would have one family member.
I claim this might be arranged easily, is really quite inexpensive, and I believe it has the potential of putting the world in a different frame of mind. It might make nuclear war seem out-of-the-question to all.
The hostages—maybe one can find better word—could be children or grandchildren or perhaps nephews and nieces. We could afford to have excellent schooling for the hostages, for the number involved would be very moderate.
I admit is a gimmick. However, it seems to me to be a gimmick with more than the usual protection for the dollar.
In essence, moral hazard is avoided if everyone has some skin in the game — especially the people who have their fingers on the metaphorical buttons.
It’s a gimmick, as Chamberlain admitted. But it’s a fairly profound one: the idea of “hostages” sounds abhorrent (even he dislikes the word) until you realize that in the “normal” deterrence situation, we — members of the non-button-pusher classes — are already hostages. That’s the real beauty of Chamberlain’s idea: he’s taking the existing situation, where all of the children and grandchildren and nieces and nephews are already threatened by nuclear war, and proposing to make it explicit and unignorable for those in positions of influence.
In a similar vein, Stephen Schwartz passed on this amazing suggestion that the late Roger Fisher made in the March 1981 issue of the Bulletin of the Atomic Scientists, again attempting to bring the personal back into the often coldly “rational” logic of nuclear warfare:
There is a young man, probably a Navy officer, who accompanies the President. This young man has a black attaché case which contains the codes that are needed to fire nuclear weapons. I could see the President at a staff meeting considering nuclear war as an abstract question. He might conclude: “On SIOP Plan One, the decision is affirmative, Communicate the Alpha line XYZ.” Such jargon holds what is involved at a distance.
My suggestion was quite simple: Put that needed code number in a little capsule, and then implant that capsule right next to the heart of a volunteer. The volunteer would carry with him a big, heavy butcher knife as he accompanied the President. If ever the President wanted to fire nuclear weapons, the only way he could do so would be for him first, with his own hands, to kill one human being. The President says, “George, I’m sorry but tens of millions must die.” He has to look at someone and realize what death is—what an innocent death is. Blood on the White House carpet. It’s reality brought home.
When I suggested this to friends in the Pentagon they said, “My God, that’s terrible. Having to kill someone would distort the President’s judgment. He might never push the button.”
And that’s truly the heart of deterrence, isn’t it? That mixture of the coldly logical and the deeply emotional — the fact that both of these valences are essential for the concept to work, and yet, in some essential way, they are also deeply incompatible. For how many people can remain coldly logical if they have to engage the truly personal head-on, as human beings?
One parting anecdote: J. Robert Oppenheimer, in 1953, famously compared the nuclear situation to “two scorpions in a bottle, each capable of killing the other, but only at the risk of his own life.” Sometime later, a newsman attempted to replicate the visual metaphor on television, and got himself stung by one of the scorpions in the process. I.I. Rabi wrote to Oppenheimer that the physicist’s many enemies would probably blame that on him, too.
December 2014 update: the Fisher story was featured in a Radiolab episode, “Buttons Not Buttons,” that I participated in.