Redactions

How many people died at Hiroshima and Nagasaki?

by Alex Wellerstein, published August 4th, 2020

One question I’ve been asked a lot by journalists is, “how many people died at Hiroshima and Nagasaki?” The reason they ask isn’t that they can’t use Google; it’s that if you start hunting around you’ll get lots of different answers to this question — answers that vary by a factor of two or so.

So I wrote an article for the Bulletin of the Atomic Scientists that answers this question, and it went live today: Counting the Dead at Hiroshima and Nagasaki. It goes over the various attempts that have been made since 1945 to come up with estimates on this, and the methodology behind making these kinds of estimates. I ended up tracking down just about every estimate I could find on the casualties, and was pleased to find that I could write a pretty decent history of these efforts despite being limited almost entirely to what was available online during a pandemic (I had to buy one book, in the end).

According to the Hiroshima Peace Memorial Museum, this photograph is of the Noboricho Elementary School circa the 1930s. The current Noboricho Elementary School is about 1 kilometer from ground zero at Hiroshima, in a range that inflicted around 98% fatalities on schoolchildren. As my article explains, school records were of particular interest and use to US military efforts to develop the distance-mortality curve which was used to calculate overall casualty rates. Of all of the photographs at the Hiroshima museum, this is the one I found most arresting, because the joy on these faces — both the children and the teacher — is so recognizable, and thus so tragic.

If you’re just looking for “the answer,” this is the paragraph that sums up the general gist of it:

There is, I think it should be clear, no simple answer to this. In practice, authors and reports seem to cluster around two numbers, which I will call the “low” and the “high” estimates. The “low” estimates are those derived from the estimates of the 1940s: around 70,000 dead at Hiroshima, and around 40,000 dead at Nagasaki, for 110,000 total dead. The “high” estimates are those that derive from the 1977 re-estimation: around 140,000 dead at Hiroshima, and around 70,000 dead at Nagasaki, for 210,000 total dead. Given that the “high” estimates are almost double the “low” estimates, this is a significant difference. There is no intellectually defensible reason to assume that, for example, an average (105,000 dead at Hiroshima, 55,000 dead at Nagasaki) would be more accurate or meaningful.

The “low” estimates come from US government (and US military) efforts in the 1940s to estimate the dead. I get the sense that they were trying to come up with real numbers here, not trying to guess low, but there are some real methodological shortcomings with their source terms. Essentially, these estimates are heavily dependent on how many people you think were in Hiroshima and Nagasaki on the days of the bombings. There are various ways to try and guess this, but ultimately there are reasons to think that all “official” figures you will find will be missing a lot of people. The people who made these estimates were well-aware of their shortcomings. As Stafford Warren, who led the Manhattan Project casualty estimate effort, explained before Congress: “I am embarrassed by the fact that even though I led a medical party which was supposed to get figures on the mortality, and so on, that we could not come back with any definitive figures that I would be able to say were more than a guess.”

The “high” estimates come from attempts in the 1970s, led by Japanese and international scientists, to come up with a new tally that takes into account some known populations who were left out of the original estimate, notably Korean forced laborers, commuters, and other groups that weren’t included in the kinds of statistics the government and military were looking at. To be sure, the people who did this felt that there was something unjust about undercounting the dead, and so there is a clear political angle to having higher numbers as well. But their work is similarly meticulous and well-argued, so there’s no easy way to just say, “oh, they’re too high.”

My recommendation for people wanting to use a number is to say who made it. If you want to use the “low” estimate, that’s fine — just state that it was generated by the US military. If you want to use the “high” estimate, that’s fine — just state that it was developed by a group of international scientists in the 1970s. Even better would be to include both, but that gets a bit wordier than most people want to use. For me, the aim is to make sure that these numbers are not just seen as something that is “simply known,” but were estimated by one group or another. 

The article also discusses why these numbers matter. The choice of “low” or “high” numbers is probably not incidental; I see the “low” numbers in sources that tend to emphasize the need for the atomic bombs, and the “high” numbers in sources that emphasize the suffering of the victims. That makes sense, but given that we don’t really have a good way to tell whether either of these sets is correct, I think it is worth being slightly cautious about putting too much political or moral weight upon the raw figures alone. Or to put it another way, if your argument depends on one set of these numbers being the “right” set to use… it’s probably worth thinking through the argument a little bit more.

Meditations

What if the Trinity test had failed?

by Alex Wellerstein, published July 16th, 2020

Today marks the 75th anniversary of the Trinity test. My thoughts on the world’s first nuclear explosion haven’t changed too much over the last five years, so you can read that if you want my “anniversary take.” The only thing that’s really changed in my thinking since then is that I was able to visit the site last summer, and spent some time in the area around it (Socorro, Alamogordo, etc.). It’s beautiful country out there, and compared to my normal (NYC-adjacent) environs, it still feels pretty isolated and sparse. The distances are just large in such a place — it takes you a long time to get anywhere, and driving 50 miles is no big thing. Anyway, I’ll have some opportunity to post a bit more about that trip later in the summer, I believe, so I’ll hold off for now.

The obelisk at the Trinity site, July 2019. Source: Photo by author.

What I’ve been thinking about lately is a question I have been asked a few times in the ramp-up to this year’s anniversary: What is the importance of the Trinity test? I’ve found it surprisingly difficult to answer more than superficially. Of course, I can easily explain the context of what the test was, why it was done, and what followed it. But “importance” implies, to my mind, a counterfactual: that something different would have happened, historically, if it had not occurred, or occurred differently.

I’ve written on counterfactuals before, and I suspect I find them more interesting than many historians. The “official historian response” to counterfactuals is to say, “well, we really can’t know what might have happened, since it is hard enough to know what did happen,” and I can co-sign that. But counterfactual questions can be a way to focus on why we think something was important, and it can be a useful way to think through what we do know about the past. So I often find them to be useful exercises, so long as you don’t put too much stock in their “reality.” (And I was always a fan of Marvel’s What If? series, for a less intellectual justification.)

So my question today is: What if the Trinity test had failed? 

Modes of failure

One of the reasons this feels like a somewhat jarring question is that we slide easily from “this is how history happened” to “this is how history must have happened.” We know the test was a “success,” and that colors everything we think about it and its preparations. But the chances of Trinity failing in one way or another were not all that low. There’s a lot that could have gone wrong with it.

Even after Trinity, J. Robert Oppenheimer estimated relatively high chances of the “combat unit” failing:

The possibilities of a less than optimal performance of the Little Boy are quite small and should be ignored. The possibility that the first combat plutonium Fat Man will give a less than optimal performance is about twelve percent. There is about a six percent chance that the energy release will be under five thousand tons, and about a two percent chance that it will be under one thousand tons. It should not be much less than one thousand tons unless there is an actual malfunctioning of some of the components.

It is probably not desirable to attempt at destination to establish, on a statistical basis, the reliability of Fat Man Components. On the other hand, it is desirable to subject the components scheduled for hot use to inspection and testing with the greatest care.

Oppenheimer’s estimates are high enough given the stakes of the test, but the big question is the un-estimated part: the failure of a component. Because the “Gadget,” and its weaponized form, Fat Man, had a lot of components. And they were all capable of failure. The implosion design required a lot of things to work just right, in order to get the simultaneous detonation (within a tolerance measured in nanoseconds) and correct shaping of the compressive forces that symmetrically shrank the solid-metal plutonium core to over half its original volume. This is why they were having the Trinity test in the first place: they didn’t know if it could be done at all, and even if it could be done, they didn’t know how well it would work.

An official diagram of the “Gadget,” snug in its casing as the Fat Man bomb. This incredible image comes from a manual that John Coster-Mullen received under the Freedom of Information Act, and the overall document describes all of the many activities that have to be done just right to make one of these fire correctly. There’s a lot that could go wrong; these were not “GI-proof.” Source: “Maintenance and Instruction Manual for Mark III Atomic Bomb Fuze,” Project Y, January 1946, courtesy of the intrepid John Coster-Mullen.

So let’s imagine three possible modes of “failure” for the test. The first is not really a failure at all: that “the Gadget” had gone off with the yield that it had been expected to have, prior to testing. This was around 4-5 kilotons, not the 20 kilotons it turned out to be. So that would not have been considered a failure by the scientists, but it would have made the plutonium bomb considerably smaller than their projections for the uranium bomb. We’ll round that up to 5 kilotons for simplicity’s sake.

A second possibility could be a result at the low-end of what they thought was plausible: a few hundred tons of TNT equivalent. Let’s say 500 tons, just to pick a number. That means the Gadget would be seriously underperforming: it would not meet their stated criterion for a usable atomic bomb (they had set that at 1 kiloton), but it would still be something you could drop on an enemy.

And an ignominious third possibility would be a total failure, a “fizzle” of zero nuclear yield. This would be the true component failure that Oppenheimer mentioned: a problem with the detonation system, or a major flaw with the lens system. There would be about 5 tons of TNT equivalent released by the high-explosive system, which would destroy the “Gadget” and scatter its plutonium. Again, this is not implausible at all — this was a new weapon, and these components were custom-made, and every technical system has a rate of failure. The scientists knew this was a real possibility; some of them even bet on it!

Prior to the Trinity test, the scientists had considered the (uranium-fueled) Little Boy bomb to be their “big” bomb, because they had always assumed it would be capable of hitting at least 5 kilotons, and probably around 15 kilotons. The (plutonium-fueled) Fat Man was guessed to have a yield that would range from a few hundred tons up to 5 kilotons. The advantage of the Fat Man design was that you could make more of them: their production rates, in late 1945, were about half of a Little Boy bomb’s worth of enriched uranium per month, compared to three Fat Man bombs’ worth of plutonium per month. So they saw their probable future capabilities as one big atomic bomb every couple months, with a few smaller atomic bombs in between. The actual Trinity test revealed that the Fat Man bombs were in fact more powerful than they had expected the Little Boy bombs to be.

A relevant excerpt from the notes of the second meeting of the Target Committee, from May 1945, which describe the range of expected yields at that time for the two weapons.

The technical implications of these different modes of failure, as I see them, are fairly straightforward. If the Trinity test had been as powerful as they expected it to be (5 kt), then it would not have changed much about how they saw the situation — 5 kilotons is still nothing to scoff at. If instead it had been on the low end (500 tons), then that would have been disappointing, but still within the realm of possibility, and I don’t think they would have done much differently, technically, other than try to figure out what had caused the reduced yield.

But if the test had totally failed to give a nuclear yield — I think they would have had to do another test. That was certainly their original plan in the event of a failure. That would have taken several weeks to prepare, at best. The original Trinity test had taken months to prepare, but a Trinity failure would still have done damage to the tower, and probably contaminated the site with plutonium. A “quick and dirty” Trinity test, where they just set one off somewhere, wouldn’t have given them the data they needed (and in the face of a total failure, I don’t see them thinking “quick and dirty” would be the way to go — especially given how precious their plutonium stockpile was). So I think that would have essentially set back the possibility of using a plutonium bomb on Japan by a month or so at the minimum. 

What if the cause had been something more nefarious, like intentional sabotage? (An idea floated in the sadly-cancelled Manhattan television show that I consulted for some years back.) I think this would have been the “total failure” outcome, plus a lot more security, hand-wringing, and paranoia. Which is to say: something very different, and not something I feel confident at all predicting, but a really interesting question.

Strategic and diplomatic implications

The success of the Trinity test told the US policymakers and military planners that atomic bombs worked, and that they would have a fair number of them over time. Both of these would have been challenged to different degrees by a failure.

A 5 kt Trinity probably wouldn’t have changed that much. Again, this was the expectation. It might have changed how the bombs were deployed, though. A 5 kiloton explosion would do roughly 40% as much damage as a 15 kiloton one. To put that in terms of raw effect, if you detonated a 5 kiloton bomb over Hiroshima today at the ideal blast height, you’d kill maybe 47,000 people (according to NUKEMAP), as compared to over 80,000 people for 15 kilotons. That’s still a pretty powerful weapon. But it would be conspicuously less powerful than the Little Boy bomb. So it’s possible they might have imagined using it for purposes other than destroying entire cities, such as targeting specific military bases. But overall I think this is probably “close enough” that their existing assumptions would likely still have held.
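The rough scaling behind comparisons like this can be made explicit: for a given blast overpressure, the radius of the damaged ring grows approximately with the cube root of yield, so the area it covers grows as yield to the two-thirds power. Here is a minimal sketch of that textbook scaling law (NUKEMAP’s casualty figures come from much more detailed models of population and effects; the function names here are my own, for illustration only):

```python
# Idealized blast scaling: the radius at which a given overpressure is
# reached grows roughly with the cube root of yield, so the area inside
# that ring grows as yield^(2/3). This is only the textbook scaling law,
# not NUKEMAP's detailed casualty model.

def blast_radius_ratio(yield_a_kt: float, yield_b_kt: float) -> float:
    """Ratio of blast radii for two yields at the same overpressure."""
    return (yield_a_kt / yield_b_kt) ** (1 / 3)

def blast_area_ratio(yield_a_kt: float, yield_b_kt: float) -> float:
    """Ratio of blast-damage areas (the radius ratio, squared)."""
    return (yield_a_kt / yield_b_kt) ** (2 / 3)

# Comparing the hypothetical 5 kt and 0.5 kt Trinity outcomes to the
# actual ~15 kt Hiroshima-scale yield:
print(f"5 kt vs 15 kt damage area:   {blast_area_ratio(5, 15):.0%}")
print(f"0.5 kt vs 15 kt damage area: {blast_area_ratio(0.5, 15):.0%}")
```

By this idealized scaling, a 5 kt burst covers a bit under half the blast-damage area of a 15 kt one, and a 500 ton burst only about a tenth — broadly consistent with the NUKEMAP-derived casualty comparisons in the text, which also fold in where people actually live.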

NUKEMAP screenshots of the effects of three blasts over Hiroshima, each set at the ideal height of burst for maximizing the range of the 5 psi blast range for their yield: 15,000 tons of TNT (15 kt), 5,000 tons of TNT (5 kt), and 500 tons of TNT (0.5 kt). From largest to smallest (roughly, as the effects scale differently over this range), the rings show: 1 psi blast pressure (very light gray), the maximum distance for 3rd degree burns from the thermal radiation (orange), the area of 5 psi blast damage (gray), the area of 500 rem ionizing radiation exposure in free air (green), and size of the fireball (yellow). Source: NUKEMAP / Map data © OpenStreetMap Contributors, CC-BY-SA, Imagery © Mapbox.

A 500 ton Trinity, on the other hand, is much less powerful than they had wanted the atomic bomb to be. It would still kill 15,000 people if dropped on Hiroshima today. But much of the city would still be intact — the psychological effect would be far diminished. And compared to the 15 kiloton bomb, it would have looked relatively paltry. Again, I think they might still have considered using it, but I don’t see them “wasting” any of their precious-few “reserved” targets on it — they’d be saving those for the big bombs, and using the smaller atomic bombs for other purposes. So I think this would really shake up how they saw their arsenal and their use of it.

It’s also possible that, depending on how well they thought they understood the failure, it might have affected their sensibilities about the Little Boy bomb as well. The scientists had high confidence that the gun-type design would work, and it was easier to confirm the principles behind it without a full-scale test. Would their confidence have been shaken? If their diagnostics of the Trinity test told them that the detonator system had worked as planned, then they might have worried that their deeper understanding of a fission bomb was incorrect. But if they thought it was just an assembly problem — something unique to the implosion design — then they’d probably have still been confident about the gun-type arrangement.

But the policymakers and military brass would probably have been a lot less confident. Outside of Groves, none of the other military leaders had a deep understanding of the bomb, and several expressed extreme pessimism about its prospects prior to Trinity. A Trinity failure would have reinforced these perspectives. It’s possible they might have judged the entire thing not ready for prime time, and scuttled any use plans until they were confident that it wouldn’t be an embarrassment.

And a failed Trinity would, as noted, probably mean that they would have extreme delays in their plutonium bomb capabilities. I think they’d still want to use the uranium bomb as soon as possible. But they’d know that they would not be able to follow it up with more attacks for some time. Maybe they’d try to bluff about that, or maybe they’d just downplay how much destruction they’d be delivering that way, I don’t know. But I think they’d consider it a pretty different situation.

Stalin, Truman, and Churchill at the Potsdam Conference on the day after Trinity (July 17, 1945). Source: Harry S. Truman Presidential Library.

What political implications would this have? Truman received the news of the successful Trinity test with great excitement, and it bolstered his confidence greatly with regards to the end of the war. I suspect that wouldn’t have changed with a 5 kiloton result, but with a 500 ton result, that would probably have been diminished. With a total failure, I think the opposite would have occurred: he might have gotten even more depressed about dealing with the Soviet Union, and with the prospects of an unconditional surrender by the Japanese.

So what might he have done differently if that was the case? The two places I can see him modifying his approach would be in his dealings with Stalin, and the question of unconditional surrender and the Potsdam Declaration.

After getting the successful results from Trinity, Truman took a very hard line with Stalin. He believed that the bomb gave him leverage for both the end of World War II and the peace that would follow. Though he did not try to argue that the Soviets should not declare war on Japan or stop their invasion plans, he was less convinced he would need the Soviet entry into the war, and did not encourage them. Without the confidence from Trinity, would he have pushed so hard? I’m not sure he would have; he might have felt the Soviet invasion too necessary for the end of the war to risk alienation. And if he had taken a more compromising approach, what would the impact of that have been on the later Cold War to follow? The Cold War was a complex thing, not the result of a single interaction, but there are scholars who have attributed some of its formation and angst to Truman’s post-Trinity bravado, so it’s not outside the realm of contemplation.

On the Potsdam Declaration, there were members of Truman’s cabinet and the military staff who were trying to put a sentence into the Allied dictum towards Japan that would clarify the position of the Emperor. They knew, from intercepted Japanese communications, that this was a sticking point for even those members of the Japanese high command who were interested in pursuing an end to the war. But Truman, at the urging of his Secretary of State, James Byrnes, pushed back hard on this, and deliberately did not make things “easier” for the Japanese in this respect. One can argue whether that was the right thing to do or not (that is a separate question), but would he have taken such a hard line with Japan if he didn’t have positive news from Trinity? Again, I doubt it — he would have been less sure of his own position, and may have listened to those who cautioned him towards moderation.

Would they have still used the Little Boy bomb on the original schedule, to be dropped just after the Potsdam Conference ended? I think this depends on what they thought the nature of the failure was for the Trinity test — if it was something particular to the implosion device specifically, then I think they would have continued as planned. If it was something that caused them more fundamental doubts, then they might have waited until those doubts could be resolved. I don’t think their messaging on the atomic bomb would be significantly affected, however; Truman’s announcement after Hiroshima is basically a “one bomb” announcement (and I am not sure he realized that more bombs would be coming soon afterwards).

War outcomes

Would a failure at Trinity have changed the outcome of World War II in any significant way? Ultimately this question relies on what you think caused the Japanese to offer up a conditional surrender on August 10th, and then an unconditional surrender on August 14th. In particular, it depends on how important you think the Nagasaki bombing was for the decision-making of the Japanese high command. 

There were a number of overlapping factors that contributed to Japanese surrender, and it is not always clear how much weight to assign each of them. The bombing of Hiroshima, the Soviet declaration of war and invasion of Manchuria, the bombing of Nagasaki, the US rejection of the conditional surrender offer, an abortive coup by junior Japanese officers, and an intensification of conventional bombing all took place over the space of less than a week. If we imagine the only thing missing from that list was Nagasaki, does it matter? Or if Nagasaki had used a much less damaging atomic bomb, would it matter?

On the left, the wispy remains of the mushroom cloud rise over the Trinity test site in New Mexico (and one of my favorite, unusual photographs of the test — one of the few that emphasize Trinity’s cloud). At right, a photograph I took just outside the White Sands Missile Range in summer 2019 — not exactly the same vantage point (I suspect the original was taken on a high ridge or from a plane), but pretty close. Source (left): Los Alamos National Laboratory, TR-239.

While it is hard to disentangle these things, the bombing of Nagasaki clearly left less of a distinct impression on the Japanese command than the other factors. It may have reinforced the feeling of hopelessness that led to the surrender offers, but it’s not clear it was necessary for them. The Japanese do not seem to have seriously doubted that the US had only one atomic bomb, and in any event, the other factors — such as the Soviet invasion — clearly weighed heavily on their minds.

So it’s not implausible to suggest that the war would have ended around when it did, even without Trinity being successful. That being said, the Japanese decision to surrender was not over-determined in any way. If the Japanese don’t offer up a surrender of some sort by August 10th, what then? Do they offer it up before a second Trinity can be tested, or a second gun-type bomb is ready? Here we end up in the weeds of speculation, beyond what we can know with much confidence. Or to put it another way, here is where whatever preconceptions you have about the end of the war will dominate: if you’re a “bombs did it” sort of person, then you’ll favor that kind of interpretation, and if you’re a “Soviets did it” sort of person, you’ll favor that one. I’m on the fence, though I lean towards the interpretations that show Nagasaki as not being that important, so I see the war ending around the same time it did anyway. This is a separate question from, “what if no atomic bombs were dropped on Japan,” which is even more contentious.

If that’s the case, then the Trinity test was not so important for that particular end, though those diplomatic decisions might have had long-term consequences. One thing is clear, though: if Trinity had failed, we’d talk about the Manhattan Project in a different light. It wouldn’t seem so “inevitable” that something “Manhattan Project-like” would succeed, and perhaps we wouldn’t be so quick to deploy allusions to it when talking about big science projects. 

The irony of this whole discussion is that the Trinity test is almost always framed as an important and impressive achievement. But what if it only meant that Nagasaki was avoided, and the war still ended? In that light, maybe it would have been a better thing if it hadn’t worked so well. To say that feels heretical, and I’m not sure I believe it. But it’s a provocative idea.

Either way, it’s easy to conclude, I think, that the Trinity test “mattered,” at least by the counterfactual criteria set out at the beginning: if it had failed, we’d have ended up in a somewhat different world. The interesting question to ask is whether that different world would be better or worse than the one we live in now — and that is surprisingly unclear.

Meditations

What journalists should know about the atomic bombings

by Alex Wellerstein, published June 9th, 2020

As we approach the 75th anniversary of the atomic bombings of Hiroshima and Nagasaki, even with everything else going on this year, we’re certainly going to see an up-tick in atomic bomb-related historical content in the news. As arbitrary as 5/10 year anniversaries are, they can be a useful opportunity to reengage the public on historical topics, and the atomic bombs are, I think, pretty important historical topics: not just because they are interesting and influential to what came later, but because Americans in particular use the atomic bombings as a short-hand for thinking about vitally important present-day issues like the ends justifying the means, who the appropriate targets of war are, and the use of force in general. Unfortunately, quite a lot of what Americans think they know about the atomic bombs is dramatically out of alignment with how historians understand them, and this shapes their takes on these present-day issues as well. 

The beginnings of the Hiroshima anniversary night lantern ceremony, 2017, which I attended.

Having seen the same recycled stories again and again (and again, and again), I thought it might be worth compiling a little list of what journalists writing stories about the atomic bombings ought to know. This isn’t an extensive debunking-misconceptions list (I’ll probably write another one of those another time), or a pushing-a-particular perspective list, so much as an attempt to talk about the broader framework of thinking about the bombings, so far as scholarship has advanced past where it was in the 1990s, which was the last time that the broader popular narratives about the atomic bombings were “updated.”

I’m tilting this towards journalists — particularly journalists trying to represent this from an American perspective, so frequently (but not exclusively) American journalists — not because they get it particularly wrong (they often do get it wrong, but so do even academics who don’t study this topic in detail), but because they are usually the primary conduit of this history to the broader public. And because, in my experience, most journalists who want to write on this topic are not bungling it because they are trying to push an agenda (that occasionally does happen, to be sure), but because they don’t know any better, because they aren’t reading academic history, and don’t talk to academic historians on this topic.

One thing I want to say up front: there are many legitimate interpretations of the atomic bombings. Were they a good thing, or a bad thing? Were they moral acts, or essentially war crimes? Were they necessary, or not? Were they avoidable, or were they inevitable, once the US had the weapons? What would the most likely scenario have been if they weren’t used? How should we think about their legacies? And so on, and so on. I’m not saying you have to subscribe to any one answer to those. However, a lot of people are essentially forced into one answer or another by bad historical takes, including bad historical takes that are systematically taught in US schools. There’s a lot of room for disagreement, but let’s make sure we’re all on the same page about the broad historical facts, first. It is totally possible to agree with all of the below and think the atomic bombings were justified, and it’s totally possible to take the exact opposite position.

There was no “decision to use the bomb”

The biggest and most important thing that one ought to know is that there was no “decision to use the atomic bomb” in the sense that the phrase implies. Truman did not weigh the advantages and disadvantages of using the atomic bomb, nor did he see it as a choice between invasion or bombing. This particular “decision” narrative, in which Truman unilaterally decides that the bombing was the lesser of two evils, is a postwar fabrication, developed by the people who used the atomic bomb (notably General Groves and Secretary of War Stimson, but encouraged by Truman himself later) as a way of rationalizing and justifying the bombings in the face of growing unease and criticism about them. 

The article in Harper’s by Henry Stimson, published in early 1947, is where the “decision to use the bomb” narrative was first put into its most persuasive and expansive form. I find it is useful to point out that even the “orthodox” narratives have their origins as well — people frequently treat them as if they fell out of the sky or were handed down on tablets.

What did happen was far more complicated, multifaceted, and at times chaotic — like most real history. The idea that the bomb would be used was assumed by nearly everyone who was involved in its production at a high level, which did not include Truman (who was excluded until after Roosevelt’s death). There were a few voices against its use, but there were far more people who assumed that it was built to be used. There were many reasons why people wanted it to be used, including ending the war as soon as possible, and very few reasons not to use it. Saving Japanese lives was just not a goal — it was never an elaborate moral calculus of that sort. Rather than one big “decision,” the atomic bombings were the product of a multitude of many smaller decisions and assumptions that stretched back into late 1942, when the Manhattan Project really got started. 

This is not to say there were not decisions made along the line. There were lots of decisions made, about the type of bomb being built, the kind of fuzing used for it (which determines what kinds of targets it would be ideal against), the types of targets… Truman wasn’t part of these. His role was extremely peripheral to the entire endeavor. As General Groves put it, Truman’s role was “one of noninterference—basically, a decision not to upset the existing plans.”

Truman was involved in only two major issues relating to atomic bomb decision-making during World War II. One was concurring with Stimson’s recommendation about the non-bombing of Kyoto (and the bombing of Hiroshima instead), which I have written about (and now published about) at some length. The other was the (not-unrelated, I argue) decision on August 10, 1945, to halt further atomic bombings (at least temporarily) because, as he put it to his cabinet meeting, “the thought of wiping out another 100,000 people was too horrible. He didn’t like the idea of killing, as he said, ‘all those kids.’”

It was never a question of “bomb or invade”

Part of the “decision” narrative above is the idea that there were only two choices: use the atomic bombs, or launch a bloody land invasion of Japan. This is another one of those clever rhetorical traps created in the postwar period to justify the atomic bombings, and if you accept its framing then you will have a hard time concluding anything other than that the atomic bombings were necessary. And maybe that’s how you feel about the bombings — it’s certainly a position one can take — but let’s be clear: this framing is not how the planners at the time saw the issue.

The plan was to bomb and to invade, and to have the Soviets invade, and to blockade, and so on. It was an “everything and the kitchen sink” approach to ending the war with Japan, though there were a few things missing from the “everything,” like modifying the unconditional surrender requirements that the Americans knew (through intercepted communications) were causing the Japanese considerable difficulty in accepting surrender. I’ve written about the possible alternatives to the atomic bombings before, so I won’t go into them in any detail, but I think it’s important to recognize that the way the bombings were done (two atomic bombs on two cities within three days of each other) was not according to some grand plan at all, but because of choices, some very “small scale” (local personnel working on Tinian, with no consultation with the President or cabinet members at all), made by people who could not predict the future.

Soldiers on Tinian prepare to load the “Fat Man” bomb for the second atomic bombing mission. Source: National Archives (77-BT-174).

While we are on the subject, we should note that many of the casualty estimates for the invasion became grossly inflated after the war ended. The estimates that the generals accepted at the time, and related to Truman, were that there would be on the order of tens of thousands of American casualties from a full invasion (and casualties include the injured, not just the dead). That’s nothing to scoff at, but it’s not the hundreds of thousands to even millions of deaths that have sometimes been invoked. Be wary of this kind of counterfactual casualty estimation, especially when it is done in the service of a conclusion that is already agreed upon. It’s easy to imagine the worst-case scenarios that didn’t happen, and to use these to justify the awful things that did happen. This is bad reasoning, and a bad approach to moral thinking — it is a particularly insidious form of “ends justify the means” reasoning which can justify damned near anything in the name of imaginative alternatives.

All that being said, I am not necessarily saying that other alternatives should have been pursued. That’s not really a position I feel comfortable staking out, if only because I can’t predict what would have happened if they had done other things instead. One can easily imagine the Japanese deciding to keep fighting in spite of everything, compounding the death and leading into a pretty grim postwar world. Frankly, when one looks at the chaotic Japanese decision-making that went into their actual surrender (more on that in a minute), it is quite clear that they only barely surrendered when they did. So I would not want to say, “they should have done it another way” with any confidence. But I do think it is important to point out that none of it was inevitable, and that some of the justifications for why they did it are quite overstated.

Separately, it is worth pointing out, because this is often obscured, that the invasion of Kyushu was not scheduled to begin until November 1945. There are many framings that make it look like the invasion was about to start immediately after the atomic bombings, and this just isn’t the case. The invasion of Honshu had not yet been authorized, and would not have begun until early 1946. None of this changes how awful an invasion would have been, but the tightening of the timeline, to make it seem like both were imminent, is part of the rhetorical strategy used to justify the atomic bombs.

There were many reasons that the Americans wanted to drop the atomic bombs

There are two main explanations given for why the Americans dropped the atomic bombs. One is the “decision to use the bomb” narrative already outlined (end the war to avoid an invasion). The other, which is common in more left-leaning, anti-bombing historical studies, is that they did it to scare the Soviet Union (to show they had a new weapon). This latter position is sometimes called the Alperovitz thesis, because Gar Alperovitz did a lot of work to popularize and defend it in the 1980s and 1990s. It’s older than that, for whatever that is worth.

Time magazine covers from 1945: Stalin, Truman, Hirohito. Each is kind of tacky in its own way.

When I talk to students about the atomic bombings, I usually have them tell me what they know of them. Maybe 80% know the “decision to use the bomb” narrative (we could probably call this the Stimson narrative if we wanted to be consistent). I chart this out on the whiteboard, highlighting the key facts of it. A few in each class know the Alperovitz narrative, which they got from various alternative sources (like Oliver Stone and Peter Kuznick’s The Untold History of the United States documentary). We then discuss the implications of each — what does believing in either make you feel about World War II, the atomic bombings, about the United States as a world power today?

And then I tell them that historians today tend to reject both of these narratives. Which makes them want to throw their hands up in frustration, I am sure, but that’s what scholarship is about.

I’ve written a bit on this in the past, but the short version is that historians have found that both of these narratives are far too clean and neat: they both assume that the nation had a single driving purpose in using the atomic bombs. This isn’t the case. (And, spoiler alert, it’s almost never the case.) As already noted, the process had many different parts to it, and no single “decision” at all, and so one can find historical figures who had many different perspectives. What is interesting is that for those involved in the making of the bomb, and the highest-level decisions about its use, almost all of those perspectives converged on the idea of using it.

So there certainly were people who hoped it would end the war quickly to avoid invasion. There were also those who hoped it would end the war before the Soviets declared war on the Japanese, giving the US a freer hand in Asia in the postwar period. (More on that in a moment.) There were also those who considered it “just another weapon” and attached no special significance to its use. And there were those who took the entirely opposite approach, seeing it as a herald of a future nuclear arms race, and who believed that the best first use of the bomb ought to be the one that laid bare its horrible spectacle. (Personally, I find this position the most historically and intellectually interesting — the idea that by using the bombs on cities in World War II, you’d prevent nuclear weapons from being ever used in anger again.)

And there were those who thought that one of the “bonuses” of using the bomb was to scare the Soviets. It’s not just a “revisionist” (a term I hate) idea — one can document it pretty easily (and Alperovitz does). This strain of thinking was particularly prominent in the thinking of Secretary of State James Byrnes, whose advocacy of “atomic diplomacy” against the Soviets was explicit. It’s not a goofy idea. The question is whether it is the whole story — and it’s not. 

Where both the Stimson and Alperovitz narratives fail is that they insist that there were only singular reasons to use the atomic bomb. But there were many people involved with it, and thus many different motivations. That’s not a problem if you want to argue for or against the atomic bombings — but implying it is just one or the other is a misrepresentation of this history, and also of how people generally operate. People are complicated.

It isn’t clear the atomic bombs ended World War II

My least favorite way in which the end of World War II is discussed goes along these lines: “The atomic bombs ended World War II.” My second-least favorite way is a weaker variant that is becoming more common: “A few days after the atomic bombs were dropped, the Japanese surrendered.” Note the latter doesn’t really say the bombs did it… but implies it very strongly. 

Scholars have known for a long time that the end of World War II was an immensely complicated event. Several events happened within the space of a few days, including:

  • The bombing of the city of Hiroshima
  • The Soviet declaration of war against the Japanese, and subsequent invasion of Manchuria
  • The bombing of the city of Nagasaki
  • Internal friction within the Japanese high command
  • An attempted coup by junior military officers
  • An offer of surrender that still maintained the status of the Emperor
  • A rejection of this offer by the Americans
  • An increase of American conventional bombing
  • An acceptance of unconditional surrender by the Emperor himself

If I tried to write out all of the events that the above encapsulate, this post would get very long indeed. The point I want to make, though, is that it isn’t some simple matter of “atomic bombs = unconditional surrender.” Even with two atomic bombs and the Soviet invasion, the Japanese high command still didn’t offer unconditional surrender! It was a very close thing all around, and it strikes me as impossible to totally disentangle all of the causes of their surrender.

I think it is fair to say that the atomic bombs played a role in the Japanese surrender. It is clear they were one of the issues on the minds of both those in the military who wanted the country to resist invasion as bloodily as possible (with the hope of making the Allies accept more favorable terms for Japan) and those who wanted a diplomatic end to the war (though even the latter did not imagine accepting unconditional surrender — they wanted to preserve the imperial house).

Russian image of the Soviet invasion of Manchuria, 9 August-2 September 1945. There’s something about all that red that, I think, underscores how disastrous this would have looked to the Japanese.

It is also clear that the Soviet declaration of war and subsequent invasion of Manchuria loomed large in all of their minds as well. Which is more important? Could we imagine the same results occurring if one hadn’t occurred? I don’t know. It’s complicated. It’s messy. Like the real world.

There are some who believe the war could have been ended without the atomic bombings (especially the bombing of Nagasaki, which does not appear to have changed things in the minds of the Japanese high command). I’ve never been totally convinced of their arguments; they strike me as a bit overly optimistic. But I think it is also clear that the Soviet role deserves far more attention than it typically gets in American versions of this story; it is easy to document its impact. And then again, one can ask how much would have been changed if the unconditional surrender requirement had been modified earlier on, like some American advisors had urged. The tricky thing about history, though, is you can’t just rewind it, change a few variables, and run it forward again. So it’s hard to have any confidence about such predictions.

Still, it’s worth noting that the relationship between the atomic bombs and the Japanese surrender is a complicated one. In the US, claiming the atomic bombs ended the war has historically been a way of writing the Soviets out of the picture, as well as making the case for the use of the bombs stronger. I think both of these make for bad history.

The bombings were controversial in their time

While the majority of Americans supported the atomic bombings at the time — they were, after all, told they had ended World War II, saved huge numbers of lives, and so on — it is worth just noting that approval was not universal, and that people questioning whether they had to be used, or whether they had ended World War II, were not fringe. Some of the most critical voices about the atomic bombings came from military figures who, for a variety of reasons, would come out against the bombings in the years afterwards. The US Strategic Bombing Survey, a military-led assessment of bombing effectiveness in World War II, concluded in July 1946 that:

Based on a detailed investigation of all the facts, and supported by the testimony of the surviving Japanese leaders involved, it is the Survey’s opinion that certainly prior to 31 December 1945, and in all probability prior to 1 November 1945, Japan would have surrendered even if the atomic bombs had not been dropped, even if Russia had not entered the war, and even if no [American] invasion had been planned or contemplated.

Which is a pretty shocking statement to read! I’m not saying you have to agree with it; you’re not bound by isolated judgments of the past, and there are good reasons to doubt the reasoning of the USSBS (they were acting in part out of fear that the atomic bombs would overshadow conventional bombing efforts and undercut their desire for a large and independent Air Force). But I think it is useful to point out that doubtful voices are not a recent thing, nor are they exclusively associated with the “obvious” points of views (like those sympathetic to Communism or the Soviets). The past is complicated, and the people in the past were complicated as well. I have frequently observed that the people who tell us not to impose present-day judgments on the past are unaware that many of these same judgments were made in the past as well.

In conclusion: talk to historians!

Nothing I have written above is, I think, terribly unknown to historians doing active work on this topic. They might have different takes on these things than I do (in fact, I know many of them do), and that’s fine. Historians disagree with each other. That’s part of the fun of it, and why it is an active area of research. There are lots of historical interpretations, lots of historical narratives, lots of juicy stories and hot takes.

But you wouldn’t know this from most historical coverage of the atomic bombings, especially when anniversaries roll around. There are probably non-insidious reasons for this. I get that journalists don’t have time to read every piece of scholarship that has come out since the last five-year anniversary, or even since the 1990s (most of the mythical discussions are stuck in the “culture wars” version of the atomic bomb story, best exemplified by the Smithsonian’s Enola Gay controversy of 1995). Journalists work on lots of topics, I get it.

Hello Kitty at the Atomic Bomb Dome

Something I saw for sale at a gift shop in the Grand Prince Hotel, Hiroshima, in 2017. Aside from its oddness, I like to use it when teaching to talk about how the bombings can mean different things to different people — this is the Hiroshima dome as a symbol of peace, not a symbol of destruction.

But journalists — you can reach out to us! There are many historians doing interesting work on these topics. Just get in touch. We’ll talk to you. And one question you should always ask any historians you contact is: “who else should I talk to?” Because some of us historians are more prominent than others (in that our names and websites come up when you Google them), but that doesn’t mean we’re the only ones out here. We’ll happily put you in touch with scholars who are well-known within our discipline, but harder to see outside of it. Because they’ve got interesting things to say, too.

And if you really aren’t sure who those might be, the author list of this 2020 volume from Princeton University Press (in which I published my article on Kyoto) is a good place to start!

Why do this? Because a) it’s better history, b) these historical narratives are tied to a lot of other narratives (like debates about the morality of war, for example), and c) because provocative, interesting, hot stories will practically write themselves if you talk to scholars working on the cutting-edge of this work. Let us help you! It’s win-win!

There are other things, of course, that a journalist ought to know — but these things, for me, stand out the most as the “big picture” issues that I still see coming up again and again, despite the fact that the scholarship has been beyond them for decades now. Those who do not know the past, at the very least, are in danger of repeating the same bad versions of the past…!


General news update: The COVID-19 crisis dramatically complicated my teaching and research productivity over the last semester, and I’ve been digging myself out of a work-hole ever since. The good news is that some very interesting things are in the works. And I should be posting a few more blog posts this summer! 


The President and the Bomb, Part IV

by Alex Wellerstein, published January 8th, 2020

It’s been a while since I’ve written anything in my “President and the Bomb” series. (You can read parts I, II, and III if you are interested.) It’s not that I’ve been inactive; much to the contrary, I’ve been researching this issue as one of my major research agendas, but most of that work has not yet seen the light of day. You can read a version of my work on comparative nuclear command and control schemes here, to give you a flavor of it. (Perhaps ironically, the more I am researching something professionally, the less likely it is to appear on the blog, because I’m tailoring it for other venues.)


The obviously-posed photograph that the White House released of their “situation room” crew “monitoring developments in the raid that took out Islamic State leader Abu Bakr al-Baghdadi.” I use this just to illustrate that in a situation like the one this post describes, the number of people — and ideas — in the room is likely going to be very limited, and intellectual, ideological, and any other forms of diversity are likely to be very low. And only one person — the one at the head of the table — actually makes the nuclear use decision. Source: Washington Post, October 2019.

The recent weeks’ events — the allegedly Iranian-backed attack on the US Embassy in Iraq, the assassination of General Suleimani, and the retaliation by Iranian forces against US military bases in Iraq — have me (like everyone else who is not a warmonger) feeling uneasy. Not only because it looks like careless escalation, but because it fits well into how I’ve been thinking about what the most probable “next use” of nuclear weapons might look like, and what a lot of the existing “presidential control” discussion gets wrong, in my opinion. 

Usually the question of “whose finger should be on the button” is framed in terms of what is sometimes called the “crazy President problem.” This is not a reference to the current President (except when it is meant that way deliberately); it’s an imagined scenario that goes like this: the US President wakes up one day, and, out of the blue, decides to start thermonuclear war. Do the generals comply? (Note that some of the other characters sometimes introduced into this — like “Does the Secretary of Defense refuse?” — are red herrings, because they are not strictly in the chain of command. See my Part III post.)

I get the rhetorical attraction of this way of framing the issue: it makes the issue of unilateral nuclear control very acute. But it’s not very realistic. Why not? Because a) that isn’t how mental illness works (it tends not to flare up in a totally unexpected way among otherwise “sane” people), and b) this is actually the easiest form of this problem to refute. Because a general can say, well, if a nuclear use order came “out of context” — this is their term, and means “out of the blue, without any threat against us” — then of course they would refuse it. Which I more or less believe is true, if you imagine this scenario happening at all.

Screenshot of "Trump Denies Asking Staff to Look into Nuking Hurricanes" from Vanity Fair

I don’t know if Trump actually asked his staff, repeatedly, whether or not hurricanes could be nuked — that I find it plausible perhaps says enough about me and my perceptions of him. This is not the normal framing of the “crazy President” problem but you can see how it feeds into it. Screenshot from Vanity Fair, 26 August 2019.

A much more probable scenario for US nuclear first use, for me, looks like this: a crisis builds in a region where there have historically been crises. There are legitimate security threats from and in that region. Something happens that pushes the President to want to respond with something “big.” The military gives him their standard three options (something bland, something insane, something sensible) with the hope it will force a sensible choice. Sound familiar so far? This is what the reporting on the Suleimani assassination says actually happened.

At this point we ask, would “the extreme option” ever be something like a nuclear attack? I very much doubt that it would be what most people think a nuclear option would look like (“wipe country X off the map”). Aside from being unambiguously a war crime (even by the quite flexible standards used by the military to evaluate strikes as war crimes), it just doesn’t match with my perception of how the military (from what little I know of them) think about how nuclear weapons might be plausibly used. So I don’t worry about that.

Could the “extreme option” be, “use a low-yield, high-accuracy nuclear weapon against an underground, unambiguously military site, that is relatively isolated from civilians?” Now we’re getting much more plausible. Most people, I think, would not consider something like this to be a good idea — we’re trained, rightly or wrongly, to see nuclear weapons as being inherently “large,” as things that necessarily kill many civilians, and that any first use would spiral out of control. Whether those things are true or not, there are plenty of analysts in academia, think tanks, and the military itself who do not see things this way. They believe nuclear escalation can be avoided, that nukes could just be another tool for the job, and that a low-yield, high-accuracy nuclear weapon (like the B61-12 nuclear gravity bomb, or the proposed Low-Yield Trident) would be useful not only as deterrents for tactical weapon use by another nation (which is to say, Russia), but as tools for both sending a big-but-not-crazy message and for destroying deeply fortified underground facilities. 


Image analysis by Hans Kristensen (Federation of American Scientists) of the accuracy of the B61-12 bomb, taken from video stills released by Sandia National Laboratories. Source.

Now there are many good reasons to think that the tradition of non-use for nuclear weapons is a good thing and should be perpetuated as long as possible. The US benefits from non-use more than it would benefit from use becoming normalized to any extent, as the JASON group concluded during the Vietnam War, when there were rumblings that tactical nuclear weapons might improve the US military situation. The US and its military are far more vulnerable to tactical nuclear weapons than many of our enemies (because we tend to centralize our forces), and we have the largest and most advanced conventional military in the world, and so we can afford to eschew low-yield nuclear use. (Remember that our Cold War interest in low-yield nukes was because we felt that the Soviets had overwhelming conventional forces. That’s not the case anymore — we’re the ones with the overwhelming conventional forces, and so we’re the ones that other nations would be tempted to use low-yield nuclear weapons against, as an “equalizer.”)

I have met some of the scholars and analysts who think low-yield nuclear use might not be a horrible idea (they might not say it was a good idea), and I don’t have any problem with said people as people. I can even see their way of thinking, because I’m a historian of science and I’m trained to be sympathetic to nearly any point of view. I can’t tell how many of these people actually think low-yield nuclear use would be a good idea, and how many of them are being academically contrarian because the bulk of academic thought on nuclear weapons supports the idea that they shouldn’t be used. I respect academic contrarians (they keep us on our toes, and skepticism is useful), but in the context of actual policy I think such ideas might actually be dangerous, because the people “at the top” might not realize how academia works: contrarian arguments can sound appealing, but there are frequently good reasons why most people who study these topics do not, in fact, believe them. 

So, to return to the thread, could a low-yield nuclear strike be included among the “extreme” options in such a hypothetical scenario? I think the answer is maybe, though I would still put that as unlikely — but it’s going to depend who draws up the menu of options. As we’ve seen in the last few years, the assumption that high-profile policymakers are all qualified for their positions, are not zealots, do not have views widely out of line with any form of consensus politics, etc., is totally unwarranted. So it’s possible, though it would be extreme indeed.

But what if, during this same set of options, someone whispers into the President’s ear, “what if we did that plan I mentioned the other day?” That is, what if there was a senior White House advisor who somehow got it into their head that a low-yield nuclear weapon would be a good idea, had talked about it previously to the President, and then injected it into the discussion? Might the President bring it up himself? And in that context, would the generals go along with it?


A surreptitiously-taken photograph of a meeting I attended at the Hoover Institution in January 2019, sponsored by the Nautilus Institute, which featured a talk and discussion with General John Hyten, then head of Strategic Command. I found it very revealing in terms of how the nuclear military views the present administration — as essentially a wonderful, exciting blank check. Photograph by me and my terrible phone camera. Hyten is the military officer just to the top-left of my name tag. To his right are George Shultz and David Holloway; to his left is the wonderful, late Janne Nolan. 

I have little doubt that the generals would probably try to persuade the President that this was a bad idea. I suspect the President’s senior cabinet would also try to do so, though I am less certain about this. But what if the President insisted on the nuclear option?

This isn’t a “crazy President” situation. This is a “the President is advocating for something that there are actually many rational arguments in favor of, in a context that might plausibly justify it” situation. That doesn’t mean it’s not a bad idea, one that could lead to a lot of long-term grief for the United States. But there’s a difference between a “bad order” and “an order that can be legally disobeyed.” 

This is why low-yield weapons make me uncomfortable. Not just because they might “lower the threshold of nuclear use,” the common objection to them. It’s also because they can remove the ability of the military to refuse to follow an awful order. A low-yield nuclear weapon, on an accurate delivery vehicle, might plausibly kill very few civilians if used against an isolated target. It wouldn’t necessarily fall outside any of the guidelines of proportionality, and for certain types of targets (again, underground bunkers and facilities) you can make a plausible argument for their military necessity relative to conventional weapons (they increase your chance of success dramatically). I think the military would have a very hard time refusing such an order. Even if they knew it was a bad idea, one that would hurt America diplomatically and, in the long term, militarily. 

It sometimes surprises people that when I rank my “most plausible chances for the next nuclear weapons use since Nagasaki,” the idea of the US using one is perhaps the top of the list. This isn’t because I think ill of the country, or even because it is the only country to have ever used nuclear weapons in war. It’s because I’ve talked to, and listened to, enough analysts (military and civilian) to get the feeling that there are a non-trivial number of voices out there who think nukes are “usable,” and that in a system where you only need to convince a single person (the President of the United States) of that point of view, then the possibility of them being used is a lot higher than you might think. (My other main “plausible scenarios” are basically “conventional stand-off with Russia leads to Russia using tactical nuclear weapons in combat” and “North Korea thinks we are going to decapitate them so they attack first“; the likelihood of any of these depends, as always, on the context). 

This is why, in my ideal world, I’d like there to be some kind of additional checks in place on the use of nuclear weapons. At some future point I’ll outline what I think an “ideal system” ought to look like (and I’ll write something on whether No First Use gets us there; I’ve got a post on the history of No First Use proposals in the works), but for now I’ll just say that we need to think not only in terms of massive attacks or “crazy Presidents,” but about the pernicious and highly-plausible (if history is any guide) possibility of somebody with just a bit of bad reasoning in the wrong place at the wrong time. 


Why NUKEMAP isn’t on Google Maps anymore

by Alex Wellerstein, published December 13th, 2019

When I created the NUKEMAP in 2012, the Google Maps API was amazing. It was the best thing in town for creating Javascript mapping mash-ups, cost literally nothing, had an active developer community that added new features on a regular basis, and actually seemed like it was interested in people using their product to develop cool, useful tools.


NUKEMAPs of days gone by: On the left is the original NUKEMAP I made way back in March 2005, which used MapQuest screenshots (and was extremely limited, and never made public) and was done entirely in PHP. I made it for my own personal use and for teaching. At right, the remade original NUKEMAP from 2012, which used the Google Maps API/Javascript.

Today, pretty much none of that is true. The API codebase has stagnated in terms of actually useful features being added: many neat features have been removed or quietly deprecated, and the new features that do arrive are generally incremental and lame. This is really quite remarkable given that the stand-alone Google Maps website (the one you visit when you go to Google Maps to look up a map or location) has gained a lot of neat features, like its 3-D mode, that have never been ported to the API code (which is why NUKEMAP3D is effectively dead — Google deprecated the Google Earth Plugin and has never replaced it, and no other codebase has filled the gap).

But more importantly, the changes to the pricing model that have been recently put in place are, to put it lightly, insane, and punishing if you are an educational web developer that builds anything that people actually find useful.

NUKEMAP gets around 15,000 hits on a slow day (several hundred thousand hits in a typical month), and has done this consistently for over 5 years; it occasionally spikes to several hundred thousand page views per day, when it goes viral for whatever reason. While that’s pretty impressive for an academic’s website, it’s what I would call “moderately popular” by Internet terms. I don’t think this puts the slightest strain on Google’s servers (they also run, like, all of YouTube). And from 2012 through 2016, Google didn’t charge a thing for this. Which was pretty generous, and perhaps unsustainable. But it encouraged a lot of experimentation, and something like NUKEMAP wouldn’t exist without that. 

In 2016, they started charging. It wasn’t too bad — at most, my bill was around $200 a month. Even that is pretty hard to do out-of-pocket, but I’ve had the good fortune to be associated with an institution (my employers, the College of Arts and Letters at the Stevens Institute of Technology) that was willing to foot the bill. 

But in 2018, Google changed its pricing model, and my bill jumped to more like $1,800 per month. As in, over $20,000 a year. Which is several times my main hosting fees (for all of my websites).

I reached out to Google to find out why this was. Their new pricing sheet is… a little hard to make sense of. Which is sort of why I didn’t see this coming. They do have a “pricing calculator” that lets you see exactly how terrible the pricing scheme is, though it is a little tricky to find and requires a Google account to access. If you start playing with the “dynamic map loads” figure (there are other charges, but that’s the big one) you can see how expensive it gets, quickly. I contacted Google for help in figuring all this out, and they fobbed me off onto a non-Google “valued partner” who was licensed to deal with corporations on volume pricing. Hard pass, sorry.


Google for Nonprofits’ eligibility standards — academics need not apply.

I know that Google in theory supports people using their products for “social causes,” and if one is at a non-profit (as I am), you can apply for a “grant” to defray the costs, assuming Google agrees that you’re doing good. I don’t know how they feel about the NUKEMAP, but in any case, it doesn’t matter: people at educational institutions (even not-for-profit ones, like mine) are disqualified from applying. Why? Because Google wants to capture the educational market in a revenue-generating way, and so directs you to their Google for Education site, which as you will quickly find is based on a very different sort of model. There’s no e-mail contact on the site, as an aside: you have to claim you are representing an entire educational institution (I am not) and that you are interested in implementing Google’s products on your campus (I am not), and if you do all this (as I did, just to get through to them) you can finally talk to them a bit.

There is literally nothing on the website that suggests there is any way to get Google Maps API credit, but they do have a way to request discounted access to the Google Cloud Platform, which appears to be some kind of machine-learning platform, and after sending an e-mail they did say that you could apply for Google Cloud Platform funds to be used for Google Maps API.

By which point I had already, in my heart, given up on Google. It’s just not worth it. Let me outline the reasons:

  • They clearly don’t care about small developers. That much is pretty obvious if you’ve tried to develop with their products. Look, I get that licensing to big corporations is the money-maker. But Google pretends to be developing for more than just them, and then doesn’t follow through on that pretense.
  • They can’t distinguish between universities as entities, and academics as university researchers. There’s a big difference there, in terms of scale, goals, and resources. I don’t make university IT policy, I do research. 
  • They are fickle. It’s not just that they change their pricing schemes rapidly, or that they deprecate products willy-nilly. It’s that they push out new products, encourage communities to use them to make “amazing” things, and then don’t support them well over the long term. They let cool projects atrophy and die. Sometimes they sell them off to other companies (e.g., SketchUp), which then totally change them and the business model. Again, I get it: Google’s approach is to throw things at the wall and hope they stick; it believes in disruption more than infrastructure, etc. etc. etc. But that makes it pretty hard to justify putting all of your eggs in their basket.
  • I don’t want to worry about whether Google will think my work is a “social good,” I don’t want to worry about re-applying every year, I don’t want to worry that the branch of Google that helps me out might vanish tomorrow, and so on. Too much uncertainty. Do you know how hard it is to get in contact with a real human being at Google? I’m not saying it’s impossible — they did help me waive some of the fees that came from me not understanding the pricing policy — but that took literally months to work out, and in the meantime they sent a collection agency after me. 

But most of all: today there are perfectly viable alternatives. Which is why I don’t understand their pricing model change, except in terms of, “they’ve decided to abandon small developers completely.” After a little scouting around, I decided that MapBox completely fit the bill (its rates are more like what Google used to charge), and that Leaflet, an open-source Javascript library, could make for a very easy conversion. The conversion took a little work, because Leaflet out of the box doesn’t support the drawing of great circles, but I wrote a plugin that does.
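For the curious, the usual way to draw that kind of circle is to sweep the spherical “destination point” formula through 360 degrees of bearing around the center and connect the results as a polygon. The sketch below is a generic illustration of that approach on a spherical Earth — the function names and segment count are my own choices for illustration, not the actual plugin code:

```javascript
// Approximate a circle of constant ground distance ("great circle" radius)
// around a center point, on a spherical Earth of mean radius 6371 km.
const EARTH_RADIUS_KM = 6371;

// Spherical destination-point formula: starting at (latDeg, lngDeg), travel
// distanceKm along the given initial bearing; returns [lat, lng] in degrees.
function destinationPoint(latDeg, lngDeg, bearingDeg, distanceKm) {
  const toRad = (d) => (d * Math.PI) / 180;
  const toDeg = (r) => (r * 180) / Math.PI;
  const lat1 = toRad(latDeg);
  const lng1 = toRad(lngDeg);
  const brng = toRad(bearingDeg);
  const dR = distanceKm / EARTH_RADIUS_KM; // angular distance in radians

  const lat2 = Math.asin(
    Math.sin(lat1) * Math.cos(dR) +
    Math.cos(lat1) * Math.sin(dR) * Math.cos(brng)
  );
  const lng2 = lng1 + Math.atan2(
    Math.sin(brng) * Math.sin(dR) * Math.cos(lat1),
    Math.cos(dR) - Math.sin(lat1) * Math.sin(lat2)
  );
  return [toDeg(lat2), toDeg(lng2)];
}

// Sweep the bearing 0..360° to build the circle's vertices.
function greatCirclePolygon(latDeg, lngDeg, radiusKm, segments = 90) {
  const points = [];
  for (let i = 0; i < segments; i++) {
    points.push(destinationPoint(latDeg, lngDeg, (360 * i) / segments, radiusKm));
  }
  return points;
}
```

The resulting array of [lat, lng] pairs can then be handed to something like Leaflet’s L.polygon() to render; on a Web Mercator map, these circles visibly distort at high latitudes and large radii, which is exactly why a plain Euclidean circle won’t do.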


NUKEMAP as of this moment (version 2.65; I make small incremental changes regularly), with its Mapbox GL + Leaflet codebase. Note that a while back I started showing the 1 psi blast radius as well, because I decided that omitting it caused people to underestimate the area that would be plausibly affected by a nuclear detonation.

Now, even MapBox’s pricing scheme can add up for my level of map loads, but they’ve been extremely generous in terms of giving me “credits” because they support this kind of work. And getting that worked out was a matter of sending an e-mail and then talking to a real person on the phone. And said real person has been extremely helpful, easy to contact, and even reaches out to me at times when they’re rolling out a new code feature (like Mapbox GL) that they think will make the site work better and cheaper. Which is to say: in every way, the opposite of Google. 

So NUKEMAP and MISSILEMAP have been converted entirely over to MapBox+Leaflet. The one function that wasn’t easy to port over was the “Humanitarian consequences” (which relies on Google’s Places library), but I’ll eventually figure out a way to integrate that into it.

More broadly, the question I have to ask as an educator is: would I encourage a student to develop in the Google Maps API if they were thinking about trying to make a “break-out” website? Easy answer: no way. With Google, becoming popular (even just “moderately popular”) is a losing proposition: you will find yourself owing them a lot of money. So I won’t be teaching Google Maps in my data visualization course anymore — we’ll be using Leaflet from now on. I apologize for venting, but I figured that even non-developers might be interested in knowing how these things work “under the hood” and what kinds of considerations go into the choice of making a website these days.


A simple example of the kind of thing you can do with NUKEMAP’s new fallout dose exposure tool. At top, me standing outside my office (for an entire 24 hours) in the wake of a 20 kt detonation in downtown NYC, using the weather conditions that exist as I am posting this: I am very, very dead. At bottom, I instead run quickly into the basement bowling alley in the Howe Center at Stevens (my preferred shelter location, because it’s fairly deep inside a multi-story rocky hill, on top of which is a 13-story building), and the same length of time gives me, at most, a slight up-tick in long-term cancer risk.

More positively, I’m excited to announce that a little while back, I added a new feature to NUKEMAP, one I’ve been wanting to implement for some time now. The NUKEMAP’s fallout model (the Miller model) has always been a little hard to make intuitive sense out of, other than “a vague representation of the area of contamination.” I’ve been exploring some other fallout models that could be implemented as well, but in the meantime, I wanted to find a way to make the current version (which has the advantage of being very quick to calculate and render) more intuitively meaningful.

The Miller model’s contours give the dose intensity (in rad/hr) at H+1 hour. So for the “100 rad/hr” contour, that means: “this area will be covered by fallout that, one hour after detonation, had an intensity of 100 rad/hr, assuming that the fallout has actually arrived there at that time.” So to figure out what your exposure on the ground is, you need to calculate when the fallout actually arrives to you (on the wind), what the dose rate is at time of arrival, and then how that dose rate will decrease over the next hours that you are exposed to it. You also might want to know how that is affected by the kind of structure you’re in, since anything that stands between you and the fallout will cut your exposure a bit. All of which makes for an annoying and tricky calculation to do by hand.

So I’ve added a feature to the “Probe location” tool, which allows you to sample the conditions at any given distance from ground zero. It will now calculate the time of fallout arrival (which is based on the distance and the wind settings), the intensity of the fallout at the time of arrival, and then allow you to see what the total dose would be if you were in that area for, say, 24 hours after detonation. It also allows you to apply a “protection factor” based on the kind of building you are in (the protection factor is just a divisor: a protection factor of 10 reduces the total exposure by a factor of 10). All of which can be used to answer questions about the human effects of fallout, and the situations in which different kinds of shelters can be effective, or not.
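The arithmetic behind that kind of tool is simple enough to sketch. Fallout dose rates are commonly approximated as decaying with time as t^−1.2 (the standard approximation behind the “7:10 rule”: seven times the elapsed time, one-tenth the dose rate), which integrates to a closed-form total dose between arrival and departure. The sketch below illustrates that approximation; the function names, the simple distance-over-wind-speed arrival time, and the example numbers are my own illustrative assumptions, not NUKEMAP’s actual code:

```javascript
// Dose rate at t hours after detonation, given r1, the reference dose rate
// (rad/hr) at H+1 hour — the quantity the Miller-model contours report.
// Uses the standard t^-1.2 decay approximation.
function doseRate(r1, tHours) {
  return r1 * Math.pow(tHours, -1.2);
}

// Total dose accumulated between arrival and departure, by integrating:
//   D = ∫ r1 * t^-1.2 dt = 5 * r1 * (tArrive^-0.2 - tLeave^-0.2)
// A shelter's "protection factor" is just a divisor on the exposure.
function totalDose(r1, tArriveHours, tLeaveHours, protectionFactor = 1) {
  const dose =
    5 * r1 * (Math.pow(tArriveHours, -0.2) - Math.pow(tLeaveHours, -0.2));
  return dose / protectionFactor;
}

// Example: a point on the 100 rad/hr (H+1) contour, 30 km downwind with a
// 24 km/hr wind, so the fallout arrives at H+1.25 hours; stay for 24 hours.
const tArrive = 30 / 24; // hours after detonation
const outdoors = totalDose(100, tArrive, tArrive + 24);     // unsheltered: ~216 rad
const basement = totalDose(100, tArrive, tArrive + 24, 10); // PF 10: ~21.6 rad
```

The example makes the point of the tool concrete: the same fallout field that delivers a plausibly lethal dose in the open becomes a modest one behind a protection factor of 10, and most of the dose accrues in the first hours, which is why arrival time matters so much.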

There are some more NUKEMAP features actively in the works, as well. More on those, soon.