Carl Robichaud on Reducing the Risks of Nuclear War
Contents
Carl Robichaud co-leads Longview Philanthropy’s programme on nuclear weapons. For more than a decade, Carl led grantmaking in nuclear security at the Carnegie Corporation of New York, a philanthropic fund aimed at strengthening international peace and security.
Carl previously worked with The Century Foundation and the Global Security Institute, where his extensive research spanned arms control, international security policy and nonproliferation.
In this episode, we discuss:
- Lessons from the Ukraine crisis
- How nuclear nonproliferation treaties are enforced
- China’s future as a nuclear power
- Nuclear near-misses
- How effective are missile defence and early warning systems?
- The future of nuclear weapons technology
- The Reykjavik Summit between Gorbachev and Reagan
- The Acheson–Lilienthal Report and Baruch Plan
- Lessons from nuclear risk for other emerging technological risks
- What’s happened to philanthropy aimed at reducing risks from nuclear weapons, and what philanthropy can support today
Carl’s recommendations
- Hiroshima by John Hersey
- Available to read on the New Yorker website
- Fallout: The Hiroshima Cover-up and the Reporter Who Revealed It to the World by Lesley M.M. Blume
- The Bomb: Presidents, Generals, and the Secret History of Nuclear War by Fred Kaplan
- The Dead Hand: The Untold Story of the Cold War Arms Race and its Dangerous Legacy by David E. Hoffman
- A Most Terrible Weapon — podcast produced by War on the Rocks
Other resources
- Zero Days (2016) — documentary about the Stuxnet worm
- Quora answer explaining how Stuxnet worked
- The Fog of War (2003) — Errol Morris directs this documentary charting the life of Robert McNamara
- Defense Science Board report on ‘Resilient Military Systems and the Advanced Cyber Threat’
- Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety by Eric Schlosser
- Two articles with commentary
Corrections
A couple of minor inaccuracies are par for the course on a nearly four-hour podcast! Carl notes the following corrections, none of which change the substance of the interview:
- When naming nuclear states, Carl mentioned England rather than the United Kingdom
- When mentioning states which explored a nuclear weapons program, Carl mentioned Spain. While there is some anecdotal evidence that Spain considered nuclear weapons, they are not included in official datasets
- The editor at the New Yorker at the time of the publication of John Hersey’s Hiroshima was called Harold Ross, not John Ross
Transcript
Fin: Hey, you’re listening to Hear This Idea. On this podcast, we often choose to talk about technologies that look potentially very dangerous or consequential. The thought is that if you want to try to make the world better, one thing you could focus on is reducing the biggest risks from those technologies. It’s notable that we hadn’t properly talked about nuclear weapons until this conversation. In this episode, I spoke with Carl Robichaud, who co-leads Longview Philanthropy’s new program on nuclear security. For more than a decade, Carl led nuclear security grant-making at the Carnegie Corporation of New York. Before that, he worked at The Century Foundation and the Global Security Institute. I should also mention that Carl is currently my colleague at Longview.
We covered a lot of ground, starting with what we can learn about nuclear risk from Ukraine, the significance of the nonproliferation treaty, how verification actually works, and what it takes to develop a nuclear weapons program. We also go over some history, discussing nuclear near misses, the Cuban Missile Crisis, the Cold War, the Reykjavik Summit, the development of missile defense systems, and the treaty on the prohibition of nuclear weapons. Finally, we talk about what’s going on with philanthropy aimed at reducing the risks of nuclear war. This is timely because about half of all the philanthropic work and spending in this area recently got withdrawn with the MacArthur Foundation winding down its program. I was especially interested to hear what kind of work philanthropy today can support. As always, if you’d like to skip to sections that stand out, you can use the chapter markers on your podcast app. Without further ado, here is the episode.
Carl: I’m a program officer at Longview Philanthropy, and I’m co-leading our nuclear grantmaking work. I’ve worked on international peace and security issues and nuclear issues for about a decade now. Before that, I was at Carnegie Corporation of New York, where I led our nuclear grant-making program. I joined Longview in September, and we’re building out a team and a grant-making portfolio to try to reduce the risk of nuclear war, the risk of nuclear arms races, and proliferation, doing that with attention to what it means for the long-term future.
Fin: Fantastic. I look forward to hearing a bit more about what your plans are on that front. I thought a natural place to start, since we’re going to be talking about the risks of nuclear war, is the conflict in Ukraine. I’m sure when people think about risks from nuclear war, their minds are drawn to that. What do you think it means for nuclear risk this century?
Carl: So Ukraine is really a wake-up call because you have this simmering conflict that’s been going on for many years, and it’s got a nuclear element to it. But it wasn’t on the front pages, and we’ve assumed that this nuclear status quo is pretty stable. And all of a sudden, we’re in this active war. Russia has invaded Ukraine, and we hear about threats to use nuclear weapons. And it reminds us that these weapons are there in a really visceral way.
And I think that the risk of nuclear use is a lot higher than it was a year ago, both in the short term while the conflict rages and for the longer term. This is inevitably going to have implications in terms of arms races and proliferation pressures. Now we have a say in what that story looks like. Nothing is inevitable. And I want to emphasize that when I say the risk of nuclear use is higher, I don’t mean that it’s high. We could talk about that. We could talk about what we think the likelihood of nuclear use is, but it’s surprising to see a war like this in Europe in 2022. I think even people who are relatively pessimistic about Russia’s intentions with regard to Ukraine were surprised when Russia actually went ahead with its invasion. I think we saw it coming several months before; it became pretty clear that they were really serious about it. But if you were to look back a year or two ago, most people wouldn’t have thought that. Most people thought that Putin was bluffing. And I think this is a reminder that we need to be humble about what we know or what we think we know. In retrospect, all the signs were there. But I think it was hard to believe them, because those who knew how seriously Russia took the issue of Ukraine, and how serious it was about keeping Ukraine in its orbit, knew that Putin had the motivation to do this. But those analysts also knew that the state of readiness of the Russian forces meant that such an attempt would probably fail, and they believed that even Putin wouldn’t make such a foolish bet. So there again, we need to be really cautious about what we think we know and about the likelihood of events in world politics.
Fin: Got it. So, yeah, one thing I take from that is that this conflict is, in part, a kind of news, or at least it’s raising some kind of awareness. And in particular, one piece of news is just appreciating that world actors sometimes act irrationally. Which is a problem if you’re trying to be confident about what they’re going to do next in these contexts, and if you’re worrying that they might do very dangerous things.
Carl: Yeah. Rationality is a tricky word. Right?
Fin: Right.
Carl: And it means different things in different contexts. Clearly, Putin and the leaders around him have goals, and they thought that the best way to achieve these goals was to invade their neighbor. It seems foolish given the way the war has gone so far for them. But we know that human decision making is fraught with all kinds of failures. We rely on heuristics. We fall prey to all kinds of fallacies, especially in a decision-making structure like in Russia, where decision-making authority is really concentrated. You have one man who makes a lot of those decisions and is surrounded by a lot of other yes men and people who may not be telling him the truth all the time.
You really leave yourself open to some bad decisions. And I will say that Putin miscalculated, but I think that the West also miscalculated. Because whatever we thought we were doing in order to deter Putin wasn’t sufficient to deter Putin.
And if we thought that providing Putin some outlet for his goals would be successful, we clearly didn’t signal that well either. So, yeah, there are two approaches to Russia’s coercive threats. One is to try to provide some kind of accommodation to Russia. For example, a plan for Ukrainian neutrality. The other is to bolster Ukraine and reinforce it against a Russian invasion as a way to deter Russia. And we didn’t pursue either of those strategies effectively, and that’s why we have a war now that is bad for Russia. It’s really bad for Ukraine. It’s bad for all of Europe. It’s bad for the global economy.
Fin: That seems right. And one thing you mentioned there, which I thought you mentioned before and really stood out, is this feature of nuclear risk, which I guess makes it distinctive from something like, let’s say, risks from climate change, where so much risk is concentrated, at least a lot of time, in the hands of very few people. Often people acting under extreme stress over very short periods of time. That means some of these questions become very difficult.
Carl: Absolutely. And I think one of the distinctive things about nuclear risk is that, fundamentally, nuclear weapons are about manipulating risk. Deterrence requires you to create fear in your adversary so that they do what you want them to do, or so that they don’t do what you don’t want them to do. Inherently, you’ve got to create that fear and anxiety, or deterrence doesn’t work. And so we are putting a lot of stress on decision-makers, and that’s all by design, because deterrence is ultimately about psychology. And I think it’s a system that’s really prone to failure. Fortunately, we haven’t had any deterrence failures in the first 77 years of the nuclear age. We haven’t seen nuclear weapons detonated in war since 1945, but we shouldn’t take that track record for granted. Because if you look at some of those incidents, we came pretty close.
Fin: Yeah. One thing that makes me think of: we’re asking what the strategy is to make the other actor believe that you’re likely to retaliate after a first strike. And a lot of the answers are really quite worrying, right? So, you know, one strategy is to somehow credibly signal that you are irrational, right? This is the kind of madman theory. Because that way, you know, the other actor can’t trust that you’ll do the self-preserving thing of not retaliating. Another answer is to show that you have a higher risk tolerance than the other side. I guess Thomas Schelling and others were talking about this kind of thought. I think one example is, you know, you’re handcuffed to someone near the edge of a cliff, and the winner is the person who gives in last. What do you do? His answer is, you just dance closer and closer to the cliff. And maybe in some sense that’s, you know, the correct strategy, but it’s also a very worrying answer when the stakes are so high.
Carl: So I think Putin miscalculated here about the West’s response to Ukraine because he believed that the West would see how much more important Ukraine is to Russia than it is to the West. And I think he was manipulating those asymmetric stakes in a way to keep the West out of the war.
And it has partially succeeded. We don’t see NATO troops on the ground, and we see the West limiting its commitment to Ukraine in some important ways. But I think he miscalculated the level of resolve, even from countries like Germany that are deeply tied to Russia economically. Germany has been willing to pay a high price in order to continue to support Ukraine throughout this crisis. And I think you can look at all of this as bargaining, bargaining with imperfect information. And as we know, sometimes deals go wrong.
Fin: Yeah. It’s a good answer. I guess, you know, points to a kind of value of committing to some kind of principle of defending your neighbor or forming alliances. Even when, strictly speaking, if you’re defending someone else, there’s not much in it for you in this kind of very direct sense. But if you can really commit to doing that, then it’s unlikely your neighbor will be attacked in the first place. Yeah.
Carl: I mean, I think you can understand why Russia thought that Ukraine might be a viable target and that the West wouldn’t respond because Ukraine is not part of NATO. Ukraine was not going to join NATO anytime soon. The US had very clearly limited the types of weapons it was willing to sell to Ukraine and the types of support it was willing to provide.
What might cause nuclear weapons to be used?
Fin: So, in terms of nuclear risk, some threats have been exchanged about the use of nuclear weapons in the context of Ukraine. I guess I’m curious to zoom in a little bit and hear how you think about where the risk is most likely to come from. What kind of story is most likely to play out where things go wrong?
Carl: So I think it would be good to go back to the question of why Russia would use nuclear weapons in Ukraine.
I think the simple answer is that Russia would use nuclear weapons if it felt it would advance its national interest, both in the short term and the long term. Now we know what Russian nuclear doctrine says. Russian nuclear doctrine limits the use of nuclear weapons to some very specific scenarios, in which nuclear weapons are used against Russia or the very existence of the Russian state is at risk. Right?
But I know a lot of Western analysts don’t put a lot of weight on that, because that’s a military doctrine, and, ultimately, there is a decision maker in Russia who is not going to be bound by doctrine if he feels like the use of nuclear weapons will help him achieve his goals. Now, the use of nuclear weapons would, I think, be very costly for Russia, and it would break the nuclear taboo, which I think Russia would not want to do. It would isolate Russia from some of the countries that are neutral right now in this conflict; you might think about China or India. So it would have costs in that way, and it could result in direct retaliation from NATO forces.
And NATO is under no obligation to respond to a nuclear strike against Ukraine, which is not a NATO member. That’s why Russia might think it could get away with a nuclear strike. At the same time, the US has said that the use of nuclear weapons in Ukraine would result in a catastrophic response. So Russia has to weigh that. Are they serious? What does catastrophic mean, etcetera? Right now, I don’t think Russia is seriously considering the use of nuclear weapons.
I think the scenarios in which Russia would consider the use of nuclear weapons are those in which it’s losing badly on the battlefield. It’s getting pushed back and facing a humiliating defeat. We are closer to that than we were 6 months ago, but we’re not near that point yet, I think.
So, yeah, some people talk about the strategy of escalate to win or escalate to deescalate. Russia would use nuclear weapons as a signal to the West: don’t let this thing get out of control. Let’s negotiate a settlement, and then Russia can achieve something that is short of defeat. It would be short of victory, but short of defeat. Putin would remain in power, and it would be a way to create an off-ramp from the conflict. I don’t think that’s a likely scenario right now. And, honestly, it comes down to Putin’s thinking and the thinking among his cadre of decision-makers.
Fin: Yeah. So maybe we could zoom out of Ukraine, think a bit about risks from nuclear weapons more generally over the next few decades. So maybe a first question here is, when we’re thinking about these risks, who are the main players that we should be thinking about?
Carl: Well, I think you’d start with the countries that have nuclear weapons: US, Russia, France, China, England, India, Pakistan, Israel, and North Korea. Those are the 9 fingers that are on the button right now. But there are a lot of other countries that are involved as well in nuclear risk. Think about the countries that are under the nuclear umbrella, as it’s called. That includes a number of NATO countries as well as some countries in the Western Pacific. Every country in the world has a stake in how these weapons are used and in the laws that govern them. A large nuclear war would not just affect the combatants; it would have cascading effects that would transform the world. And I think there was a recognition of that in the 1960s after the Cuban Missile Crisis, where we came very close to the use of nuclear weapons. We’ve had these weapons for 77 years, and so far, they’ve only spread to 9 countries. I think this is one of those underrated success stories. If you were to ask someone in the 1950s how many nuclear weapons they think there would be in 70 years, I don’t think many people would answer 9. I think this is a case of successful governance, especially the Nuclear Non-Proliferation Treaty, which limited proliferation, as well as bilateral diplomacy, especially between the US and the Soviet Union.
Even as they were competing in an arms race, they were working to limit the spread of these weapons and to avoid their use, especially after the Cuban Missile Crisis, which I think was a wake-up call for everyone. And so I think it’s strange to think about nuclear weapons as a success story. In some ways, they are. There is unfinished business because the status quo is not stable. We have a lot of weapons still. We don’t have as many as we did during the Cold War. In fact, we had about five times more at the height of the Cold War. That’s another success. We’ve been able to bring the number of weapons down substantially. But we take that for granted, I think. We assume the status quo is stable and can continue indefinitely. But there are many ways in which this fabric is pulling apart at the seams.
Fin: Okay. Yeah. I’d love to zoom in on that. So you say that the status quo is not stable. I can imagine lots of ways that might be true. But what did you have in mind then?
Carl: So first, you have an international treaty in the nuclear nonproliferation treaty, which fundamentally separates states into two categories, the nuclear haves and the nuclear have-nots. Five states are permitted to have nuclear weapons and retain them indefinitely. They need to make certain promises to move towards the cessation of the arms race and eventually towards full and complete disarmament. But there are no procedures or rules in place for how that should occur. It essentially locks in this nuclear monopoly for a small number of states, and everyone else has to accept that. That doesn’t seem like a sustainable situation for the long term, and it’s created a lot of tensions and pressures. And, you know, we know that there were 29 countries that have pursued nuclear capabilities at one point or another. Nineteen countries had pretty serious nuclear weapons programs.
Fin: Oh, wow.
Carl: Only nine of those have succeeded. But, you know, these are countries you don’t think today of having nuclear weapons programs, like Sweden, Switzerland, Spain, Taiwan, South Korea, etc. As well as countries like South Africa, which gave up its nuclear weapons.
Fin: How many countries developed and then subsequently gave up nuclear capabilities?
Carl: So South Africa is the key example. After the apartheid government fell, they gave up their nuclear weapons, which was a relatively small program. And you also have the post-Soviet states of Kazakhstan, Belarus, and Ukraine, which hosted Soviet nuclear weapons and afterwards gave those weapons up. Ukraine gave up the Soviet nuclear weapons that were based on its territory in return for a security assurance from Russia, the United States, France, and England. The Budapest Memorandum stated that they were going to give these things up in exchange for economic aid, support, and a promise that they wouldn’t be invaded. Russia has violated that with its recent invasion, and that’s one of the reasons why people believe that ensuring Russia doesn’t succeed in this war is really important from a nonproliferation standpoint.
Fin: Yes. Maybe I could try saying that back to make sure I’m getting it. So, you know, the question here is, in what sense could the status quo here be unstable?
And, you know, one sense is that, in a way, we are kind of lucky that only 9 states have a nuclear weapons program. And that is largely underpinned by this international regime, by the non-proliferation treaty, and a couple of subsequent treaties, where countries are kind of separated into countries with a program and without. The agreement is not to start a program, and there is some kind of quid pro quo there, where non-nuclear states get some kind of protection in return. But that is a kind of fragile state of affairs, where there is a certain amount of resentment you could imagine from non-nuclear countries who feel like they’ve been locked out by this kind of arrangement. You could see that maybe becoming a little frayed because of Russia. Also, on the other hand, some states have given up programs, so you can have movement on either end, but it seems especially unstable in the direction of states trying to get a nuclear program. Does that sound right?
Carl: I think it is fragile, and the reason it’s fragile is that it’s not that hard to build a nuclear program and to develop a nuclear weapon. Basically, there are 20-something countries that have an advanced military-industrial base and have nuclear power plants or other nuclear infrastructure that would give them access to fissile material. We have a safeguard system in place with the IAEA to monitor that, to make sure that civilian nuclear material is not used for military purposes. But we know that those protections are imperfect, and a state could say, as North Korea has done, that we don’t want to be part of that agreement anymore. We’re leaving, and we’re building nuclear weapons. And the way the laws work, they permit you to use civilian nuclear technology legally right up to the point where you’re building a bomb. This is the loophole that Iran has exploited to get very close to building a nuclear bomb if it decides to cross that threshold. The laws and rules in place are not sufficient. The reason we only have 9 nuclear-armed states is because of the diplomatic processes behind that. Basically, the United States has pressured its allies. The Soviet Union pressured its allies not to acquire these weapons and offered some security assurances in return. But all of those arrangements are fragile because they depend on an IOU, essentially. You are making a promise, and if that promise is no longer credible, you could see countries deciding to go ahead and cross that line. You hear talk now about whether South Korea, for example, would go ahead and acquire nuclear weapons. They’re certainly capable of it. Japan has a lot of separated fissile material on its territory that if it chose to, it could use for a nuclear bomb. There’s a strong taboo against nuclear weapons in Japan. I don’t think that’s likely, but it’s certainly plausible, especially on longer time horizons. You hear talk about Saudi Arabia, for example, and their ability to acquire a nuclear bomb from Pakistan. So we shouldn’t take any of this for granted.
Fin: And, you mentioned the IAEA there, like, as far as I understand, which isn’t very far. They are a body which tries to help on monitoring and verification.
I’m curious just more generally, beyond diplomacy, what does verification look like? How much is it possible to know about whether states are trying to acquire weapons?
Carl: Technically, it’s possible to know a great deal. The IAEA’s nuclear forensics are first rate, and if it were able to put in place the types of intrusive safeguards that are technically possible, you could do a lot to monitor and verify the absence of illicit activity, at least at declared facilities. The problem is that the IAEA doesn’t have access to everything that’s going on in these countries. The board of governors of the IAEA is very careful not to give the IAEA too much authority, for fear of violating a country’s sovereignty. About 15 years ago, there was an undisclosed facility in Syria believed to be a plutonium facility built by North Korea for Syrian use. Israel bombed that facility, and the IAEA asked to come in and inspect it. Syria dragged its feet and eventually dragged its bulldozers over the whole area to hide any evidence that might have been there. So we will never really know exactly what went on at that facility. We know the story of Iraq, and the IAEA was correct that Iraq did not have a nuclear weapons program. But the IAEA also was not able to persuade the world of that, because it didn’t have access to everything. It was at the whim of Saddam Hussein and his team what the IAEA was allowed to see. Technically, it’s a great agency. They’ve got great people working there, courageous and competent people for the most part, but there are limits on what they can do. And it’s only going to get harder as technology allows people to build things faster and with greater precision. Verification capabilities are increasing as well, but if the IAEA is not allowed to use those, its hands are tied.
Fin: Okay, I get it. Maybe this is getting too into the weeds, but I’m curious. For countries which do let the IAEA look around, monitor, and verify that country, what kind of technological means do they have, and what are they looking for? What are the indications?
Carl: Yeah. So, primarily, it’s an accountancy role. They are keeping track of the fissile material in these countries and ensuring that none of it goes astray. There is also the potential for additional inspections that would be more intrusive, and there’s a requirement that all declarations be not just correct, but complete. That’s the hard part for the IAEA—to verify the completeness of what it’s looking at. For example, in Iraq, they were able to take a look at all of Iraq’s known facilities in the early 1990s. Then after the Gulf War, they realized there was a hidden facility just over the berm that the IAEA had never been allowed to visit. This led to calls for the additional protocol and what’s sometimes called the state-level concept, which would allow for a more complete and holistic picture of everything going on within a country’s nuclear program, but that’s controversial.
And it’s being held up by countries like Russia and Iran that prefer to have more sovereignty and not have the IAEA involved in their business.
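To make the accountancy role concrete, here’s a minimal sketch of the material-balance logic that safeguards inspections are built around, sometimes called MUF, for “material unaccounted for”. All of the quantities, and the alarm threshold, are invented for illustration; real safeguards analysis uses detailed measurement-error statistics.

```python
# Toy material-balance ("MUF") check, illustrating the accountancy role
# described above. All quantities and the alarm threshold are invented;
# real safeguards use detailed measurement-error statistics.

def material_unaccounted_for(opening_kg, receipts_kg, shipments_kg, closing_kg):
    """Book inventory (what the records say should be present) minus
    measured physical inventory."""
    book_inventory = opening_kg + receipts_kg - shipments_kg
    return book_inventory - closing_kg

# Hypothetical declared uranium flows at one facility over one period.
muf = material_unaccounted_for(
    opening_kg=1200.0,   # measured physical inventory at start of period
    receipts_kg=300.0,   # declared receipts
    shipments_kg=250.0,  # declared shipments
    closing_kg=1248.5,   # measured physical inventory at end of period
)

ALARM_KG = 1.0  # invented threshold; real ones depend on material and accuracy
print(f"MUF = {muf:+.1f} kg")
if abs(muf) > ALARM_KG:
    print("Discrepancy exceeds threshold: flag for follow-up inspection")
else:
    print("Within measurement tolerance")
```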
Fin: Okay, I see. And what about testing? I’m kind of naively assuming that if a state was to acquire weapons, then first of all, it needs to test them. And second of all, it’s very difficult to hide evidence.
Carl: Yeah, exactly. So you have the Comprehensive Test Ban Treaty Organization. Now, the treaty has not entered into force. It has a very high bar for entering into force, which requires ratification by a large number of states. Some of those states, including the United States, have not ratified the treaty.
So it has not entered into force, but there’s this organization in Vienna that continues to operate. They have a lot of technological prowess, a very important mission, and they’re very good at detecting nuclear tests and separating out the types of tremors generated from a nuclear test from those generated from an earthquake. I think if you talk to most technical people, they have a high degree of confidence that the CTBTO could detect covert tests, at least any kind of explosive nuclear testing that crosses a certain threshold to be militarily useful.
Fin: I see.
Carl: Any country’s first nuclear test would definitely be detectable by the CTBTO. The question is, would some sophisticated tests by advanced nuclear nations like Russia, China, or the United States at the subcritical level be detected? I think they could, but there’s less confidence in that.
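One classic screen the monitoring community uses, which gets at the “separating tremors” point above, compares body-wave magnitude (mb) with surface-wave magnitude (Ms): explosions tend to produce a high mb relative to Ms. Here’s a toy version, with an invented cutoff and invented sample events:

```python
# Toy version of the classic mb:Ms discriminant: explosions tend to produce
# a larger body-wave magnitude (mb) relative to surface-wave magnitude (Ms)
# than earthquakes do. The cutoff and sample events are invented.

def looks_like_explosion(mb: float, ms: float, offset: float = 1.0) -> bool:
    """Crude mb:Ms screen: flag events where mb exceeds Ms by more than `offset`."""
    return mb - ms > offset

events = [
    {"name": "event A (quake-like)", "mb": 5.1, "ms": 5.3},
    {"name": "event B (explosion-like)", "mb": 5.0, "ms": 3.6},
]
for e in events:
    verdict = "explosion-like" if looks_like_explosion(e["mb"], e["ms"]) else "earthquake-like"
    print(e["name"], "->", verdict)
```

Real monitoring combines many such signals (depth estimates, waveform correlation, radionuclide sampling); this single ratio is only the textbook starting point.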
Fin: It sounds like, to some extent, there is a kind of offense-defense race where, presumably, it will become easier to more covertly manufacture weapons. But at the same time, it might also become, and it has already become, easier to get indications that a state is trying to acquire weapons.
Carl: Yeah, I think that’s right. Especially when you consider the full range of information that’s available now.
You have extraordinary overhead imagery, all kinds of signals intelligence, and other forms of data that can be used. Now, the IAEA is limited in its use of these capabilities, but nation-states use them regularly to try to understand whether a particular country is trying to acquire nuclear weapons. So I think there is an arms race there, and I feel confident that we’re not going to wake up one day and discover that some country nobody has been talking about has a nuclear weapon.
I think the more likely route to a new nuclear weapon state is the one that Iran is taking, where it uses permitted technologies to come very close to
Fin: I see.
Carl: A nuclear bomb and then crosses that threshold when it wants to. Now I want to be clear. I think that Iran has crossed the line and has engaged in some prohibited activities in the past, and that’s how they came to this negotiation, which led to the Joint Comprehensive Plan of Action, which puts very stringent inspections on Iran even beyond what is required by the IAEA.
The broader point is that a country acting like Iran could follow the rules up until the point where they wanted to cross that threshold and build a nuclear weapon.
Fin: Okay. And it sounds like, up to that point, to be confident that such a country hasn’t crossed the threshold, that country needs to submit to verification measures and inspections. It’s still quite difficult to know from the outside. Got it. Now, I guess we were talking about offensive and defensive technologies. I was originally gonna say this is a kind of defensive technology, but I suppose it’s a little complicated whether this actually makes the world safer. But I was gonna ask about missile defense technologies or anti-ballistic missile capabilities, I guess. Yeah, I’m curious what the state of the art looks like for these kinds of defenses. Whether they’ve been tested, whether they’re likely to work. Yeah. What’s going on?
Carl: Yeah. This is a controversial topic, and I think it’s one of the factors that could drive this new arms race that I’m talking about.
The United States, in particular, has been developing increasingly sophisticated missile defenses to protect the US homeland from an incoming attack. Mhmm. And that is inherently a good thing, but it has downsides because if you have a shield that’s threatening the viability of an adversary’s nuclear arsenal…
Then they are at your mercy in a way. And so it has this inherently offensive dimension to it because you could use your nuclear weapons first with impunity if you had a sufficiently effective shield.
Fin: Mhmm. Now I see.
Carl: The current US missile defenses are really limited. They’re not that effective. They’re very expensive. We don’t have that many interceptors. Their success rate is somewhere around 50/50 in tests that are preplanned, and we don’t know what that success rate would be in real-world conditions. But they have gotten better, and every year, they continue to get better. The US has thrown a lot of money at this problem. It has bipartisan support in Congress. And so there are a couple of risks here. One is you may have a false sense of security. And from some of the statements that President Trump made when he was in office, it’s clear that he either believed or wanted us to believe that these missile defenses were much more effective than they actually were. And that could lead you to take risks that result in a nuclear exchange. The other risk is that the other side sees you building a shield. They’re not going to stand by and allow their nuclear arsenal to be negated. They’re going to try to find ways around that. And that’s exactly what we see with Russia and China, and their development of new weapon systems that are specifically designed to get around US missile defenses. So you talk about nuclear-powered cruise missiles or hypersonic glide weapons, or the Status-6 Poseidon nuclear-powered supercavitating torpedo. Right? These are weird, esoteric systems that someone pulled off the shelf and just built, or is planning to build. What they have in common is that they are designed to get around US missile defenses. Does Russia have reason to fear US missile defenses? From where I’m sitting, it looks like paranoia. The US systems are not large. They’re not very capable.
And it makes you wonder what’s the reasoning in Russia. Russia has 1,500 deployed nuclear warheads on the strategic level. It has another 1,000 tactical nuclear weapons. It doesn’t seem like they would need to add to that arsenal to have security. So I think it’s a complicated story, and part of it is maybe they’re building up some bargaining chips to trade away. Part of it might be a military-industrial complex run amok.
I think China’s a more complicated case because China has had, for a long time, a very small nuclear arsenal. And this has been sort of a mystery as to why this large power, even as it was developing a very capable military and a large economy, had such a small nuclear arsenal, somewhere in the range of 200 nuclear weapons, and only 50 of those or so could reach US territory. A lot of people pointed out China’s doctrine of minimum deterrence, no first use, traced back to Mao’s vision of nuclear weapons. He saw nuclear weapons as these paper tigers that were not really worth investing in because you could just destroy a couple of an adversary’s cities, and they would have to submit. Something changed because we now see China building a lot of nuclear weapons at a surprising pace. The latest Pentagon estimate is that China has about 400 nuclear weapons now and got there a lot faster than people suspected.
Fin: Okay.
Carl: The expectation is within the next decade, they will go to somewhere around 1,000 nuclear weapons, and this reflects a major change.
Fin: Yeah. Can I ask, what do you think explains that change?
Carl: I think missile defense is a big part of this. They see US missile defenses and their improving capabilities, and they don’t want to be caught by surprise. There’s this great fear of strategic surprise. So, you know, all of a sudden, the US has 50 interceptors of type 1.0. Next thing you know, they have 50 interceptors of type 1.2. And then next thing you know, they have 1,000 interceptors of type 3.0. China doesn’t want to be on the wrong end of that equation. There is also reason to believe that China would be vulnerable to a very effective, precise US first strike using nuclear, conventional, and cyber weapons. I don’t think that anyone in the US is planning this. It’s not an attack the US intends to carry out. But if you look at US capabilities and the way that the precision of US conventional forces and nuclear forces has increased
Fin: Mhmm.
Carl: really substantially as a result of the digital revolution. And you think about all of that revolution in military affairs, the precision weapons coupled with electronic warfare, cyber warfare. The US way of war is to try to blind the adversary, to go in as soon as possible, knock out their communications.
Fin: Exactly. Yeah.
Carl: And that’s the scenario I think China needs to plan for even if they don’t think it’s likely because you don’t keep your job very long as a nuclear planner if you don’t plan for the worst-case scenario.
Fin: Yep. Yeah. Right.
Carl: And you have to look at US capabilities, not just its intent, and you have to assume that those capabilities are actually better than what they look like on paper. Yeah.
Fin: So I guess, on the topic of missile defense, have you heard of Project Retro, Carl?
Carl: No, I don’t know about this.
Fin: Okay. This is a side note, I guess. I was reading The Doomsday Machine, the Ellsberg book. Yeah. And Daniel Ellsberg, of Pentagon Papers fame, was working at the RAND Corporation, right? This is the early sixties or maybe 1960. The US military, maybe the Air Force, had asked them to evaluate this plan they had that got fairly far along with different layers of planning. It was a missile defense plan. And here’s the plan: you take about 1,000 ICBM rockets, I think they’re Atlas rockets, and you strap them to the ground horizontally, pointing them in the opposite direction to the Earth’s rotation. Then, when you detect an attack, your early warning system gives you the warning, and you press fire on these rockets. Because they’re strapped to the Earth, they slow down the Earth’s rotation just enough that presumably Soviet missiles overshoot their target and fall in the sea or whatever.
Carl: That is a wild scheme. I had not heard about that one.
Fin: Yeah. I just had no idea. It’s just kinda crazy that it got beyond someone just having an idea and then immediately realizing it would never work. And of course, listeners, it does not work.
Carl: The Earth is a pretty big object.
Fin: Yes. Citation needed.
Carl: Citation needed. Yeah. There’s a history of DARPA that I read recently, The Imagineers of War. It talks about a lot of other harebrained schemes like these, some of which got funding.
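For the curious, a back-of-envelope calculation shows just how big. The numbers below (Atlas-class thrust, burn time, Earth’s moment of inertia) are rough values assumed for illustration, but the conclusion survives any reasonable choice:

```python
# Back-of-envelope check on "Project Retro", using rough assumed values for
# Atlas-class thrust, burn time, and Earth's moment of inertia.

I_EARTH = 8.0e37      # Earth's moment of inertia, kg*m^2 (approx.)
OMEGA = 7.292e-5      # Earth's rotation rate, rad/s
R_EQUATOR = 6.378e6   # lever arm for a ground-anchored rocket, m

N_ROCKETS = 1_000
THRUST = 1.7e6        # N per Atlas-class rocket (rough sea-level figure)
BURN_TIME = 300.0     # s, assumed

torque = N_ROCKETS * THRUST * R_EQUATOR        # N*m, all rockets firing along the equator
delta_omega = torque * BURN_TIME / I_EARTH     # change in spin rate, rad/s

print(f"fractional change in Earth's spin rate: {delta_omega / OMEGA:.1e}")
# About 6e-16: the ground would shift by well under a nanometre over a
# missile's half-hour flight, so incoming warheads would not miss.
```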
Fin: Great. So there’s a lot to unpack there, but, you know, I guess one piece is we were talking about missile defense. And at least I might naively think, well, we have this technology to defend against these terrible weapons. Presumably, that will make the world safer and more stable. But if you just think about the response, beyond the immediate effects, other actors might compensate with new weapons technologies. But also, more crudely, if my adversary can intercept 4 out of every 5 weapons I send their way, well, that’s a reason for me to make, or at least send, 5 times as many weapons.
Carl: Exactly.
Fin: Maybe that kind of dynamic is playing out with China.
Carl: Well, we saw that dynamic during the arms race in the Cold War.
Fin: Right. Yeah.
Carl: A sense of vulnerability fueled a dramatic increase in weapons. The US would build more weapons to try to be able to hit the targets it needed to hit in the Soviet Union in order to reduce its vulnerability.
And it was this game that got out of control because the Soviets knew they needed to build more weapons in order to be resilient to that.
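The arithmetic in that exchange is easy to make explicit. Here is a toy model, assuming each incoming warhead is intercepted independently with a fixed probability, which is a large simplification of any real defense:

```python
# Toy offense-defence arithmetic: if a shield stops some fraction of incoming
# warheads, the attacker restores the same expected number of arrivals simply
# by scaling up the salvo. A fixed per-warhead intercept probability is a
# big simplification of real layered defenses.

def expected_arrivals(salvo_size: int, p_intercept: float) -> float:
    return salvo_size * (1.0 - p_intercept)

target_arrivals = 100
for p in (0.0, 0.5, 0.8):
    # smallest salvo whose expected arrivals still meet the target
    salvo = int(round(target_arrivals / (1.0 - p)))
    print(f"intercept prob {p:.0%}: salvo of {salvo} "
          f"-> ~{expected_arrivals(salvo, p):.0f} expected arrivals")
# 0% -> 100, 50% -> 200, 80% -> 500: a "4 out of 5" shield invites a 5x build-up.
```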
Fin: The Cold War example, tell me if I’m wrong, but I have some picture that the Soviets maybe overestimated the capabilities of the Star Wars idea and kind of overcompensated for that by adding to their stockpiles. Whereas, really, Star Wars never became very effective.
Carl: You’re absolutely right. Star Wars was this harebrained scheme that even the people who were working on it didn’t really think would work. I mean, they were happy to cash the checks, but a lot of them were very skeptical of what they were pursuing. Now, that doesn’t mean that missile defense could never work, and there are certain approaches to missile defense that might be more promising. If we were ever able to master boost-phase missile defense, for example, to take out a missile on the launch pad or in the initial stages when it’s moving relatively slowly.
Fin: Okay.
Carl: Directed energy, you know, lasers or microwaves to fry components. That’s something that could work. Space-based missile defense is something that I’m worried people will take off the shelf and try to master because I think it’s likely to lead down some really risky paths. But, you know, everything old is new again. As we enter this uneasy phase of great power competition, in which Russia is in Ukraine, China is threatening Taiwan, and the US has a massive military with incredible capabilities, we’re in a slow-motion arms race. You can see Russia dusting off some of these crazy Cold War concepts and putting them into operation. You see China revising its nuclear arsenal and going from a very small minimum deterrent up to something that will be on par with the current arsenals in Russia and the United States within a decade or two. It’s a different world, and we can’t assume that this relatively benign status quo that has persisted since the early 1990s is going to be with us into the future.
Fin: I see. And just quickly on China, I was wondering what are the reasons for being confident that China looks set to increase its stockpiles? In other words, what is the evidence to think that it’s likely to do that?
Carl: China is not very transparent with regard to its nuclear arsenal. So they haven’t said that they’re increasing in this way. They’ve said nothing. The only evidence that we have is what has been stated by the US intelligence community. You always have to take that with a grain of salt, and I think people have been skeptical in the past about some of these worst-case scenario estimates. But all that changed about a year ago, when a group of open-source analysts using commercial satellite imagery noticed a new missile field in China, with something like another 100 missile silos under construction. None of China’s explanations for what this facility is are remotely plausible.
So we’re in a new world now where citizen journalism and citizen geospatial analysis, social media analysis can gather information and add to the debate.
It was researchers at the Federation of American Scientists and the Middlebury Institute of International Studies at Monterey who made this finding public. And I think it’s really changed the tenor of the conversation.
Fin: Wow. Okay. And I suppose we might expect that to happen more in the future, as this kind of open-source intelligence matures.
Carl: Yeah. If you look today, the types of commercial satellite imagery that you can buy off the shelf for a few thousand dollars, the quality of those images is better than what the best Cold War
Fin: Right.
Carl: satellite imagery looked like.
Fin: Uh-huh.
Carl: And so anything visible from the sky is very hard to conceal now. You can still find ways to hide things, but again, it’s this cat and mouse game.
But, you know, that’s just what’s available commercially. You have to assume that the intelligence agencies have things that are an order of magnitude better, including overhead imagery that can penetrate underground. The same technology that’s used to uncover new archaeological discoveries can be used to look at buried and hardened facilities in different countries.
And that’s not even getting into some of the cyber intrusion, signals intelligence, and other things. So, I think we should assume that it’s going to be easier to find things going forward. The question is whether it’s also going to get easier to build things, and to hide them. Right?
Fin: Yeah. Got it.
Carl: Part of the problem is, you know, you might find something. You might know about it, but what are you gonna do about it? Right? And that’s what’s happened in Iran. We’ve known that they have all these facilities, including some facilities that they had not disclosed to the IAEA. But are you going to use military strikes against those? And what law has Iran violated at this point?
Fin: I see. And, I have to say, the idea of being able to penetrate underground with overhead imagery. I’m assuming this is just from orbit, like from satellites?
Carl: Or from a plane.
Fin: Yeah. Wow. Yeah. And then you also mentioned various kinds of cyber intrusion. That was actually something I was keen to ask. I guess there’s actually a few questions here. But maybe the first one is, whether it’s harder or easier to, kind of, find out what other actors are doing because of cyber hacking. Were there any examples of that? And then also, like, you know, hacking the bomb type worries. They feel kinda separate in my mind, if that makes sense.
Carl: They are separate. I think there’s two separate categories of cyber, and they overlap. And one is, essentially, cyber espionage.
And I think that if you look at the Iran situation, we had cyber espionage in place, which allowed us to— I say we. The United States had cyber espionage in place that allowed it to track developments, including keystroke tracking that was recording what the Iranian analysts at facilities were typing into their keyboard.
Fin: Okay.
Carl: And so that’s…
Fin: Does this have to do with Stuxnet, or was that a separate thing?
Carl: It’s separate, actually. Yeah.
Fin: Okay. Alright.
Carl: It does have to do with Stuxnet in that what they learned prompted the US and Israel, allegedly, to implant this virus into the centrifuge facility. It was a very sophisticated virus that was not easily detectable by Iran, because it didn’t just explode things. Instead, it made the centrifuges fail in ways that covered its tracks. So at first, it looked like just typical technical errors, and it set back the Iran nuclear program by months or years, depending on who you talk to. However, because of that program, Iran eventually realized what was up. It wiped all of its computers, installed new systems, and implemented much more rigorous cybersecurity around its facilities.
Fin: I see.
Carl: And in so doing, they wiped away all of the key logging and other espionage systems that the US had in place at that point. Presumably, the US came up with some other ways to try to get into those systems, but, again, it’s one of these cat and mouse games, and you never really know because it’s all classified. The US to this day has never acknowledged Stuxnet. Right?
Fin: Mhmm.
Carl: Yeah. There’s a great movie about this called Zero Days that gets into that.
Fin: Okay. Great. We’ll link to that. I also remember reading this post somewhere. It was a response to the question, what is the most sophisticated piece of software ever written? And someone had told the story of Stuxnet, you know, quite briefly, and it just kinda blew my mind. So I’ll link to that as well.
Carl: Yeah. I think it was a really sophisticated piece of malware, and it was introduced through an industrial control system, so it spread outside of the sandbox, which is why it was ultimately detected. The attack vector meant it infected a lot of other things.
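A toy simulation can illustrate the deception described in the published accounts: reported telemetry stays nominal while the actual process drifts into a damaging regime. The speeds and drift rate below are invented; this is a cartoon of the reporting on Stuxnet, not a reconstruction of it.

```python
# Toy model of spoofed telemetry: the operator's console shows normal
# readings while the actual process drifts out of its safe band, so the
# resulting failures look like ordinary technical errors.

import random

NOMINAL_RPM = 63_000
SAFE_BAND = (60_000, 66_000)  # invented tolerance band

actual_rpm = NOMINAL_RPM
for hour in range(1, 6):
    actual_rpm = int(actual_rpm * 1.03)  # compromised controller quietly over-speeds
    reported_rpm = NOMINAL_RPM + random.randint(-200, 200)  # replayed "normal" telemetry
    stressed = not (SAFE_BAND[0] <= actual_rpm <= SAFE_BAND[1])
    note = "  <- accumulating damage" if stressed else ""
    print(f"hour {hour}: operator sees {reported_rpm} rpm, actual {actual_rpm} rpm{note}")
# Operators see normal readings throughout, so the eventual failures look
# like the "typical technical errors" described above.
```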
Cyber espionage
Fin: I see. So we’re talking about cyber, and you mentioned there’s really two parts to this. The first is cyber espionage, signals intelligence. Would it be right to say that on balance, the fact that, I suppose, the entire world is more open and vulnerable to various kinds of cyber espionage means it might be harder to carry out some kind of covert project, whether it’s nuclear or otherwise?
Carl: I think so. If intelligence agencies know where to look, they have a lot of capabilities to get a hold of information. But part of it depends on whether that information reaches decision-makers in a timely way and whether it’s actionable.
Fin: Yep. And I guess it’s kind of on that point, you know, it’s worth noting that, like you mentioned, some very important intelligence has just come from open sources such as academics rather than governmental intelligence.
Carl: Yeah. I mean, a lot of this stuff was hidden in plain sight. One of the ways to determine which countries were pursuing dangerous dual-use nuclear research was to look at the publications by key scholars because to get academic advancement within these countries, they would need to publish.
So they would publish their research on this. And if you knew which journals to look in, you could find the scholars, and then you could create a network map of which scholars were working with which people. And that helped uncover some of the risky research being done by Iraq, Iran, and other countries.
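Here’s a minimal sketch of that kind of co-authorship mapping. The names and paper titles are invented; a real analysis would add topic filters, institutional affiliations, and far more data.

```python
# Minimal co-authorship mapping: link scholars who publish together in
# journals of interest. Titles and names are invented for illustration.

from itertools import combinations
from collections import Counter

papers = [  # hypothetical (title, authors) records from open journals
    ("Neutron multiplication in fast assemblies", ["A. Karimi", "B. Rahimi"]),
    ("High-speed rotor dynamics", ["B. Rahimi", "C. Saleh"]),
    ("Detonation hydrodynamics review", ["A. Karimi", "C. Saleh", "D. Omid"]),
]

edges = Counter()
for _title, authors in papers:
    for pair in combinations(sorted(authors), 2):
        edges[pair] += 1

for (a, b), n in edges.most_common():
    print(f"{a} -- {b}: co-authored {n} paper(s)")
# Clusters of repeat co-authors around sensitive topics are the leads an
# analyst would then investigate further.
```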
Fin: Okay. Got it. So that is the cyber, espionage, and intelligence side of things. I guess when we’re talking about cyber, the other part we might worry about is the more direct risk from the possibility of hacking into a nuclear weapon system. Yeah. And triggering it. Yeah. I’m actually completely in the dark here about how real that risk is.
Carl: Yeah. You’re not alone. I mean, I think that nobody knows the answer to this as it pertains to every nuclear-armed state. And the people who know about the cyber resilience and vulnerabilities of their own systems are not going to talk about them. So it’s all speculation. Now we do know in 2013, the Defense Science Board, the United States Defense Science Board, claimed that all weapon systems, including nuclear weapons, were capable of being digitally penetrated. So this is something that has been publicly revealed, and that was 9 years ago. Now they went ahead, I’m sure, and patched those specific vulnerabilities, but it shows that this can happen even in very sophisticated nuclear weapon states. And as the US continues to upgrade its arsenal and its nuclear command and control from 1960s and 1970s technology to today’s technology, that creates new vulnerabilities unless you’re really careful. And so you need to be cautious about everything from the very chips you use to which technicians have access to parts of the facility. And, there, you know, there were specific vulnerabilities that were revealed around the US submarine force. I think these things have since been patched, but there are a lot of known unknowns out there.
Fin: I see. Am I right in thinking that there are, at least in some sense, air gaps in place, let’s say for US command and control, and that these vulnerabilities are more sideways vulnerabilities, rather than these nuclear weapon systems just being connected to the Internet?
Carl: Yeah. They’re not connected to the Internet, and the goal is for them not to be connected to the Internet.
Fin: For sure.
Carl: But there are still ways into air-gapped systems.
Fin: Yeah. This is a bit of an incidental comment, but I think there are interesting lessons here for thinking about AI risks in general. Right? Yeah. Where you have these kind of obvious, somewhat naive solutions, like placing an air gap. But then you forget that technicians need to go and install things, and that someone can maybe have some influence over which chips get inside these weapons.
Carl: Exactly. Unless you control your entire supply chain, your entire maintenance operation, there are going to be vulnerabilities. And in principle, you can do that, but it’s very expensive.
Fin: Yep.
Carl: And in the US government, contracts are awarded to the lowest bidder. And in other countries as well, there are opportunities to give a contract to the defense minister’s brother-in-law. Right? So
You may find that countries do not always take the most secure approach they can.
Fin: For sure.
Carl: If you were to design a command and control infrastructure from the ground up, where your only goal was that it would fail safe every time, that you would never have any false alarms or false launches, and that you would have maximum reliability, that system would look very different from the systems we have in place today. These are systems of systems built over time with considerations of cost in mind. They were built with considerations of interoperability, because you have nuclear command and control that relies on components also used for conventional warfighting and conventional command. So we know there are failure points and risks within this system, and we tolerate them because it would be very costly to redo the whole system with maximum reliability in mind. I hope it never matters, but if there ever is an acute nuclear crisis, it’s likely to take place in the context of a conventional war in which systems are going to go down. They’re going to go down because adversaries will attack your communications infrastructure, your satellites, and your radar. They’ll be in your cyber systems. So in an intense conventional war, are we going to be able to rely 100% on the nuclear deterrent only being used when it’s supposed to be used? I hope the answer to that question is yes, but I don’t think anyone has confidence in that answer. Anyone who tells you they know for sure is speculating. I think in the case of the United States, this is a country that has spent billions of dollars to upgrade this infrastructure. Probably, it’s pretty strong infrastructure. I am especially concerned about China, Russia, and North Korea. If their command and control breaks down and they launch a nuclear weapon inadvertently, or a commander who is not authorized to do it makes a mistake and jumps the gun, that is a problem for everyone alive on this planet. We are only as safe as the weakest link in that chain. I think the solution is not to get into crises with nuclear-armed states and to try to reduce our reliance on nuclear weapons in every case.
But we are headed towards an increasingly competitive world, with US-Russia, US-China, India-Pakistan, US-North Korea. These are all nuclear dyads where there’s going to be a high level of competition and perhaps war. We should do everything in our power to ensure that if that conflict happens, it doesn’t go nuclear.
Near misses and points of failure
Fin: Okay, so for context for listeners, this is the first part of the second half of the recording, so we stopped and started again. If I’m remembering right, where we left off, we were talking about near misses and points of failure, and the various ways in which something can go wrong and lead to the unintended use of nuclear weapons. One pretty obvious case study in near misses is the Cuban Missile Crisis. So I figured we could kick off there.
Carl: Yeah, I think that’s a great place to start.
Fin: Super.
Carl: And I think what’s great about the Cuban Missile Crisis is that we have a lot of documentation on it. So we can go back and understand what the participants knew at the time, and then we also have more information that was learned later—things that the participants didn’t know. Because of that depth of analysis, you can actually see the different potential points of failure that occurred during that crisis, and that unfortunately are likely to replicate themselves if we have future crises.
Fin: Mhmm. Got it. And was it like there was some kind of buildup to a single moment of heightened risk? Or was it more like multiple different…
Carl: Yeah. So you had this buildup. In 1962, US imagery analysts discovered suspicious activity. They realized that the Soviets were deploying missiles to Cuba. Once those missiles were operational, they could hit much of the East Coast, including Washington, in a matter of minutes. So this is a crisis. The US was fortunate that it detected those missiles in time, and that it also had intelligence that they weren’t yet operational. So there was this very brief window to act. Kennedy gathered his advisers together. They called it the ExComm, or the Executive Committee of the National Security Council. Basically, all of his advisers recommended a military response, either a preemptive military strike or a full-scale invasion of Cuba.
Kennedy was nervous that this would spiral into a nuclear war, and he actually didn’t take the advice of his military advisers and of the ExComm. Instead, he ordered a quarantine, or an embargo, of Cuba that would prevent further materiel from arriving at the island. Yep. And so this is where you often hear about the 13 days of the Cuban Missile Crisis.
This crisis played out over that time period. The Soviet ships approached that embargo line, and ultimately, they turned back. So at this moment of truth, Khrushchev blinked, as the narrative went, and the missiles were removed. We later learned, of course, that the US had struck a secret deal to remove the Jupiter missiles that were deployed in Turkey, which, from the Soviet perspective, had precipitated the whole crisis.
Fin: Sure.
Carl: And so that’s the story that we knew for 40 years. But it’s not the whole story. After the fall of the Berlin Wall, the Soviet archives opened up, and historians scoured newly released documents. When I was at Carnegie Corporation, we had a grantee, the National Security Archive. They held an oral history workshop, in which they presented documents to people who had been there in the room, and they captured firsthand recollections. Oh. And what we learned was that the crisis was actually worse than we had thought. During the crisis, Kennedy had said that the risk of a nuclear war was somewhere between 1 in 3 and even.
Fin: How do we know that? That he’d said that?
Carl: We have tapes of the whole thing.
Fin: This is important.
Carl: The White House conversations were recorded. And because it’s 60 years in the past, those were released. You also have various accounts, like that of Robert Kennedy, which was very much a self-serving account of what transpired in those 13 days.
It painted him and his brother as the voices of reason during the crisis. We know that’s not entirely the case. Actually, it was Adlai Stevenson who was one of the key voices of reason throughout this. We have a lot of information about what happened, including from the Soviet archives. One of the things we learned from the Soviet archives is that they actually had nuclear weapons on the ground that were active and ready to be used. You’ll recall the crisis was precipitated by the longer-range nuclear missiles that could have hit the United States. Those were not yet operational, but the local tactical short-range nuclear weapons were on the ground. They were operational. They were in the hands of the Soviet commanders on the ground, and they would have been used against a US invading force if Kennedy had authorized the invasion that his security advisers had recommended.
Fin: Okay. So that is one way the stakes were, I guess, higher than we had thought, in light of what we learned decades after the Cuban Missile Crisis.
Carl: And do you know about the submarine?
Fin: I was actually going to ask you about that. Please, yeah, please tell me.
Carl: Okay. So this is wild. You have this Soviet fleet that’s trying to break the embargo. Accompanying them are diesel-powered submarines, several of them, each armed with a nuclear-tipped torpedo.
This was not known at the time. As they approached the blockade, the US destroyers started dropping depth charges to force them to the surface, to enforce this quarantine.
Fin: Got it. And they did not know that the nuclear-tipped torpedoes were on board when they were dropping depth charges.
Carl: Exactly. Yeah. These submarines were cut off from all surface communications, so they had no idea what was happening above them. The captain of the submarine believes that a war might have already started, so he actually orders the use of the nuclear torpedo.
Fin: And he believes that a war might have already started because, I guess, the only thing he’s seeing is a bunch of depth charges around him. No other information. Kind of reasonable to presume. Yeah.
Carl: Yeah. Normally, there are two officers on a Soviet submarine, and both need to agree before the nuclear torpedo can be launched.
So the captain has to get authorization from the political officer, who agrees to launch the torpedo. The reason the torpedoes weren’t launched is something of a historical fluke, which is that the commodore of the fleet, Vasily Arkhipov, happens to be on board this particular submarine. Because he’s the commodore, he outranks the captain, so the captain also needs his approval. Vasily Arkhipov does not approve the launch and says they should wait. That’s why those nuclear-tipped torpedoes were not used in the conflict.
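As an illustration of the authorization logic in this story, here is a toy model in Python, not an actual naval procedure: a launch requires unanimous consent from every required approver, so adding one more approver who says no is enough to block it.

```python
# Toy model of a unanimous-consent launch rule. Illustrative only;
# the role names come from the story above, not from any
# documented Soviet procedure.

def launch_authorized(approvals: dict) -> bool:
    """A launch goes ahead only if every required approver concurs."""
    return all(approvals.values())

# Ordinary case: captain plus political officer both approve.
print(launch_authorized({"captain": True, "political_officer": True}))  # True

# With the commodore on board, he becomes a third required approver,
# and his single dissent blocks the launch.
print(launch_authorized({
    "captain": True,
    "political_officer": True,
    "commodore": False,
}))  # False
```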
Fin: Wow. So to say that back: Savitsky, the captain of the submarine. Mhmm.
He wanted to use these nuclear torpedoes and gave that order. Ordinarily, that would have been enough for the order to be carried out. But by some fluke, it turned out that in this case he needed to seek approval from his superior, who was by chance also on that submarine. This person is Vasily Arkhipov, and he decided to contradict the order and not fire the torpedoes. So, okay, a lot of questions. Maybe one is, and maybe it’s an obvious question: in the event where the torpedoes had been fired, what would have happened?
Carl: Well, they would have taken out one or more of the US destroyers in the first use of nuclear weapons since 1945.
We don’t know what would have happened next. Thousands of US service members might have lost their lives, and there would have been incredible pressure to retaliate. It might have escalated, or it might not have.
Fin: For sure. But I guess it’s reasonable to assume that, given tensions were so high, there was a not insignificant chance of escalation.
Carl: That’s right. And I think you need to be careful with any of these near-miss stories, right? Because after the fact, people might have an incentive to make it seem like they came closer than they did or that they didn’t come as close as they did. There are political pressures that could cut both ways. And hindsight is always 20/20. So we should be skeptical of all of these stories. The ones that are more credible to me are those backed by evidence. And the archival evidence is pretty strong on some of these points. I’m actually not sure of the strength of the archival evidence on the Arkhipov story because I think it relies on the actors recounting this story.
Fin: Do we know any other near-miss stories or potential near-miss stories from the Cuban Missile Crisis?
Carl: Yeah. So we know that an invasion would have been pretty devastating and would likely have resulted in the use of nuclear weapons.
But it’s interesting that this supposedly safer option, this quarantine, also had this risk related to the submarine. And there were other risks too during these 13 days. Okay. So we know, for example, at the height of the crisis, an American U-2 surveillance plane piloted by Rudolf Anderson Jr. was shot down.
Fin: Oh, okay.
Carl: After that happened, Kennedy’s military advisers called for airstrikes against Cuba’s air defenses the following morning.
And this had been a line that they had drawn. They had said, don’t shoot down our planes or we’re going to retaliate. The president correctly suspected that Khrushchev had not authorized the shoot down of the plane. And so he continued to push for diplomacy. This was an unauthorized use of anti-aircraft weapons that could have triggered an invasion and could have triggered nuclear war.
Fin: Okay. I actually know much less about this incident. Yeah. It sounds like there’s a lot of similarity to the Arkhipov story, with the roles switched. Were Kennedy’s advisors pushing for retaliation? Was Kennedy a kind of lone voice against that?
Carl: So they had drawn this line. They had drawn a red line.
And the line was crossed. So the default would have been to retaliate. A lot of Kennedy’s advisors at this point actually wanted to go in and invade Cuba, and they found the Soviet actions there to be a useful pretext. You’ll recall that about a year and a half before, the US had attempted the ill-fated, ill-conceived Bay of Pigs operation, in which the US supported Cuban dissidents and exiles to try to kick Castro out.
It failed miserably. It was humiliating. But for many of the security folks in the room, this was an opportunity to finish the job. So they were looking for any provocation to go in.
Fin: Yeah.
Carl: A lot of people at the time felt like the Soviet nuclear weapons were not operational, and there would be no retaliation. Basically, we would be calling the Soviet bluff, and there would be no consequences in doing so. If that rings a bell, it’s because we’ve seen this dynamic play out in other crises, including the recent crisis in Ukraine, in which Putin has made a nuclear threat. He said, I’m not bluffing. If you cross certain lines, we will use nuclear weapons. And a lot of people are saying, well, that’s just bluffing because in the end, he wouldn’t dare to cross that line. The consequences would be too great.
Fin: I see.
Carl: This is the nature of nuclear brinksmanship. There’s a lot of uncertainty. So you can see the patterns are similar across many of these nuclear crises, and you have people on the ground operating with imperfect information.
Fin: Yep.
Carl: You have leaders operating under great psychological stress, under time pressure, under political pressure, operating on the advice of advisers who don’t agree with each other, and all of this is taking place in this fog of war or this fog of conflict.
Fin: For sure. Yeah. I was thinking about what lessons you might draw from those near-miss stories during the Cuban Missile Crisis. Maybe one pattern which could carry over to the present moment and future cases is, like you said, at some high level, you have this kind of brinksmanship between major players. You’re kind of calling one another’s bluff, ratcheting up various kinds of threats and tension along with it. Ostensibly, these are calculated moves on some big game board. Then, below that high level, on the ground, you have all this messiness and chaos. You have incidents where people just don’t have all the information they need. In the heat of the moment, there can be some provocation, which maybe gets misinterpreted. Rash decisions are made. So you have these two levels: one is the high-level discussions, and then
On the other hand, you have all of this chaos, which is almost like kindling for that tension to erupt into something worse.
Carl: I think that’s absolutely right.
You have uncertainty at the very top levels of decision-making, and you have uncertainty at the ground level, where local commanders are making decisions based on what they believe is the right thing to do given their orders.
Both of those things can go wrong.
Fin: Right. On one level, you could assume that no one’s making entirely rushed decisions. Everyone’s got roughly enough information. Even then, there are risks from this kind of high-level brinksmanship. And then you add in this extra level of, on the ground, people just don’t often know what’s going on.
Carl: Exactly. And there’s a third layer of risk, which involves technological risk.
Fin: Yeah.
Carl: That is less at play within the Cuban Missile Crisis, although you see elements of it in terms of inaccurate intelligence. But there is the possibility that systems go wrong.
Fin: Do you want to give maybe 1 or 2 concrete examples of these kinds of systems failures?
Carl: Yeah. A great book to read on this is Eric Schlosser’s Command and Control, and he does a really exhaustive job in documenting and telling really good stories about many of these. I’m not going to do them justice, but I would be happy to provide a link for your listeners.
Fin: Super. I’ll include that.
Carl: That’s great. One of the most discussed incidents is the Stanislav Petrov incident, in 1983, in which a newly deployed Soviet early warning system malfunctions and indicates that a nuclear war is underway. The colonel manning the system decides not to pass that warning up to his superiors, where, presumably, it would have been acted upon, because he believes the system is providing a flawed readout.
Fin: And that person is Stanislav Petrov?
Carl: Stanislav Petrov. It’s always hard to know with these stories how close we really were. That’s the caveat I would add here. But what I can tell you is that the systems and structures we have in place create opportunities for failure. And if we’re going to keep running this back over and over again, if we’re going to have more crises in the nuclear shadow, eventually, we’re gonna get it wrong.
I feel pretty confident that we can’t just keep rolling the dice and expect to always avoid nuclear war.
Fin: One thing I’m picking up is that, in the examples, at least the examples we’ve talked about, so Arkhipov and Petrov and this U-2 spy plane being shot down: in each of these cases, we avoided the worst outcome. But not because there was some system in place to make sure that these kinds of small risks don’t percolate up to the very worst outcomes. Instead, just because the right people happened to be there at the right point in time and used human judgment.
Carl: Yeah. We should not be relying upon that. We need systems that fail safe all the time.
And the risk of miscalculation or technical error, the fact that we could create a civilization-threatening catastrophe this way, is just so foolish. If we survive this nuclear age, we’ll look back on this and say, what were we thinking?
Fin: Yep. For sure. I mean, it’s easy to look back on something like the Cuban Missile Crisis and think that that is crazy if you hadn’t lived through it or indeed the Cold War. And then, I guess there’s some reminder that fundamentally the threat hasn’t gone away. The level of risk might go up and down, but the fundamental cause is still here.
Carl: Yeah. And I think that one point I wanna make is that nuclear risk during peacetime is very different than nuclear risk in war or in conflict. And I think the chances that we will accidentally have nuclear use out of the blue are very, very low. But if NATO and Russia ever went to war, if the US and China ever went to war, during the very first stages of a conventional conflict, both sides would be seeking to blind and confuse the adversary.
They would be taking out radars, communication systems, and satellites in all likelihood. And our situational awareness depends on those factors.
Moreover, there’d be incredible stress on leaders. Right? So if the US were at war with China and all of a sudden China starts sinking aircraft carriers with thousands of sailors on board, the pressures for escalation are going to be intense. And it’s fine to devise systems in the abstract that can withstand the pressure of a crisis. But when you’re actually in that crisis, everything changes. And we’ve seen so many cases over the history of war in which one side or another blundered…
Because of a psychological failure, because of a failure to understand the situation, because of a technological failure. You think about Ukraine, right? Russia made a huge miscalculation in prosecuting that war the way that it did. A lot of people thought that Russia would never go in, because it was foolish in many ways, but Putin did it anyway. And you could say the same thing about crossing the threshold to nuclear use. You could look at it and say, well, obviously, no one would ever come out a winner in a nuclear war. So why would you fight one? But leaders make mistakes all the time. And there are certain psychological tendencies that we’ve observed again and again in human behavior. One of them is gambling for redemption. Right? So prospect theory…
Carl: Kahneman and Tversky. People feel losses more strongly than equivalent gains. And so if you get involved in a conflict that you’re losing, you might take a shot to try to redeem yourself. Double down. Escalate. Exactly. And that is one circumstance in which I could imagine nuclear weapons being used.
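For readers who want the formal version of that asymmetry: in Tversky and Kahneman’s prospect theory, outcomes are evaluated by a value function that is steeper for losses than for gains. The parameter values shown are their widely cited 1992 estimates.

$$
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \\
-\lambda\,(-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25
$$

With λ greater than 1, a loss is weighted roughly twice as heavily as an equal gain, which is what can make a risky bid for redemption look attractive to a leader facing a sure loss.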
Fin: That’s a nice point. I guess that’s related to this idea of sunk cost fallacy. Maybe you feel like you’ve invested so much that you’ve got to keep spending until you get a payoff.
Carl: Yeah.
So I think, if you want to avoid nuclear war, the number one thing you can do is avoid war in general. Avoid a great power conflict with high stakes.
And that means finding ways to reach accommodations with countries that you don’t agree with on all kinds of things. Right? If you find yourself in that war, you better hope that you have well-designed systems and safeguards, so that your nuclear systems fail safe 100% of the time.
Fin: For sure. So we were talking about points of failure. We were discussing how they can be failures of systems, and they can also be failures of human decision-making at pretty much any level. I figured maybe we could go through some other examples from history, which aren’t so much examples of near misses or potential points of failure, but are more just examples of contingency in the history of nuclear weapons. So maybe a point we can start with is to go back in time, a decade and a half or so before the Cuban Missile Crisis. There was this discussion about what’s now called the, if I’m pronouncing it right, Acheson–Lilienthal Report. Yeah. Which then became the Baruch Plan. I actually don’t know much about this at all, but it sounds like a big deal. So could you take me through what was going on there?
Carl: Yeah. So this is not an area of expertise for me. Sure. But it was an attempt after World War II to manage the threat posed by nuclear weapons.
And at the time, people felt that we had crossed some threshold. Even though the nuclear weapons at that time were relatively rudimentary compared to those we have today, they had unleashed a new order of magnitude in destructiveness, and people could see where this story would lead if we didn’t get a hold of it. So there were pretty serious talks from the start between the US and the Soviet Union about how to limit the spread of nuclear weapons. There was talk that the US would put its nuclear weapons under international control
Fin: Mhmm.
Carl: And the Soviets would avoid developing weapons of their own and that this could head off what people saw as a likely nuclear arms race.
Fin: What does it mean to put your weapons under international control?
Carl: Yeah. Well, that was the problem. Right? The United Nations had just been established. It had very little capacity, and neither side trusted the other. They had been allies of necessity during World War II, but
It was clear that they were going to be staring across the lines of this iron curtain for many years to come. And so you would need an incredibly intrusive inspection and monitoring system in order to ensure that the weapons didn’t spread, that the technology didn’t spread, and that people didn’t have secret programs. It was probably a doomed effort, because we know that the Soviets already had espionage in place, and just the existence proof that nuclear weapons could be developed
Was enough to generate a lot of interest in them.
And the crash Soviet program succeeded relatively soon after, to the surprise of the United States and its allies. So, yeah.
Carl: But one interesting thought experiment we might run is, you know, what if nuclear weapons were developed at a different time? Not in the wake of World War II, but in an era where you have greater transparency, monitoring, and verification.
Fin: Uh-huh.
Carl: Satellite imagery and, you know, a full suite of scientific capabilities that just didn’t exist at the time.
Fin: Mhmm. And this is because, I guess, there was a limit on how easy it was to verify that the US, in this case, was going to stick to this very bold plan to hand over control of its nuclear weapons. If verification were easier, then maybe it would have been more feasible.
Carl: Yeah. The Soviets never believed the US would carry through with its pledge, and the US never believed that the Soviets would actually refrain from developing nuclear weapons.
So there was an incredible trust hurdle to overcome.
Fin: It’s so interesting, though, that that is a hurdle which, in principle, could be overcome with technology, right? If you had a demonstrably effective way of verifying these kinds of agreements, that would make the agreements easier to stick to.
Carl: Exactly. And it’s not like there were a lot of facilities around at that time. And there was no civilian nuclear energy. That’s one of the big risk factors as we know, and that wasn’t even in place at that point.
Fin: Mhmm. Yeah. It’s a fascinating moment of contingency, as far as I can tell, as long as there was any chance that it could’ve happened. If this plan had been successful, you would have this case, which I guess is unique as far as I can tell, where the United Nations, this kind of representative of the entire world, has some kind of overall control of this hugely destructive technology. And like you mentioned, it was probably doomed. But you could imagine tweaking some historical variables so that it was a bit more feasible and could have happened?
Carl: Yeah. Yeah. We tend to look at the world as it is and assume that it could not be any other way. But when you look at the history of the nuclear age, there are all these moments of contingency in which you see the possibility of another path not taken, sometimes for the better and sometimes for the worse. So I’ll give you one example. We know Nazi Germany was the first country in April 1939 to start work on nuclear weapons.
Now, to put this in context, that meant they had a three-year head start on the Manhattan Project. And the Manhattan Project only took three years to complete. Ultimately, the Nazis failed because they chose a particularly difficult approach to building the weapons, and because the war, which started just a few months later, sucked up all the necessary resources for the nuclear weapons project. But I don’t see that failure as inevitable. If, for example, they had chosen a different path and encountered early success, they might have started to get more resources. Or if Germany had not invaded Poland until a bit later, they might have found themselves in sole possession of nuclear weapons as well as the V-2 rocket. Yep.
So how might that have changed the outcome of the war? And another thought is, how might that have changed the way that we view nuclear weapons after the conclusion of the war?
So there’s this historical irony that the Nazis started humanity’s pursuit of nuclear weapons, but it was ultimately the use of nuclear weapons by a democratic power that was viewed as ending the war. So nuclear weapons emerged on the world stage as part of this arsenal of democracy, rather than as the project of a fascist state with a death wish.
Fin: Yeah. Wow.
Carl: And if that story had unfolded differently, I think we’d think about these devices very differently.
Fin: Yeah. It’s a really great example. Can you say anything more about, as far as we can tell, why the Manhattan Project succeeded as quickly as it did? Was it just a case of luck in choosing the direction?
Carl: Yeah. A lot has been written on this. And I think they had a really incredible team in place with just the top physicists of the world, as well as Leslie Groves, who was a general and could manage the operations really well.
And also, the US was not distracted in the same way that Nazi Germany was by its pursuit of the war. The US also benefited tremendously from the exodus of Jewish and other scientists who were fleeing the war. That gave the US a huge advantage, and tremendous resources flowed into the project. It’s still absolutely staggering to me that they were able to keep this project a secret for 3 years.
Fin: That is extraordinary.
Carl: Yeah. You can’t keep anything secret these days, it seems.
Fin: Yeah. That is wild. I hadn’t really thought about that. I mean, when we’re thinking about contingency, you know, you can imagine tweaking different dials on history, right? And think how things go differently, and you can appreciate how close we are to some very different worlds. One dial I was thinking that you could tweak is a question like, well, how easy is it to develop nuclear weapons in the first place? And it turns out that we’re in a world where it was a fairly tricky technological nut to crack. And even when you crack it, you need quite a lot of infrastructure, like access to fissile material or to manufacture it, and so on. I don’t know. You could imagine it being a little bit easier to develop nuclear weapons.
Carl: Yeah. I mean, there are the laws of physics that bound this question, right? But then there are all kinds of engineering questions that could make the development of nuclear weapons harder or easier. One of them, for example: the way that most countries in the past 30 years have sought to develop nuclear weapons is to use the commercial technologies for making nuclear fuel to produce fissile material for the bomb. And because these are commercial technologies, because these are centrifuges, the centrifuge plans that AQ Khan was distributing were from the Urenco facility in the Netherlands, run by an international consortium. That was a really sophisticated design, because it was built with sensitivity to cost.
You wanted to produce nuclear fuel at the lowest cost, so the centrifuges there are very sophisticated. They spin really fast, and there are very low tolerances for any kind of error. Right? Now, there are other ways to enrich uranium. There are gaseous diffusion plants, and there are enrichment facilities that use shorter centrifuges that don’t have to spin as fast. If the AQ Khan network had been spreading around those plans instead of the more sophisticated Urenco design…
Fin: Okay. I see.
Carl: It’s quite possible that more countries would have had success in getting their hands on nuclear weapons.
Fin: That does make sense. Absolutely. Yeah. I guess I was imagining worlds where it’s easier to develop nuclear weapons. And I guess there’s a sense in which that question is kind of determined by physics, so it’s a bit weird to imagine changing physics. It is a bit of a silly question. But you can imagine this in the sense of: what about future destructive technologies, which could be more accessible or easier to develop and…
Carl: Absolutely.
Fin: Then there’s a sense in which the development of nuclear weapons is a kind of story we might draw lessons from for when we do this again with the dial turned up to 11.
Carl: Yeah. I think if you look at the first 77 years of the nuclear age, they provide a case study in a particular type of technology that poses an existential or catastrophic risk to humanity.
And the future doesn’t need to follow that pattern. If you look at biotechnology, for example, we can’t simply carry over the analogies from the nuclear age when it comes to trying to control the types of engineered pathogens that might be hugely dangerous to humanity.
In the next 100 years, nuclear materials may become much easier to produce as we get new types of advanced manufacturing. If it’s possible to 3D print centrifuges that work perfectly off some set of blueprints, you could imagine any country with access to raw uranium being able to build a bomb. Mhmm. I don’t know if that’s the world we’re going to live in.
Fin: It’s interesting that people talk about biotechnology and occasionally risky kinds of AI. They draw analogies to nuclear weaponry, and they say things like, well, imagine, you know, nukes, except you can build it in your garage in a couple decades’ time. But it sounds like you’re saying, well, imagine nukes, but easier to develop. That’s also a…
Carl: It’s one possible future. It’s one possible future. And I think that one of the biggest obstacles to developing any of these technologies is the human capital, having people who understand the mechanics—in the case of nuclear, the nuclear physics of it. In the case of bio, the life sciences. It’s really hard to do because you’ve got to put together a team with the right competencies. Artificial intelligence is going to make all kinds of things easier in the next 20 years, including things that ought not to be so easy.
Carl: And so if you are able to have your scientists and technicians supported by AI systems, you might need a much smaller team, a team with fewer competencies, and you might be able to do it more secretly.
Fin: That’s an interesting point. I guess the framing there, to say it back, is that at least before we develop this kind of transformative AI, the fact that these destructive weapons technologies require a lot of human capital is a kind of fortuitous fact, because it limits the speed at which you can possibly develop them. If we have AI that can substitute for that kind of human capital, then that slow development doesn’t look like a necessary limit anymore. Or at least that looks like a possibility.
Carl: That’s right. The key bottlenecks for nuclear are the fissile material
Fin: Mhmm.
Carl: And the technical expertise.
Fin: Mhmm.
Carl: And anything that reduces those bottlenecks creates pathways to developing bombs even by smaller countries and potentially non-state actors.
Fin: Yep. And since we’re talking about these lessons from history, here’s a really obvious one that occurs to me. I’m curious what you think about it. I guess there are at least some cases, especially when we’re thinking about international agreements like the Acheson–Lilienthal plan, where you basically just need to be early in the development of the weapons technology to have any shot at some kind of agreement. Right? So that plan was drawn up at a time when one actor in the world had developed this technology. It fell through, but that was the only chance there really was for that kind of plan to work. And then the Non-Proliferation Treaty.
Carl: One of the key principles here is that it’s much easier to refrain from an activity you’re not yet doing than it is to stop doing something you’re already doing. Once you’ve invested a lot of time, money, effort, and in some cases, political capital in making something happen, it’s hard to stop doing that thing. That’s why we should be really careful about crossing into new technologies that might be risky because it’s harder to walk them back. It’s better if you can exercise restraint beforehand.
Fin: That’s an excellent point. And it sounds to me like a point which doesn’t just apply to nuclear weapons.
Carl: Absolutely.
Fin: Okay. So we’re talking about some examples from history. I’m kind of curious to cover a couple more, if that’s okay.
Carl: Yeah. Right.
Fin: So very quickly: a while ago you mentioned that the development of nuclear weapons represented this kind of access to a new order of magnitude of destructive capability. And I guess part of the nuclear story is that, let’s say, 6 or 7 years later, that happens again, when we develop the hydrogen bomb. I don’t really understand what that means for thinking about nuclear risk. So I was wondering if you could say something about what that extra step looks like and what it means for us.
Carl: The development of the hydrogen bomb represents the crossing of another threshold.
Carl: Because for the initial, relatively simple nuclear devices that were built in 1945, there are pretty strict limits on the yield you can get. It’s limited by the amount of fissile material you can acquire, and it’s a really difficult engineering challenge to push that beyond a certain level. What happens in 1952 is a new type of nuclear weapon, called the Super or the hydrogen bomb, which uses the fission reaction of a primary nuclear warhead to ignite a much larger secondary fusion reaction.
Fin: Okay.
Carl: And this is a case of really bringing the power of the sun down to earth. In practice, engineers realized that they could essentially scale this up indefinitely. There were no longer limits on how large you could make a nuclear device, right? So if you think about the device that was used against Hiroshima, that was 14 kilotons, 14,000 tons of TNT. It was, by some accounts, more than all of the conventional weapons used in World War II up until that point. It was an incredibly large detonation. But a few years later, they were able to build warheads that were a thousand times more powerful than that.
Fin: Wow. Yeah.
Carl: There are weapons in the US arsenal today that are roughly 100 times more powerful than the bomb used on Hiroshima.
Fin: This is maybe a bit of a side note, but I’m assuming the hydrogen bomb was tested in 1952, as well as built. And
Carl: Yes. Yes.
Fin: If so, was that the first time that humans had triggered a fusion reaction?
Carl: To my knowledge, yes.
Fin: So that’s also just a really significant moment in human history, right? And also in the history of Earth. I’m assuming that’s the first time a fusion reaction was triggered on Earth in its entire history. It’s a real kind of playing God moment, if this needed to feel any more serious.
Carl: It’s amazing how many of these fundamental discoveries have been achieved in the context of waging war or preparing for war.
The 1986 Reykjavik summit
Fin: Okay. So since we were doing some kind of whistle-stop tour of, let’s say, contingent moments in the history of nuclear weapons, let’s jump ahead, I don’t know, 3 decades or so, to this summit in Reykjavik, which I actually only learned about quite recently. So this is Reagan and Gorbachev meeting to discuss arms control. Can you, yeah, take us through what they were aiming to talk about and what eventually came out of those discussions?
Carl: Reykjavik is so interesting because you have these two characters, Reagan and Gorbachev, meeting in a very simple and direct setting. I’ve been to this Höfði House in Reykjavik
Fin: Oh, wow.
Carl: And it’s not very ornate or elaborate. They occupied this same house together. In fact, there were so few meeting rooms that one of Reagan’s conferences with his advisers took place in a bathroom, because that was the only place they felt they could meet privately.
Fin: Wow.
Carl: So Reagan walked in, and all of his advisers were in the shower. He looks over, goes, sits on the toilet, and says, “I’ll take the throne.”
And so they had 3 days of negotiations face to face, really coming off the height of the Cold War, to try to figure out if they could reach an accommodation.
You have to remember, the story of Gorbachev was not yet what we know now. He was relatively unknown, and people didn’t know whether he would be a reformer or a hardliner in sheep’s clothing. He and Reagan had this tremendous rapport, actually. They saw eye to eye on so many things. Over the course of a few days, they were able to reach agreement on a few things. First, that a nuclear war could not be won and must never be fought.
A really important insight. Reagan, it appears, really took the threat of nuclear war seriously. It troubled him.
Fin: And you can read his diary, right? Yeah. You get this impression that it’s a real, personal issue that’s weighing on him.
Carl: Yeah. To the extent, even, that some of his advisers started to worry: would he be tough enough if a crisis were actually to come? There were a few movies in particular that affected him, including “The Day After.” He wanted to get rid of all nuclear weapons. He saw them as fundamentally illegitimate. The analogy he used on several occasions was that if there were an alien invasion, all of humanity would come together to unite and defeat these invaders, right? It was a strange thing to say, for many of his advisors. They asked, “Why are you saying this?” They tried to take it out of his speeches, and he put it back in again. He raises this point again with Gorbachev in Reykjavik and says, “Why don’t we just get rid of them all?” Gorbachev basically agrees but says, “If we do this, you need to get rid of your missile defense program.”
Fin: Mhmm. And this is the Star Wars program?
Carl: This is a program that doesn’t work, that even the people who are working on it don’t believe is likely to work, and yet it has become something of an article of faith for Reagan and his supporters. There’s this negotiation that ensues, in which Gorbachev says, “Okay, we’ll allow you to keep these weapons, but they need to be confined to the laboratory,” whatever that means.
Fin: Yep.
Carl: In the end, the US side does not agree to this, and Reykjavik falls apart. The summit ends, and this opportunity to agree to a joint elimination of nuclear weapons, which both of them had agreed to in principle, falls apart on some of these particulars.
Fin: I remember reading about this, like I said, for the first time only a few weeks ago. It’s so disappointing to read, because it feels like we really did come very close to an agreement. It wasn’t, for instance, as far as I could tell, that Gorbachev would never have agreed to this kind of total disarmament and that the whole missile defense sticking point was just a bluff.
I got the impression that Gorbachev, like Reagan, really cared, agreed that a nuclear war could not be won, and was entirely open to this idea. So it’s probably just genuine contingency. Yeah.
Carl: I think if it was up to just those two men, it would have happened.
Fin: Mhmm.
Carl: And it’s hard to say what would have happened next, because then you would have brought in all of the military and scientific advisers to determine, okay, how do we actually do this thing? It’s really hard to roll these arsenals back in a verifiable way and eliminate these weapons, right?
It’s not an easy task even if you have agreement on it. But it’s one of those moments where even if they hadn’t been able to eliminate nuclear weapons, an agreement to move in that direction could have pushed us so much further away from the brink.
Fin: Sure. Yeah. What a moment in history. Okay. So for context, we’re now in part 3 of the recording, in case we sound a little different. And if I remember, we were going through some moments of contingency in the history of nuclear weapons. And I guess we’re bouncing around a bit. It’s a bit non-chronological. But there was one more example I wanted to ask about, which is this story about how after Hiroshima, there was a journalist called, I think, John Hersey, who wrote about Hiroshima. Can you say something about that?
Carl: Absolutely. This is a really interesting story, and there’s a good book about it. It came out in 2020, by Lesley Blume, and it’s called “Fallout.” Maybe you could link to that at some point.
Fin: Sure. Sure.
Carl: It’s just this remarkable story of how one reporter, John Hersey, revealed this government cover-up about the full consequences of the Hiroshima bombing. Now, Hersey had been an award-winning war correspondent, and he was writing for the New Yorker. He went to Hiroshima after the bomb had been dropped. This was a time when Japan was occupied by the US. It was administered by General Douglas MacArthur, and all the reporting from the city was tightly controlled. But Hersey wanted to get out and find out what had really happened. So he actually faked a stomach illness to sneak away from his minders
Fin: Okay.
Carl: And to talk to the survivors. He put together this incredible reporting, telling the human stories behind the bombing. It ended up being this 40,000-word exposé in the New Yorker. This was a time when the New Yorker wasn’t known as a serious publication. It was seen as a humor and culture magazine. And it’s a wild story how they kept it under wraps. The editors prepared a dummy issue as a decoy, to avoid people learning about the real issue before it actually appeared on newsstands.
Fin: And what does that mean? So, like
Carl: They had a whole separate issue of The New Yorker with articles, cartoons, and everything else that the rest of the staff at The New Yorker thought was going to be released. Instead, one day, it showed up on the newsstand, and it was like nothing that had ever been published before in journalism because it was just one story.
Fin: Wow. No cartoons. Nothing else. 40,000 words.
Fin: And who was in on this at the New Yorker?
Carl: Yeah. So it was the lead editor. I think his name was John Ross. And then William Shawn was the editor who was working directly with Hersey. Okay. And I’m sure a few other people as well.
Fin: Yeah. Sounds a very…
Carl: A small number. Courageous move. Yeah. Exactly. And so why is this important? Well, the story sent shockwaves, because it showed that the previous stories about Hiroshima weren’t telling the whole story.
So contrary to government figures, over 100,000 people had died. And perhaps most significantly, it revealed for the first time the nature of radiation sickness. The previous authorized versions of the story had focused on the size of the blast, but not on the human consequences or the way that nuclear weapons are different from conventional weapons.
And one of the ways is radiation sickness. People suffer, and it has this long tail of consequences; it’s really horrific stuff. And this caused public opinion about nuclear weapons to start to change. And the story had to be approved by the US military.
Fin: Oh, wow.
Carl: And it was actually Leslie Groves, who was the general who oversaw the Manhattan Project, who approved the story with a couple of minor changes, in part because he understood the risk that nuclear weapons might someday be used against Americans on US soil. He wanted this story to get out to show that these weapons are not just large explosives, that there is something fundamentally different about them.
Fin: Mhmm. But it sounds like there are at least some figures in the government or military who preferred this story to be covered up.
Carl: Yes.
Fin: Maybe it’s a naive question, but I’m curious about why exactly.
Carl: Well, I think they wanted the weapons to be usable. And this is a time when the Cold War was already ramping up, and the US had a monopoly on these weapons. They wanted to tell a story of a heroic victory, in which US military might and science had managed to defeat this adversary. The inconvenient parts, such as the level of suffering by civilians in Hiroshima and Nagasaki, were not part of that story. They wanted to make sure that the US would not be self-deterred from using these weapons again, in the case of a conflict with the Soviet Union.
Fin: Got it. I didn’t know that Leslie Groves signed off on the story. It’s quite interesting that it was maybe one of the people closest to the weapon’s development. Yeah.
Carl: And I think it’s one of these incredible stories of how journalism can change the world. Right? And you think about an alternate world in which Hersey didn’t go out and do that reporting, and in which Ross and Shawn didn’t decide to green light that story, and in which Groves didn’t sign off on the copy of the story that was to be published.
It’s a world in which we don’t know about radiation sickness, at least not right away. I think eventually we would have learned something about it. But we wouldn’t have learned about the human toll of these weapons nearly as soon.
And you wonder how the arms race might have played out differently under those circumstances.
Fin: Absolutely. It seems overwhelmingly likely that we would learn about the true story eventually. But it does seem to me like there’s something distinctively important about writing a thoroughly researched, widely read, almost canonical retelling of what happens that basically everyone can trust, and doing that shortly after the events.
Carl: Yeah, I think that’s absolutely right.
Fin: So we’re on this kind of whistle-stop history of events in the history of nuclear weapons. Let’s race forward, then. I think we should start approaching the present day, and then we could start talking about the future. Something I just saw, I can’t remember where: this event in the early nineties where Bush removed tactical nukes from something. I hadn’t heard of that before. I wonder if you could say more about that.
Carl: Yeah. So in 1991, George H. W. Bush, Bush senior, decides, as the Soviet Union is collapsing, that we don’t need to have as many tactical nuclear weapons, and he pulls them off all the US surface ships basically overnight, through an executive order. And this is the single biggest reduction in deployed nuclear warheads. It’s done unilaterally. It’s done with very little external input. And this is another case of a single individual and his staff making a very consequential decision, and one that I think has left us safer. Happy to go into that more if you’d like.
Fin: Yeah. I mean, maybe just one question is, what exactly are tactical nukes? I have a vague picture, but…
Carl: Ah, that’s a big question, and we could talk about it for hours. There’s no straightforward answer. But tactical nuclear weapons, or non-strategic nuclear weapons, usually refer to systems that are designed to be used on a battlefield in military situations. They’re defined in current treaties based on their range rather than their yield. They do tend to be smaller yield, but not necessarily. Some tactical nuclear weapons are very small by comparison with other nuclear weapons: for example, 300 tons of TNT. Now, that’s still a pretty big explosive. Think about the bomb that was dropped on Afghanistan, the MOAB, the Mother of All Bombs, the biggest conventional explosion ever. That was about 11 tons of TNT. So we’re still talking about something roughly 30 times larger than that, though small compared to other nuclear weapons. And then some tactical weapons are very large, going up to around 300 kilotons of TNT, which is basically 20 times the size of the Hiroshima bomb.
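To keep those orders of magnitude straight, here is a quick sketch in Python. The yields are the rough conversational figures used above, not precise specifications.

```python
# Rough yield comparison, all in tons of TNT equivalent. These are
# approximate figures from the conversation, not official numbers.
yields_tons = {
    "MOAB (largest conventional bomb)": 11,
    "small tactical nuclear weapon": 300,
    "Hiroshima bomb": 14_000,
    "large tactical nuclear weapon": 300_000,
}

hiroshima = yields_tons["Hiroshima bomb"]
for name, tons in yields_tons.items():
    print(f"{name}: {tons:,} tons TNT, {tons / hiroshima:.3g}x Hiroshima")
```

Run it and the quoted ratios drop out directly: the small tactical weapon is roughly 27 times the MOAB, and the large one is roughly 21 times Hiroshima.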
Fin: Okay. That is confusing. I was really hoping it would just be really easy, like, it’s a difference in size.
Carl: No.
Fin: Okay. On TNT equivalents, I mean, yeah. Maybe I can link it if I can find it on the internet somehow. But I stumbled on this article about trying to get some intuition for what a ton of TNT actually means, and then what a kiloton of TNT equivalent actually means. Just because, you know, I hear that phrase, and I can’t really figure out how to map it onto anything in normal experience. I found that really useful. And then the article at the end tried to roughly figure out, and this was when nuclear stockpiles were larger than they are now, how many tons of TNT-equivalent explosive force there was in all the world’s nuclear stockpiles per person. And it’s like a large conventional bomb, you know, one that could destroy a city block or something like that, per person in the world. It really put things in perspective. I mean, maybe I can, yeah, try to look at that.
Carl: Yeah. That sounds like an explainer someone should write if it hasn’t been done yet.
Fin: Yeah. For sure. I mean, when you get into these kinds of orders of magnitude, you’ve already got this destructive thing and then you’re multiplying it by a thousand, and then by a hundred again. I’m just losing track of what that actually means. But okay.
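A rough version of that per-person arithmetic, with loudly approximate inputs: peak Cold War stockpile yield is often estimated very roughly in the range of 15 to 20 gigatons of TNT equivalent, and world population in the late 1980s was around five billion.

```python
# Back-of-the-envelope: TNT equivalent per person at the Cold War peak.
# Both inputs are loose assumptions, chosen only to show the arithmetic.
total_yield_tons = 15e9    # ~15 gigatons TNT equivalent (rough estimate)
world_population = 5e9     # world population, late 1980s (approximate)

tons_per_person = total_yield_tons / world_population
print(f"~{tons_per_person:.0f} tons of TNT equivalent per person")
# ~3 tons per person: on the order of a very large conventional
# bomb for every person alive at the time.
```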
Carl: So I think it makes sense to go back to why these tactical nuclear weapons exist. Most of them were designed and developed at a time when weapons were much less accurate. Right. So if you wanted to have a chance to really destroy your target, you might need a nuclear weapon to do that. And what we started to see, especially demonstrated in the early 1990s, was that precision-guided conventional weapons could take out targets that had previously been thought to be vulnerable only to nuclear weapons. And that was something of a revolution in the way we understand military power.
Fin: This is maybe a bit weird or cynical to say. But is there a sense in which that is a kind of welcome development? Because I guess, the more precise you can be with your weapons, the less collateral damage you’re causing
Carl: That’s right.
Fin: To non-strategic targets.
Carl: Advocates of these programs argue that. They argue that greater precision allows you to wield a scalpel instead of a hatchet and to be able to take out military targets without harming civilians. In principle, this is true, but we also see that highly accurate weapons can be used to cause incredible harm.
Fin: Yeah, I mean, I guess if you have a scalpel, you’re gonna use it more than a hatchet, right? So it cuts both ways.
Carl: Exactly, exactly. And also, you can see Russia, for example, using precise strikes to hit hospitals, to take out electrical grids, to really impose misery and horror upon the people of Ukraine.
Fin: Mhmm. Yeah. It seems not at all clear how we should think about that development. Okay. So racing ahead to very near the present day now. Yeah. I wanna talk about the Treaty on the Prohibition of Nuclear Weapons, from 2017. How is that different from the other treaties that we’ve talked about, the nonproliferation treaty in particular?
Carl: So we’ve talked about the Nuclear Nonproliferation Treaty, known as the NPT. Mhmm. And that’s the treaty that prevents the spread of nuclear weapons, promises peaceful use of nuclear energy, and codifies five states that are allowed to retain nuclear weapons.
The TPNW is a different approach. This is sometimes known as the ban treaty. Essentially, what it does is outlaw the use of nuclear weapons and the threat of nuclear weapons. It prohibits their development and possession. It bans the transfer of these weapons. You can’t station them on your territory. You can’t deploy them. Basically, anything related to nuclear weapons, you can’t do it if you sign on to this treaty.
Fin: Uh-huh. And how much of the world has signed up?
Carl: Yeah. So about 140 countries support the TPNW, have either signed it or have said they plan to sign it. Okay. And so that’s about two-thirds of the world right now in terms of the number of countries.
If you look at the population of those countries, it’s probably about half the world’s population.
Fin: Okay. Got it. And I guess I want to know how serious this is. Because I guess, you know, you can sign any treaty you want to make a statement about your intent not to use a nuclear weapon. But how is it enforced or verified if that ever becomes relevant?
Carl: Yeah. So that’s the big problem: this treaty lacks verification and enforcement mechanisms. So on the surface, it doesn’t matter. Right? No state with nuclear weapons is going to join anytime soon. The nine nuclear-armed states and their allies boycotted the negotiations. They pressured other states to abandon the treaty. And each of those states has nuclear modernization programs that are going to stretch on for decades. So there’s no sign that this will lead any country, at least in the short term, to give up its nuclear weapons. The skeptics of the treaty claim that it’s actually worse than irrelevant because it’s going to accentuate tensions, undermine collective action on nonproliferation, diminish alliance cohesion, and potentially establish an alternative to the NPT. They argue that we should not take any steps that might undermine the NPT because this is this bedrock agreement that for 50 years has helped limit the spread of nuclear weapons.
Fin: Got it. You mentioned it might undermine cooperation. Did you have something in mind there?
Carl: I think that, for example, enforcing nonproliferation requires collective action. Right? And if enough states start to see the nuclear nonproliferation treaty as illegitimate, the argument goes, they might not stand behind nonproliferation measures. You can tell that I’m a little skeptical at this point.
Fin: Right?
Carl: Because I think these objections are overstated. We’ve always seen that collective action on nonproliferation is a challenge, with or without the ban treaty. And the treaty was pretty carefully drafted not to conflict with any existing nonproliferation obligations.
Fin: Got it.
Carl: Including the Nuclear Nonproliferation Treaty.
Fin: I see.
Carl: I am sympathetic to the broader concern, which is the treaty doesn’t do much to reduce short-term risks. But that’s not really the point.
Fin: Okay. Well, I’ll ask about that. So I can see the case pretty clearly that this treaty is not really doing much or at least not likely to do much in the short term. It’s not gonna cause nuclear states to give away their arsenals.
Carl: Right, not in the short term. But large-scale interstate conflict has become less prevalent than it once was, and the hope is that over time, nuclear weapons will become less relevant as we find alternative ways to resolve disputes and address security concerns.
Fin: Right. That makes sense. The idea is to create a world where nuclear weapons are seen as unnecessary and outdated, fostering a culture of peace and cooperation.
Carl: Exactly. It’s about shifting the norms and expectations around how states interact and resolve conflicts. By setting a clear goal and creating a framework for disarmament, treaties like this can help move us in that direction, even if it’s a long and challenging process.
And over the decades there have been people outside government who have shaped the discourse around nuclear weapons policy. So civil society, including philanthropy, has had a significant impact over the years.
And if you fast forward to today, philanthropy continues to play a crucial role. There are organizations and foundations that focus on reducing nuclear risks and advocating for disarmament. They fund research, raise public awareness, and support policy initiatives that aim to reduce the reliance on nuclear weapons.
Fin: Yeah, and it seems like there are different approaches that philanthropy can take, right? Some might focus on the technical aspects of nuclear weapons, like verification and monitoring, while others might focus on the humanitarian impact or the legal frameworks around disarmament.
Carl: Absolutely. There’s a wide range of strategies that philanthropic organizations can pursue. Some focus on the technical side, supporting research on verification technologies or on the implications of emerging technologies like cyber warfare. Others might focus on the humanitarian consequences of nuclear weapons, working to highlight the catastrophic impact of their use. And then there are those that focus on the legal and policy frameworks, advocating for treaties and agreements that aim to reduce nuclear arsenals.
Fin: Right. It’s a complex landscape, but it seems like there’s a lot of room for impactful work.
Carl: Right. Historically, you had thinkers outside government who created concepts that went on to be used for the past 70 years. A lot of times, these are people outside of government who have deep expertise, whether technical or political, and can make real contributions. But there are also moments in which citizens, citizen activists, and journalists can shape what’s possible for policymakers. Think about the test ban treaty, which was responsive to public concerns about nuclear testing. Nuclear weapons are not just this alternate domain, kept out of sight, in which the public has no say.
Fin: Yep.
Carl: But civil society, I think, has a really important role to play, and there are a lot of cases in which philanthropy helped to create pathways to reduce nuclear risk.
Fin: Got it. So let me try to say that back. It sounds like there are lots of things you can do outside of a government. You can make contributions to the cutting edge of how we think about strategic considerations. As a scientist in civil society, you can raise the alarm and bring dangers to people’s attention that they weren’t aware of. More generally, you can just raise awareness, if you’re a journalist, for instance, as with the John Hersey story. You could also protest and petition your government to make policy changes if you think the current track is dangerous. So that’s generally what you can do outside of government. But how does that translate into deliberate philanthropy? How do we think about spending philanthropic money to make those things happen?
Carl: Oh, so I’ll give you a few specific examples. In the 1980s, foundations set up scientist-to-scientist exchanges: US and Soviet scientists meeting to discuss possibilities of arms control and risk reduction.
As the Soviet Union started to fall, the American scientists who were in touch with their Soviet counterparts reported a problem they were hearing: there was no money left for the sorts of controls the Soviet Union had on fissile material. Scientists were going unpaid, guards were going unpaid, and this posed a tremendous proliferation risk.
Fin: I see.
Carl: You could see this incredibly controlled infrastructure within the Soviet Union coming apart at the seams.
The risk was that some of these scientists would go work for the highest bidder or that fissile material would go missing. It was a really serious threat, but one that was not yet recognized in the US government. Because of these scientist-to-scientist exchanges, they were able to raise this alarm and start some initial pilot programs to demonstrate what could be done.
Fin: And by exchanges, you mean US scientists speaking to Soviet or ex-Soviet scientists and vice versa?

Carl: That’s right.

Fin: Got it. Okay. And then what happened?
Carl: They had these knowledge networks that they had built over the years, with real levels of trust. Scientists speak the same language, right? US scientists and Soviet scientists had the same technical problems and some of the same bureaucratic problems.
Fin: Yeah. And stupid question: who set up these exchanges, and when?
Carl: The foundations started funding this work in the 1980s, once there started to be a little bit of opening and a little bit of daylight in the Soviet Union, and Gorbachev allowed it.
Fin: Okay. Got it. So what did these exchanges lead to, philanthropically speaking?
Carl: They led to some pilot programs, which eventually led to the Nunn–Lugar Cooperative Threat Reduction Program. This encompassed a variety of threat reduction programs in which the US worked closely with the former Soviet states. US taxpayer dollars went to programs to help secure vulnerable materials, to replace gates, to put in modern material protection, control, and accounting, and to ensure that scientists stayed employed, so they wouldn’t seek employment in North Korea, Iraq, or Iran, and wouldn’t try to sell highly enriched uranium on the open market. These programs also encompassed biological and chemical weapons; the Soviet Union had a massive secret bioweapons program.
Fin: I think I remember Andy Weber and others talking about his involvement. Incredible story.
Carl: He was there on the ground in Kazakhstan, securing some really vulnerable, dangerous material.
Fin: So there were presumably former Soviet scientists who, before then, didn’t have a clear option. It probably looked like working for the highest bidder, which wouldn’t necessarily have been the best place to work. What did they do once that plan was implemented, or at least what did many of those people do?
Carl: It depends. There were a variety of different overlapping plans. Some of them worked pretty well, others didn’t work that well. But for the most part, we do know the results, which is that very few of these scientists went on to work in other places. We don’t know of significant quantities of nuclear material that went missing. There are a couple of cases of minor theft, and we do know some of these scientists ended up working in other countries. But for the most part, the program seemed to be pretty successful at a pretty low cost compared to US defense spending. Dollar for dollar, I think this was one of the most successful programs the US has ever pursued in terms of security. It required cooperation, deep cooperation between these two former adversaries. It wasn’t easy. There were lots of legal and political constraints. But they found ways to cooperate and reduce this risk.
Fin: Okay. So it sounds like the role of philanthropy there was to fund the pilot programs that came out of these exchanges between scientists, and then the US government eventually piled in and scaled it up into a much larger program. Coming closer to today, what does the funding picture look like for nuclear philanthropy? Who were, and who are, the major philanthropic funders?
Carl: Well, there’s not a lot of funding, unfortunately. There was a recent report by the Peace and Security Funders Group that put the total figure for all philanthropic funding on nuclear weapons issues at around $50 million a year.
Fin: Uh-huh.
Carl: And that’s just not very much money in terms of philanthropy.
Fin: Oh, yeah.
Carl: If you think about climate change philanthropy, it’s estimated at about $10 billion per year. So we’re talking 200 times as much money, and that’s just the philanthropic money going to climate change mitigation; it doesn’t count any of the corporate investments, etc. So compared to the scale of the problem, there are really relatively few people working on this.
And the Carnegie Corporation of New York, where I worked previously, is one of the leading foundations here, and I think they do great work.
Fin: Yep.
Carl: Until recently, the MacArthur Foundation was the largest funder in the world.
Fin: Okay. I see. So yeah. I mean, I heard that the MacArthur Foundation recently withdrew their funding that they were spending on nuclear philanthropy. What happened there? Why did they do that?
Carl: I think it’s a curious decision, in part because they hired evaluators to take a look at their nuclear work, and those evaluators found significant impact. But they also found that there was, quote, no line of sight to achieve the goal that MacArthur had set for itself, which was a very ambitious goal of eliminating fissile material. They had framed their nuclear challenges program to be really bold and to conform to a model of placing big bets in philanthropy. I think that approach makes sense for some issues, but it’s hard to make a big bet on nuclear policy, because success on this issue is so contingent on factors outside the control of any foundation. In my view, the program was discontinued not because it wasn’t working, but because it didn’t meet the framework that the board set.
Fin: I see.
Carl: I want to be clear here, because it sounds like I’m criticizing the MacArthur Foundation. They have been the most generous funder of this cause, and they achieved so much through that generosity. There’s a really dedicated team at MacArthur that has worked with grantees and provided a three-year capstone phase of funding to support the field. In general, I think they’ve gone about things the right way. It’s just unfortunate that this issue didn’t fit the new framework of placing big bets. There’s a need for organizations and philanthropists to make a commitment that’s not contingent on being able to show year-over-year progress, because, ultimately, nuclear policy is in the domain of governments. When civil society is effective, it’s often at the margins or in framing the issues that governments work on, and it’s very hard to take credit for any successes.
Fin: Mhmm. And was that the issue with MacArthur, where it was just increasingly hard to point to really concrete outcomes to the board or whatever? And that just made it much harder to justify continuing along their specific kind of goal, which was getting rid of fissile material. Is that the story?
Carl: I think it’s part of the story.
And if you think about philanthropic impact, there are a few different ways to measure impact, right?
The one that’s easy to measure is if something wasn’t happening and now it’s happening, and that’s an improvement in the world. That’s a great thing. You could take credit for that, right? If there’s a good trend in the world, and you can accelerate that trend, that’s a good thing. You could take credit for that. On the other hand, if you’re facing a really hard challenge and things are getting worse…
Fin: Yep.
Carl: Your investment might slow the rate at which they’re getting worse, or they might keep really bad things from happening. The world is getting more dangerous, but the counterfactual in which you didn’t invest philanthropically is even worse.
I think that’s the world we found ourselves in and the world we find ourselves in now. As a board, it’s harder to disaggregate the impact of your work when you’re seeking to slow or reverse a negative trend.
Fin: I see. Got it. There’s this phrase, I’m sure it gets attributed to about a dozen people, something like, “there’s no limit to what you can achieve if you don’t mind who takes credit.” And I guess it’s something similar here. What matters when we’re talking about impact is the counterfactual, specifically the counterfactual between you doing what you’re doing and you not doing that. Is the world better with you doing it? Often, that does not look like some really obvious big win. It just means that the world is less bad in some respect. It sounds like that is a real challenge when you’re trying to demonstrate to a board or some body of people that you’re making progress. Progress is often less obvious than you might hope.
Carl: Absolutely. I think, specifically, what philanthropy in this space can do is provide an audit of conventional wisdom and keep policymakers and governments accountable, ensuring that the policies we’re pursuing make sense in terms of what they cost and what some of the second and third order consequences might be.
Fin: Mhmm.
Carl: Philanthropy also does an important job in terms of keeping lines of communication open.
There are times when countries don’t want to talk to each other. The political risks of open negotiations are really high, and philanthropy can create alternate channels. Think back to the conversations between US and Soviet scientists. The US government would have had a hard time going in there and having conversations with the directors of the labs at that point. The Soviet leaders who were in charge of those labs were not going to open up their doors to the State Department or the CIA, right?
But they would be willing to talk to fellow scientists, and that communication was able to reveal information that was helpful to both sides. Similarly, when we were at a really dark point in US–Iran relations, track two diplomacy, these scientist-to-scientist and former-government-adviser-to-former-government-adviser dialogues, helped identify what a solution might look like.
Fin: Mhmm. Just the pilot program model again. Okay.
So MacArthur withdrew their funding. Sounds like they were the largest funder when it comes to nuclear philanthropy. We can dwell on why they made that decision, but I guess we could also ask the question, why aren’t there five times more funders in the space, or at least five times as much money? It seems kind of confusing to me because it’s not as if nuclear security isn’t a weird or esoteric or controversial issue. Do you have an impression of what’s going on there?
Carl: I think most philanthropists just don’t know how bad the situation is and don’t realize how important the marginal dollar in this space can be. I think that’s a function of a general assumption, which is that nuclear weapons are a relic of the past. This is a problem that we’ve solved. Some people have that view, right? And then other people have a view that this is just unsolvable. There’s nothing that philanthropy or civil society can do on this issue. It’s hopeless, right?
I think both those views are wrong. If you look at the track record of philanthropy, for a relatively small amount of money, these non-governmental organizations, these experts and activists, have been able to hold accountable governments under certain circumstances. They’ve been able to create channels of dialogue that turn down the temperature on some conflicts. Nuclear weapons are with us, and I believe that the risk of nuclear war has increased, in part because of Russia’s invasion of Ukraine, but also as a result of technological trends and an increase in geopolitical conflict more broadly.
I think we are waking up to this problem that’s been lingering just below the surface. We are in a new era of nuclear risk, and the weapons are still here. They never went away. The numbers came down, but there are new risk vectors that we’re facing. The world of philanthropy has moved on, and that’s really alarming to me because we need top-quality analysis of how technological change and geopolitical change is shaping nuclear risk. We need to find some solutions.
Fin: I mean, it sounds like at least a big part of what is going on here is some kind of novelty bias about the problem that a philanthropist might choose to spend on. I guess, in part because we have avoided large-scale nuclear war, the problem is less shiny now. It feels as if it belongs to history, to the Cold War and the Cuban Missile Crisis and so on. That’s kind of sad if that’s true, right? Because you really do want philanthropic spending to be sensitive to these kinds of biases.
Carl: It’s really been 40 years or so since we had a nuclear crisis. You really have to go back to the 1980s to find a time when we were so worried about nuclear war between the US and Russia. You can look at more recent crises with North Korea, and I think people were concerned about those. But this is a different world we’re living in now.
And so you have the people who are making decisions at these key institutions, both in government and on the boards of philanthropic organizations, who have never seen a nuclear crisis before and are just not aware of the level of risks that we continue to run. So I think that’s part of the challenge: people are focused on the new challenges that have come along, and they deserve our attention. But we can’t take our eye off this nuclear threat, which never went away.
Fin: Yeah. That makes total sense to me. So I guess, if we’re trying to address that problem, one thing we can do is to raise awareness about the fact that the risk is still with us, if it is. But it also seems important that there are actually plans that philanthropic money could be spent on to, at least, feasibly reduce nuclear risk. I’m curious to know what kinds of ideas are going around these days which could just be enabled by more philanthropic money.
Carl: Yeah. I think our strategy starts with this view that this is a multi-generational challenge.
And we want to avoid being encumbered by a business-as-usual approach. Our first task, I think, is to fund research that helps us understand the current drivers of nuclear risk. We’ve seen all these big changes in geopolitics and in technology, but governments and NGOs have been slow to update. Some of the work will focus on looking at what levers offer the greatest return on investments and, alternatively, which developments we can safely ignore because they’re overhyped. The second thing is to really look at novel and practical approaches to rebooting arms control and to prevent the development and deployment of the highest risk weapon systems. We could talk a little bit about what the characteristics of those high-risk systems are.
Fin: Yeah. Sure. Why not? I don’t really know what to think about that.
Carl: Yeah. Nuclear weapon systems have different characteristics. The ones you want to avoid are those that compress decision-making time and leave room for mistakes.
Fin: Uh-huh.
Carl: Systems that require delegated launch authority, so you have more fingers on the button, and systems that are ambiguous as to whether they are nuclear or conventional systems.
These are the types of systems that are starting to be fielded in Russia, China, and the US, and it gives me a lot of concern. Take nuclear-armed cruise missiles, for example. Cruise missiles are low-flying missiles that can hit multiple different targets. When you see the launch of a cruise missile, you don’t know what its destination is going to be, so that creates a problem: it has target ambiguity. They also have payload ambiguity, so you don’t know whether that weapon is carrying a conventional warhead or a nuclear warhead. When you have these systems out in the field, you can see a launch and not know whether this is a potentially nuclear weapon headed for a decapitation strike on leadership or just an ordinary conventional system. Avoiding the development of those kinds of systems is really important.
Fin: Got it. It’s very interesting. I think that theme has come up a couple of times in our conversation.
This idea that ambiguity in the weapon system creates a kind of risk because if you are uncertain what kind of threat you’re facing, then you might retaliate when you really didn’t need to. Yeah. Any other kinds of points on this idea of reducing the riskier kinds of weapons?
Carl: Yeah. Systems that threaten the survivability of an adversary’s nuclear arsenal tend to be destabilizing because they will force an adversary either to develop more weapons or to delegate authority for the use of those weapons or to shorten their decision-making process.
Fin: Okay. What would be an example of that kind of weapon?
Carl: Well, a highly precise weapon with a very short time of flight, or one that is stealthy, combined with a targeting system that allows you to identify where an adversary’s nuclear weapons are. Those things are all inherently destabilizing, as are missile defenses. In general, defenses sound like a good thing, but if you have missile defenses, that is going to be threatening to the other side, because they will fear that you might be able to strike first and retreat behind your shield. So even defenses are inherently threatening.
Fin: Got it. And are there particular kinds of missile defense that seem worth focusing on?
Carl: Well, I think that the current US missile defenses are sufficiently limited that they don’t actually pose a threat to the survivability of the Russian or Chinese arsenals and should not be seen as a threat. I think we’ve talked a little bit about the fear that Russia and China have that the US could take a leap forward in the effectiveness of these defenses.
Fin: I see.
Carl: And I think one area we might see that would be in space-based missile defenses. By putting interceptors in space, you could increase the efficacy of your missile defense system, but then you trigger a whole other competition in space in which the other side now has systems that they are designing and deploying to take out your space-based assets.
Fin: I see. Anti-satellite weapons, I guess.
Carl: Yeah. Exactly.
Fin: How exactly would space-based missile defense work? Are you intercepting ICBMs as they kind of go out of the atmosphere?
Carl: Yeah. There are a few different versions of this, and no one has really made it work yet. But you could imagine lasers in space, which would require a big power source. There are different approaches, but this is stuff that the US is pulling off the shelf and investigating again.
Fin: Interesting. Also, just as a side note, I noticed that we haven’t really talked about space-based nuclear weapons, as in just launching a weapon from orbit rather than from land out of the atmosphere and back into the atmosphere. Does that seem like it could happen?
Carl: There’s a treaty, the 1967 Outer Space Treaty, that prohibits the stationing of nuclear weapons in outer space.
Fin: Uh-huh.
Carl: So as long as that’s in place, it’s not something that’s likely to be pursued. It’s also not especially effective. It was investigated and not seen as an especially viable option.
Fin: I see. But that treaty doesn’t cover missile defense systems?
Carl: Right.
Fin: Got it. I see.
Carl: There’s concern that China might be developing something called a fractional orbital bombardment system, or FOBS, or another called a multiple orbital bombardment system, or MOBS. This would involve placing payloads in orbit and could shorten the time of flight for an incoming nuclear device. So that’s something I think more research needs to be done on. Is this something that China is actually pursuing? And how concerned should we be about the potential for this technology?
Fin: Got it. Okay. So we’ve been talking about plans that focus on limiting especially high-risk systems. Some examples are where there’s lots of ambiguity, like the payload ambiguity in cruise missiles. Also, systems that could take out an adversary’s nuclear capabilities seem especially risky, and so do defense systems that actually work, space-based defense systems being one example. So, yes, I wonder if there are any other examples you can give, different from those, of things you’re excited about philanthropy looking into.
Carl: So I think one question we should be looking at is early warning systems.
Fin: Uh-huh.
Carl: Right? We have all of these early warning systems, and the current architecture is not optimized for reliability. These are systems that were built up over time, basically systems layered on top of legacy systems: radar, satellites, communications, computer systems. And, again, they’re not optimized for reliability. There are a variety of trade-offs involved, including cost, speed of deployment, and how well these systems support non-nuclear operations, conventional warfighting.
I think that if the US were to take the approach to optimize for reliability of early warning systems, we would spend a little bit more money but have a much more reliable system. I’m talking about the US here, but what I’m really concerned about are the early warning systems that might be deployed in Russia and China. We just have very low visibility into this. So I think this is an area for potential research, analysis, and contribution. These systems are not going to change in the US unless legislators are aware of this problem and concerned about it. The systems in Russia and China are going to be what they are. But if the US better understands the limitations of those systems, it might be able to avoid taking actions that would create confusion and ambiguity for the Russian and Chinese systems.
Fin: I see. So the US might aim to have some better understanding of other early warning systems to avoid outcomes that are really ambiguous or seem especially risky.
Carl: Exactly.
Fin: Okay. I guess, dumb question, but an early warning system, is this the thing which detects that a nuclear weapon is heading your way and figures out what to do?
Carl: Yeah. So nuclear command, control, communications, intelligence, reconnaissance, surveillance, all these terms are sort of tossed into this big basket with all kinds of different acronyms. It’s about trying to understand what the reality is. Are you under attack and by what? And then communicate that to decision-makers in a really prompt and secure way so that the decision-maker, in the case of the US, the president, has as much time and as reliable information as possible to make the right call.
So it’s not just one thing, like a single radar system. And over time we’re going to see, and we’ve already seen, artificial intelligence and machine learning integrated into these systems in order to provide better resolution.
And if it’s done well, it will increase the reliability of these systems. The problem, of course, is that AI is notoriously hard to interpret; it’s not transparent. And it doesn’t work as well in domains where you don’t have a lot of training data. Right? So I’m concerned about the types of gaps we might have as we increasingly rely on decision support systems.
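To make that training-data worry concrete, here is a minimal sketch, with purely illustrative numbers, of the base-rate problem any early warning classifier faces: genuine attacks are vanishingly rare, so even a detector with high sensitivity and a tiny false-alarm rate produces alarms that are almost always false.

```python
# Illustrative Bayes calculation: why rare events are hard for alarm systems.
# Every number here is an assumption chosen for illustration, not a real estimate.

p_attack = 1e-6                   # assumed prior probability of a real attack on a given day
p_alarm_given_attack = 0.99       # assumed detector sensitivity
p_alarm_given_no_attack = 0.001   # assumed false-alarm rate (optimistically low)

# P(alarm), by the law of total probability
p_alarm = (p_alarm_given_attack * p_attack
           + p_alarm_given_no_attack * (1 - p_attack))

# P(real attack | alarm), by Bayes' rule
p_attack_given_alarm = p_alarm_given_attack * p_attack / p_alarm

print(f"P(real attack | alarm) = {p_attack_given_alarm:.4%}")  # ~0.1%
```

Under these assumptions, roughly 999 alarms in 1,000 are false, which is one reason US warning practice has long required “dual phenomenology”, confirmation from independent sensor types, before a warning is treated as real.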
Fin: Got it. You know the story about the AI system, which was trained to detect the presence of a tank in different photos?
Carl: Oh, tell me about this.
Fin: The details are very blurry, and I’ll link to a proper write-up; I also think it’s become a bit of an urban myth and the real story is a little more complicated. Anyway, the story is: a neural network is trained on pictures with tanks in and pictures without tanks in, and the hope is that eventually you can give it a new picture and it’ll spot tanks that are cleverly hidden, which is a strategically useful capability to have. So they train this network, and then they test it on a bunch of new photos it hasn’t seen, some of which the researchers know have tanks and some of which don’t. And it gets them all right, flawless. Good news, present it to the Pentagon or whatever. Only later did it turn out that the neural network was not looking for tanks at all. It was looking for photos taken on a cloudy day, and that’s when it said there would be a tank, or maybe it’s the other way around, because it just happened that all the example photos with tanks were taken on cloudy days, and all the others on clear days. And this wasn’t noticed until very late on. Okay, maybe it’s a bit of an oversimplified story, but this kind of dataset bias does happen, and it’s one example among quite a few of how something that seems robust at first can fail, and fail surprisingly late, because it’s not very transparent what’s going on inside.
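For readers who want to see that failure mode in miniature, here is a sketch with synthetic data; the features, names, and numbers are invented purely for illustration. A classifier trained on data where cloudiness is perfectly correlated with the presence of a tank latches onto the spurious feature, looks flawless on a test set with the same bias, and degrades toward chance on deconfounded data.

```python
# Toy reconstruction of the "tank detector" story: spurious correlation in training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

def make_data(confounded: bool):
    tank = rng.integers(0, 2, n)                      # ground truth: tank present?
    # In the biased set, every tank photo is cloudy; in the fair set they're independent.
    cloudy = tank if confounded else rng.integers(0, 2, n)
    tank_signal = tank + rng.normal(0, 2.0, n)        # noisy genuine feature
    brightness = 1 - cloudy + rng.normal(0, 0.1, n)   # clean spurious feature
    return np.column_stack([tank_signal, brightness]), tank

X_train, y_train = make_data(confounded=True)
X_biased, y_biased = make_data(confounded=True)       # test set with the same bias
X_fair, y_fair = make_data(confounded=False)          # deconfounded test set

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on biased test:", clf.score(X_biased, y_biased))  # ~0.99, looks flawless
print("accuracy on fair test:  ", clf.score(X_fair, y_fair))      # near chance
```

The model never needed the weak “tank” feature during training, so its apparent performance tells you nothing about deployment, exactly the opacity problem raised here for decision support systems.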
Carl: Yeah. And that’s fine if you have a system you can test out, find the flaws in, and correct. Right? But this is a system that, if it fails once, will have absolutely catastrophic consequences for humanity. And so it seems pretty risky.
And at the same time, I think it will be really seductive, because as AI is integrated into all aspects of warfighting, we will come to rely on it, and it will just be the way our systems work. There’s this old line that when AI stops being surprising, it just becomes software, right?
And all of our systems are going to be using AI in various forms. Some of it is more transparent and explainable than others. And so it’s going to find its way into nuclear command and control, whether we try to prevent it or not.
Fin: Are there any kind of more detailed or concrete suggestions for how to improve existing early warning systems?
Carl: Yeah. There are a few different proposals out there, and there’s not one in particular that we’re pushing for. One of the big changes we’ve seen in the past decade is the ability to send up a lot of small satellites at low cost.
And this can provide a level of redundancy and resiliency that would really strengthen awareness. There are some risks and costs associated with that as well.
Fin: Got it. And these are surveillance satellites? They’re looking back down to Earth and looking for signs of attack?
Carl: Exactly. And just to give it a name: proliferated low Earth orbit satellites. It’s one technology that we’d want to explore. And, certainly, the Pentagon is taking a look at this, as are others. But there’s always a role for outside analysis, I think.
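To give a rough sense of why “proliferated” matters, here is a minimal sketch with assumed numbers (the per-satellite coverage probability is invented for illustration): the chance that at least two satellites observe a given launch rises steeply with constellation size, which is where the redundancy and resiliency come from.

```python
# Illustrative redundancy arithmetic for a proliferated constellation.
# Assumes satellites are independent with identical coverage; real coverage
# depends on orbital geometry, sensor fields of view, and much else.
from math import comb

def p_at_least_k(n: int, k: int, p: float) -> float:
    """P(at least k of n independent satellites observe the launch)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

P_SINGLE = 0.05  # assumed chance any one satellite covers the launch site at launch time
for n in (4, 40, 400):
    print(f"{n:>3} satellites: P(>=2 observers) = {p_at_least_k(n, 2, P_SINGLE):.3f}")
# 4 satellites:   0.014
# 40 satellites:  0.601
# 400 satellites: ~1.000
```

The same binomial logic speaks to attrition: once n is large, losing a handful of satellites barely moves the answer, unlike a system built around a few exquisite assets.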
Nuclear philanthropy
Fin: Okay. Got it. And then, zooming back out again, are there any other ideas or plans that you’re especially excited about when we’re talking about nuclear philanthropy?
Carl: I think it’s really important to be open to international dialogue. And right now, there is very little happening between the US and China, and the US and Russia on the official level. On nuclear issues, China just doesn’t want to discuss anything with the US. And the situation in Ukraine is really bad, so all of the channels of communication with the US and Russia have been shut off. We know that there are some quiet back channels that are happening, which is encouraging. But more broadly, civil society can have a role in trying to create conversations about nuclear risk reduction even when it’s impossible to do that at the track 1 government-to-government level.
Fin: I see. And I guess maybe one example of that is the scientist-to-scientist communications that preceded Nunn–Lugar. Are there any other examples of what that could look like?
Carl: Yeah. There’s a number of cases of what they call track 2 diplomacy or track 1.5 diplomacy. Track 1 being official to official, track 2 being unofficial to unofficial, and track 1.5 being in between. It was really important in figuring out what the solution set to the Iran nuclear deal might look like. It’s been important in some of the subsequent discussions of follow-on arms treaties between the US and Russia. And so this is a model that’s applied in a variety of areas, including our non-nuclear threat reduction work.
And it’s hard to measure. It’s hard to attribute success in part because the ground rules of this are very secretive. But it’s important that there are these lines of communication.
Fin: Sure. I’m trying to imagine: if I’m a philanthropist and I really like the idea of this track 1.5 and track 2 diplomacy happening more often, in other words, people from different powers communicating, what can I do to make that happen more? How do I support that?
Carl: I think it’s hard to do it from afar. You kinda need to get into the weeds a little bit. Yeah. And that’s why groups like Longview Philanthropy and the Ploughshares Fund allow you to benefit from having some expert staff on board who know who the players are and can try to identify the best opportunities.
Fin: Okay. Cool. So we’re talking about concrete ideas in nuclear philanthropy. You mentioned researching drivers of nuclear risk. Then we talked about just reducing the risk of nuclear war this decade. Is there anything else you’re interested in working on?
Carl: I think more broadly, we want to see a shift in societal attitudes towards nuclear weapons.
We know that laws and values shape policy choices. And if we continue to see nuclear weapons narrowly through the lens of a national security issue, we are going to take steps that put us at risk. Ultimately, these devices pose a threat to our existential security.
And we need to start thinking about how to change the way that these weapons are seen. So we’ve talked a little bit, for example, about the treaty on the prohibition of nuclear weapons. That’s one way to do this. There are others as well. There are zones that are free of WMD, right?
Fin: Uh-huh.
Carl: There are laws that can be passed, and there is work that can be done to shift culture.
And so I think we’re going to take a look at a few of these different options. But I think there’s a pretty unique moment right now, which is Russia has invaded Ukraine, and they’ve made nuclear threats in Ukraine.
Fin: Mhmm.
Carl: And I think this provides an opportunity to push back against that and to strengthen norms against nuclear use and nuclear coercion.
Fin: Mhmm. Okay. So to try summarizing, then: first, research to really understand the drivers of nuclear risk, because it turns out we just don’t understand enough, given how much has changed. Secondly, reducing the short-term risk of nuclear war, especially the chance that the riskiest systems get fielded and used. And then shifting attitudes, which we were just talking about. Is there anything we’ve left off? Anything else that you’re focused on?
Carl: I’ll just say that there has been attention to nuclear issues in governments. But over the past couple decades, the focus has been on preventing the spread of nuclear weapons. So you see issues like North Korea or Iran sucking up all the oxygen in the room. And there’s also been a real focus on keeping terrorists and non-state actors from acquiring nuclear weapons.
And so large portions of the field are still focused on that issue set. And those are really important challenges.
Don’t get me wrong. But in some ways, we’ve lost sight of what distinguishes nuclear weapons, which is the potential use of nuclear weapons by states that have large nuclear arsenals.
The use of nuclear weapons by such states would have extraordinary consequences for humanity. Right now, 90% of the nuclear weapons in the world are in the hands of the US and Russia. We are in a period of transition because China is rapidly increasing its nuclear arsenal.
By the best estimates, we expect them to have over 1,000 nuclear weapons within a decade or so. That’s a major shift. You’ll have these very large arsenals in three countries. While the use of a nuclear weapon or two by North Korea would be devastating, it’s not going to have the kinds of civilization-threatening effects that a major nuclear exchange between the US and Russia or the US and China could have.
Fin: Yeah. And, Carl, zooming out of the actual content of the work, how is the challenge when it comes to nuclear weapons different, in your view, from fields like biosecurity or AI safety and AI governance?
Carl: I think biosecurity has a lot of similarities in that this is a technology that is dual-use. The worst-case projections for what we might see in terms of engineered pandemics pose a really grave threat to humanity.
The technology is advancing pretty fast, which means that the barriers to creating that kind of massive harm are reduced each year. This is an area that requires a lot of attention and investment. I’m really glad there is philanthropy dedicated to this. I wish there was more government funding, frankly. It’s a pretty sad state of affairs when you have a massive pandemic that just shows the tip of the iceberg of how bad things might be, and you still can’t get bills through Congress to do common-sense things and increase preparedness. That’s really depressing to me, but I have a lot of admiration for the work I’m seeing within the EA community and other aligned communities to try to think seriously about pandemic prevention. There’s a technical, scientific component to this, and there’s also a policy component. It involves international governance and corporate actions. In those ways, there are a lot of parallels to the challenge posed by nuclear weapons.
AI is trickier because we’re just on the cusp of this transformational technology, and there’s really no consensus about what the level of risk is, how fast it’s coming, and even whether there’s a risk at all. We know that AI is going to reshape the worlds of business and national security, but we don’t know how. There’s a lot more uncertainty. The error bars for what this threat might pose to humanity are much wider.
In some ways that makes AI more concerning, because on nuclear we have pretty clear error bars: we know that a nuclear war would be really, really bad, but it probably would not threaten all of humanity.
Fin: Switching gears a bit: how did you first get into this field?

Carl: I took a class that confronted me with all of these really important questions that I had never considered before. And I was just hooked. I wanted to learn everything I could about it. It felt like a really important issue that wasn’t getting enough attention. And so, that was the start of my journey into this field.
Fin: That’s fascinating. It’s amazing how one class or one professor can have such a profound impact on your direction in life. It seems like it was a pivotal moment for you.
Carl: Absolutely. It was definitely a turning point. I think there’s something about being exposed to these big, existential questions that just captured my imagination and my sense of responsibility. I felt like I needed to be part of the conversation and try to contribute in some way.
Fin: Yeah, that makes a lot of sense. It’s interesting to hear about the personal stories behind the work people do, especially in fields that deal with such significant and complex issues.
Carl: Definitely. I think having that personal connection to the work can be really motivating. It helps keep you going, even when the work is difficult or when progress seems slow.
It was like discovering a room in my house that I had never seen before. I knew about nuclear weapons, but I had assumed, like most people, that they were no longer such a big deal and that the problem was probably impossible to solve. Learning about this doomsday machine that we had assembled and continued to operate was a real wake-up call, and I realized that other people were not thinking about this issue. It seemed incredibly neglected, and an opportunity to have a consequential career. So I continued to study nuclear weapons; I went back to grad school and wrote my dissertation on nuclear policy. I did a lot of other things in between, but I kept coming back to this issue because, fundamentally, I’m an optimistic person. I look around at the changes in society, and I think there’s an incredible pessimistic bias towards everything happening in the world. But broadly, things have gotten better for more people over the past 4 generations, and most people have the opportunity to live a better life than their grandparents lived. That has been true for some time. One of the threats to that story is nuclear weapons: it’s one of the ways in which things could go terribly wrong in the course of 15 minutes. And we don’t need to continue to live this way. So for me, it seemed something worth working on, and I’ve continued to work on it for that reason.
Fin: Wow. That’s a great story. I had no idea that you’d been taught by Jonathan Schell, I should say that. I know that the book The Precipice was probably partly inspired by his book The Fate of the Earth. As far as I can tell, that was really the first book talking about these existential, civilizational questions raised by nuclear weapons, in particular about something like human extinction.
Carl: Yeah. I think it’s a really important book, and he was a great writer. At the time, we didn’t know as much about the science as we do now, so certain things in that book he gets wrong, understandably. But the broad principle holds: some technologies threaten not just people but the human story, and they threaten everything we care about for that reason.
Fin: Right, this full stop at the end of a story which could have carried on much longer. You also mentioned how life has gotten better, as a kind of motivation for making sure it can keep getting better. I kind of like that; it’s a more hopeful framing. Things can go terribly wrong, that’s still the case, but if they don’t, things can go amazingly well, or at least just continue to get better and better. And that’s also a motivation for avoiding these terrible outcomes.
Carl: Yeah. This interview has been something of a downer, because I’m always talking about things I’m worried about, and you must think I’m a really stressed-out person. I don’t wanna stress out your listeners either, because I think the risks of nuclear war are low.
And they are low because we’ve done a pretty good job of managing this technology. Think about it: 77 years living alongside nuclear weapons, given the destructive power of even a single one, and we haven’t seen a nuclear weapon used in war since 1945. The weapons have spread to only 9 countries. Name one other technology that old that has not spread everywhere. It’s a story about really good policymaking, about civil society, about international law. I feel like we are on a path to managing these risks and avoiding war, especially a war in which nuclear weapons are used. None of this is inevitable.
But we are much less safe if we stop working on these problems. We’ve sort of forgotten that this issue exists for some time. We need to revitalize the community of concern and the community of practice around nuclear risk reduction, and that would make us all safer.
Fin: This stuff is really complicated, right? But I’m getting a real sense of hopefulness, which is just the sense that it still seems possible to actually make things better. It’s not as if we’re simply destined for destruction; in fact, maybe the overall risk is relatively low over certain time frames, and we can make it even lower. The question is how much you can make things better, rather than “are we doomed?” or “is this not a problem at all?”, as if those were the only options, right?
Carl: Exactly. I feel like if we fall prey to that dichotomy, it’s not a helpful way to look at the world. Humans are inherently conflictual, and they’re inherently cooperative. We’ve found ways to manage our differences, and I think we’re getting better at it, in general.
The problem is that our capacity to do harm to one another has also increased so dramatically. So it’s a race. It’s a race between our tools to resolve conflicts and our tools to cause harm.
Fin: Got it. And it’s a race whose winner is not yet decided, so there’s work to be done. Yeah. Okay. A question we ask all our guests: can you recommend roughly 3 books, or anything else like films, for people who might want to learn more about everything we’ve been talking about?
Carl: Yeah. I’d be happy to recommend a few things. The first is this article and book, Hiroshima, that we’ve been talking about by John Hersey. It’s just an incredible piece of reporting. It’s beautifully written. It’s human, and you can access it freely now. The New Yorker has made it available online. So maybe you could provide the link to that.
Fin: Absolutely.
Carl: I think there are 2 really good accounts that are written by journalists about the nuclear age that give a very clear, concise, and readable picture. They tell great stories. One is The Bomb by Fred Kaplan, and the other is The Dead Hand by David Hoffman.
And if you’re looking for a very readable introduction to these issues, I can’t think of a better place to start. We’ve mentioned this book, “Fallout,” by Lesley Blume, which tells the story behind Hersey’s Hiroshima. It’s a really good read, it’s won several awards, and I would recommend it to anyone listening to this podcast. Then there’s a podcast series, which is really well done; it’s something I actually funded when I was at the Carnegie Corporation of New York, and I think they hit it out of the park. It’s called “A Most Terrible Weapon,” by War on the Rocks, which is an online national security journal. It goes in-depth into some of these topics that we’ve covered. I think it’s just a very good listen for folks who like podcasts.
Fin: “A Most Terrible Weapon.” Awesome. Excellent. Those are fantastic. We’ll put links to all those things on our website. We’ve talked about an awful lot in this conversation, but I wonder if you could throw out just one or two very specific research projects that you’d be excited about. Maybe someone who’s listening to this could pick up and work on.
Carl: I think one of the big questions we face is how will artificial intelligence be incorporated into nuclear weapon systems and into the decision-making systems around nuclear weapons. And that’s a topic that good work needs to be done on.
Family sabbaticals
Fin: Okay. And here’s a final question. I understand that you and your family took a family sabbatical, and I really want to know what that involved, and should more families consider doing it?
Carl: Yeah. So we just got back from a year of travel, which we’ve been planning for something like 15 years. Before my wife and I decided to have kids, we took a one-month trip to India. While we were there, we met a family that was traveling, and they had three daughters. They were just so poised and worldly. We looked at each other and said, when our kids get to be the right age, we’ve got to find a way to do this.
The timing was never right, so we postponed it and postponed it. We felt like we were getting to this window where if we didn’t do it now, it was never going to happen. So I actually left my job at Carnegie, which was my dream job, and we went on this year of travel. We visited nine different countries and lived in several of them for longer stretches.
Some of the highlights were living in Kenya, outside of Nairobi, living in Mexico City and Madrid, visiting Istanbul and Berlin, and visiting my friend in Egypt, and getting to see Egypt come alive at Ramadan. It was just a really magical year for us as a family. We came away with all these family stories, myths, and memories that are now part of our family story. It was also, for me, a real opportunity for growth in terms of stepping away and trying to think about how I want to spend this next chapter in my life. Mhmm.
I’ve never talked to anyone who’s taken a family sabbatical year and regretted it.

Fin: Wow.

Carl: I think there are lots of different ways to do it. We were traveling. We went to the Dominican Republic and attended this worldschooling support group, and we realized there are all these different ways to make it happen. Some people work full-time. Some people travel on savings.
Some people were able to make a career out of it. Right? It was really inspiring to see all these people who were taking chances and trying to do something that is a little different from what’s expected of us.
Fin: Yeah.
Carl: Loved it. It was a pretty great year.
Fin: Are there any, if someone is like, wow, this didn’t feel like an option before, but I guess it’s a thing that people are allowed to do. Any resources, websites, books, that kind of thing that you could point people towards?
Carl: Yeah. So there is a Facebook group about world schooling with something like 45,000 members.
Fin: Uh-huh.
Carl: That’s a good place to start if you’re on Facebook. There are a couple of books that have come out this past year. One was “How to Be a Family” by Dan Kois, where he describes his family living in four different places. I think it’s easier now than it has been.
Travel is less costly in many ways, not financially, but in terms of being able to find reliable Wi-Fi, to work from anywhere, or to educate your kids from anywhere.
Fin: Yep.
Carl: The barriers to entry are a lot lower than what they might have been.
If you want to make it happen, you can find a way to make it happen; I’m pretty confident of that. It might not be a year, but it doesn’t have to be a year. You can do it for a few months at a time and see how you like it. I think our kids lost some of the learning they would have gotten from a standard, consistent school year. But I think the benefits, the perspective, and the friends from around the world, many of whom we’re still in touch with, really outweigh that.
Fin: Yep. I love it. Well, I hope this conversation maybe causes a counterfactual family sabbatical.
Carl: I will say, too, it’s really great for people who are working on abstract issues where your day-to-day is something of a grind, where it’s hard to quantify and hard to measure progress. Being out in the real world is a reminder of all the different ways of knowing and being out there, and of the privilege it is to do meaningful work. Right? We lived outside of Nairobi and would drive through these tea fields and see the folks picking tea for their livelihood. They weren’t paid much at all. It’s just a kick in the ass to say: hey, I’ve got this big important project; if these people can spend all day picking tea, I can make some progress on it.
Fin: Yep. Love it. Well, on that wonderful note, Carl Robichaud, thank you very much.
Carl: Thank you. I really enjoyed this.
Outro
Fin: That was Carl Robichaud on reducing the risks of nuclear war. If you find this podcast valuable in some way, one of the most effective ways to help is just to write a review wherever you’re listening, Apple Podcasts, Spotify, wherever. You can also follow us on Twitter; we are just @hearthisidea. I’ll also mention that we still have a feedback form on our website: we read every submission, and you’ll receive a free book for filling it out. You can find that on our website, which is just hearthisidea.com. Okay. As always, a big thanks to our producer, Jasson, for editing these episodes, and thank you very much for listening.