How I Learned to Stop Worrying and (Sort Of) Love Nuclear Forecasting
James M. Acton holds the Jessica T. Mathews Chair and is co-director of the Nuclear Policy Program at the Carnegie Endowment for International Peace.
Last March, as Russia’s full-scale invasion of Ukraine was faltering and President Vladimir Putin was issuing nuclear threats, I tweeted my skepticism of efforts to estimate the likelihood of nuclear use.
This skepticism did not derive from any doubt about the value of forecasting in general. Perhaps because my background is in quantum theory, I’ve long felt comfortable dealing in probabilities. Moreover, I’ve organized large-scale forecasting events at the Carnegie International Nuclear Policy Conference that asked the assembled experts to estimate the likelihood of North Korean nuclear testing and the United States’ withdrawing from NATO.
Forecasting actual nuclear weapon use, however, seemed fundamentally different. Thankfully, there has only been one war in which nuclear weapons were used, and it has few lessons for understanding nuclear escalation in the contemporary world. Apart from anything else, prior to August 6, 1945, Japan didn’t even know what a nuclear weapon was—let alone that the United States possessed two of them—so it wasn’t exactly in a position to decide whether to yield to the nuclear threats that Washington didn’t make. There have been crises serious enough to have plausibly resulted in nuclear use—but only a very few. As a result, trying to forecast nuclear use seemed to me a bit like trying to forecast the results of contemporary presidential elections with no more data than the outcomes of the contests prior to the overhaul of the U.S. electoral system in 1804. Given this Knightian uncertainty, I advocated for focusing on identifying policy approaches that would reduce the likelihood of nuclear use (even as I shied away from estimating the magnitude of any reduction).
The last year of the Ukraine war has brought home to me that I was wrong—not so much about the difficulty of forecasting nuclear use, but about the value of the exercise.
Throughout the war, as U.S. President Joe Biden himself has acknowledged, U.S. policy toward Ukraine has been shaped (sensibly, in my opinion) by concern about nuclear escalation. Underlying this concern are assumptions, probably largely implicit, about the likelihood and consequences of nuclear use. Given, therefore, that a form of forecasting has effectively occurred within the U.S. government—indeed, has to occur—it is better that the process be formalized so that assumptions can be more rigorously stated, tested and debated, and best practices brought to bear.
Any course of action toward Ukraine carries some risk of nuclear escalation. I’m someone who worries that Russia may use nuclear weapons in this war, particularly if Ukraine attempts to take back Crimea (which Putin almost certainly values much more than the four newly annexed territories). The United States and its allies could virtually eliminate this particular escalation risk by entirely withdrawing support for Ukraine. However, not only would such a policy be entirely immoral, but it would exacerbate a different nuclear risk—an emboldened Putin might attack a NATO state and precipitate a direct U.S.-Russian conflict with the potential to go nuclear. To complicate matters further, the consequences of nuclear use are unlikely to be the same in each scenario: Armageddon would be more likely to result from Russian nuclear use in a future war against the United States and NATO than in the current war against Ukraine.
In navigating these risks, the absolute likelihood of nuclear use—and not just changes in likelihood—really matters. If you believe that there is, say, a 0.01% chance that Russia will use nuclear weapons if Ukraine tries to take back Crimea, then it’s quite rational to support providing Ukraine with the equipment and materiel to launch an operation against the peninsula (unless, that is, you’re very pessimistic about managing nuclear escalation and assess that first use is almost certain to result in an all-out nuclear war). Conversely, if you believe that the probability of Russian nuclear use in this scenario is 10%, then, equally rationally, you’re much less likely to support the provision of such aid.
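The asymmetry described above can be made concrete with a toy expected-cost comparison. Every number here is an arbitrary illustrative placeholder, not an estimate from the essay; the point is only that the decision flips as the absolute probability of nuclear use moves from 0.01% to 10%.

```python
# Toy expected-cost comparison: why the absolute probability of nuclear
# use matters for the decision to support a Crimea operation.
# All costs and probabilities are arbitrary illustrative units.

def expected_costs(p_nuclear_use: float, cost_if_use: float,
                   cost_of_withholding_aid: float) -> dict:
    """Expected cost of each policy branch under the given assumptions."""
    return {
        "support": p_nuclear_use * cost_if_use,   # risk-weighted cost of escalation
        "withhold": cost_of_withholding_aid,      # certain cost of denying aid
    }

# Same stakes, two different beliefs about the probability of nuclear use.
low  = expected_costs(p_nuclear_use=0.0001, cost_if_use=1_000,
                      cost_of_withholding_aid=1)
high = expected_costs(p_nuclear_use=0.10, cost_if_use=1_000,
                      cost_of_withholding_aid=1)

# At p = 0.01%, supporting the operation carries the lower expected cost;
# at p = 10%, withholding aid does.
print(low, high)
```

This deliberately ignores the complication the essay flags in parentheses: if first use is assumed to cascade into all-out nuclear war, the cost term grows so large that even a tiny probability dominates the comparison.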
From my experience, most government officials and nuclear policy analysts do not assign probabilities in this kind of explicit way. However, there is evidence that they are guided by mental models of escalation risks that rely on assumptions about the probability and consequences of nuclear use.
In their 1989 book about the Cuban Missile Crisis, On the Brink, James Blight and David Welch describe the intra-governmental debate among the “hawks” and the “doves” about whether to conduct airstrikes on the Soviet missiles in Cuba. The hawks treated the probability of escalation after such airstrikes as if it were zero. For example, Douglas Dillon, the secretary of the treasury, later stated that “we weren’t nervous.” By contrast, the doves treated the consequences of a nuclear war as effectively infinite and focused on avoiding this outcome. Secretary of Defense Robert McNamara rejected his normal “probability logic” in favor of “possibility logic,” arguing that “[i]f…only a few—maybe even only one bomb—gets through to destroy an American city…you…will have to shoulder the responsibility for the worst catastrophe in the history of this country. So you won’t do it.”
I suspect that a similar good-faith debate has played out over the last year among senior officials in the Biden administration. Some of them probably treat the probability of nuclear war as vanishingly small and urge greater support for Ukraine. Others likely focus on the consequences of a nuclear war and advocate caution. Moreover, these policymakers likely scrutinize and debate one another’s assumptions—but I doubt they try to apply a quantitative forecasting methodology.
Perhaps they should. The process of forecasting—identifying sequences of events that could lead to nuclear use and assigning probabilities—would provide a systematic way to allow policymakers to identify each other’s assumptions and hence understand why they disagree. Such understanding, in turn, would allow them, for example, to task the intelligence community more effectively, ultimately improving the quality of the advice they provide to a president.
Moreover, I’ve also come round a bit on the value of the outputs of such forecasting. Predicting the likelihood of nuclear use is a difficult and fraught exercise, to be sure, and I quail when I see probabilities quoted to two or three significant figures. Nonetheless, a recent Metaculus exercise, in which I was asked to assign probabilities to various outcomes in the Ukraine war, helped convince me that nuclear-use probabilities can be meaningful. (Full disclosure: I received an honorarium for participating.)
In my view, if the war against Ukraine turns nuclear, it will very likely (more than 90%) be because Ukraine launches a campaign to take back Crimea. As a result, the following two questions offer an entry point into forecasting:
- What is the probability that Ukraine will try to take back Crimea (with and without a significant increase in foreign military supplies)?
- If Ukraine does try to recapture Crimea, what is the probability of Russian nuclear use?
To factor in the consequences of nuclear use, a third (and more difficult-to-answer) question could be helpful:
- In the event of Russian nuclear use, what is the probability that the war will escalate further, leading to nuclear attacks against urban targets (including the legitimate military targets, such as ministries of defense, that are located in cities)?
Forecasting experts may quail at the shortness, simplicity, and vagueness of this framework—and fair enough. But its shortness, simplicity, and vagueness are deliberate. Senior, and exceptionally busy, government officials have neither the time nor inclination to study a probability tree with 10 or 20 steps and detailed definitions for every term. But I could imagine them privately and profitably debating this set of questions (well, the first two at least) and how the probabilities might change depending on U.S. policy. (This is not to suggest that forecasting can somehow allow the optimal policy to be calculated—not least because a multiplicity of factors need to go into decision-making—just that it is one useful tool among many.)
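The first two questions above chain naturally into a single conditional estimate: the probability of first use is roughly the probability of a Crimea campaign times the probability of nuclear use given that campaign. A minimal sketch follows; the numbers are placeholders for a reader's own judgments, not the essay's estimates, and `p_other_paths` is a hypothetical residual term for escalation routes outside the Crimea scenario.

```python
# Sketch of the two-question probability chain described above.
# All probabilities are illustrative placeholders, not actual forecasts.

def p_first_use(p_crimea_attempt: float,
                p_use_given_attempt: float,
                p_other_paths: float = 0.0) -> float:
    """P(first use) ≈ P(Crimea attempt) * P(use | attempt) + residual paths."""
    return p_crimea_attempt * p_use_given_attempt + p_other_paths

# Compare two policy branches by varying the first question's input:
# a significant increase in foreign military supplies plausibly raises
# the probability that Ukraine attempts to retake Crimea.
with_more_aid = p_first_use(p_crimea_attempt=0.6, p_use_given_attempt=0.05)
without_more_aid = p_first_use(p_crimea_attempt=0.2, p_use_given_attempt=0.05)

print(f"with increased aid:    {with_more_aid:.3f}")
print(f"without increased aid: {without_more_aid:.3f}")
```

The third question would extend the chain one more step, multiplying by the probability of further escalation to urban targets given first use; the structure stays the same even as the inputs become harder to estimate.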
Of course, experts and analysts can and should develop more complex models that contain precise definitions and break down escalation sequences into many more decision points than I have done. To aid in this endeavor, I offer three suggestions to help ensure that such work is both useful to policymakers and responsible.
First, recognize what decisionmakers care about. Most decisionmakers, I suspect, doubt their ability to control escalation within a nuclear war and seek to avoid first use. Understanding whether their skepticism is well-placed is an important long-term research question. In the short term, however, the officials responsible for managing U.S. policy toward Ukraine are likely to be most receptive to tools that help them assess the probability of first use (conditional upon various courses of action, such as increasing aid to Ukraine) rather than, say, the likelihood of deaths due to a nuclear explosion in London.
Second, be honest about uncertainty. A lack of relevant empirical data makes forecasting nuclear use a particular challenge. I strongly doubt that Putin knows whether he’d order the use of nuclear weapons if Ukraine tries to recapture Crimea, so we should be modest about our ability to estimate the likelihood of this outcome. It’s important to convey this uncertainty—ideally in the form of quantitative estimates—alongside forecasts.
Third, focus on forecasting nuclear use in specific scenarios. Part of the reason why I have come round to such forecasting in the context of the war against Ukraine is because (for now at least) the main escalation risk is clear: a Ukrainian attempt to retake Crimea. This danger was far from obvious a priori, however. Before the war, I judged the main risk to be direct NATO intervention; indeed, that surely was the war’s main risk in the first month or two. The difficulty of identifying the most likely escalation pathways in advance of a conflict—and the potential for the dangers to shift over the course of a conflict—leaves me doubtful about the value of trying to estimate, say, the probability of a NATO-Russian nuclear exchange at some time over the next 50 years.
With U.S.-Chinese and U.S.-Russian relations becoming increasingly conflictual, existing cooperative security arrangements, such as hotlines and arms control agreements, are likely to become less effective, while new agreements are unlikely to be possible to negotiate. In this environment, avoiding a nuclear apocalypse will increasingly depend on unilateral approaches—prudence by key decisionmakers in Beijing, Moscow, and Washington in managing escalation risks in peacetime, crises, and perhaps even conventional wars. Over the last year, I have come to believe that forecasting has the potential to be genuinely useful to them. For it to live up to this potential, however, not only will decisionmakers and their advisors need to become more open to adopting new tools, but the forecasting community will have to adapt its approach to meet the realities of the policy-making process.