
Intersections between nuclear risk and AI

Flourishing Futures Nuclear Risk Tournament

Several questions in this tournament are related to the intersection of (a) nuclear risk and (b) the development, deployment, and governance of AI technologies. This discussion post lists those questions and then provides some quick context and links to further reading.

(Note that this post was written quickly and is far from comprehensive; there is much about the intersection of nuclear risk and AI that is not covered by this tournament's questions or by this post.)

Relevant questions

Some context

The final report of the National Security Commission on Artificial Intelligence states:

"While it is neither feasible nor currently in the interests of the United States to pursue a global prohibition of AI-enabled and autonomous weapon systems, the global, unchecked use of such systems could increase risks of unintended conflict escalation and crisis instability. To reduce the risks, the United States should (1) clearly and publicly affirm existing U.S. policy that only human beings can authorize employment of nuclear weapons and seek similar commitments from Russia and China; (2) establish venues to discuss AI’s impact on crisis stability with competitors; and (3) develop international standards of practice for the development, testing, and use of AI-enabled and autonomous weapon systems.

[...] The United States should make a clear, public statement that decisions to authorize nuclear weapons employment must only be made by humans, not by an AI-enabled or autonomous system, and should include such an affirmation in the DoD’s next Nuclear Posture Review. This would cement and highlight existing U.S. policy, which states that '[t]he decision to employ nuclear weapons requires the explicit authorization of the President of the United States.' It would also demonstrate a practical U.S. commitment to employing AI and autonomous functions in a responsible manner, limiting irresponsible capabilities, and preventing AI systems from escalating conflicts in dangerous ways. It could also have a stabilizing effect, as it would reduce competitors’ fears of an AI-enabled, bolt-from-the-blue strike from the United States and could incentivize other countries to make equivalent pledges.

[...] The United States should also actively press Russia and China, as well as other states that possess nuclear weapons, to issue similar statements. Although joint political commitments that only humans will authorize employment of nuclear weapons would not be verifiable, they could still be stabilizing, responding to a classic prisoner’s dilemma: as long as countries have confidence that others are not building risky command and control structures that have the potential to inadvertently trigger massive nuclear escalation, they would have less incentive to develop such systems themselves. While this norm is widely accepted in the United States, it is unclear if Russia and China share the same strategic concerns. Public reports indicate that Russia previously installed a “dead hand” system to automate nuclear launch authorization, and China’s representatives in Track II dialogues with the United States have been hesitant to state that China would make an equivalent commitment. If neither Russia nor China is willing to agree to such a proposal, the United States should mount a strong international pressure campaign to condemn this decision and highlight how Russia and China refuse to commit to responsible military uses of AI."

(The Nuclear Posture Review is "a document that lays out an administration’s approach to US nuclear weapons policy. It includes thinking on the overarching question of what role nuclear weapons should play in US security, as well as setting out corresponding strategy, doctrine, and force structure.")

Along similar lines, Future Proof, a report aimed primarily at UK policymakers, makes the following recommendation:

"Ensure that the UK Government does not incorporate AI systems into NC3 (nuclear command, control, communications), and that the UK leads on establishing this norm internationally" (p27).

The footnotes to the above-quoted sections of the National Security Commission on Artificial Intelligence's final report are also relevant. For example, one says:

"There could be other reasons countries may delegate nuclear weapons launch authority to autonomous systems, particularly if leadership trusts machines to execute launch orders more than humans. A political agreement is unlikely to be able to address these concerns, although offering it would highlight how other nations are engaging in irresponsible and dangerous behavior."

Some further reading

Categories:
Artificial Intelligence
Computing and Math
Nuclear Technology & Risks