By Sofia Guerra, Fall 2024 Marcellus Policy Fellow

The rapid integration of artificial intelligence (AI) into nuclear command, control, and communications (NC3) systems presents both opportunities and risks. While AI can enhance decision-making and operational efficiency, it also introduces vulnerabilities such as automation bias, heightened miscalculation risk, and compressed decision timelines. In the U.S.-China context, marked by mutual mistrust and strategic competition, these risks are particularly acute and raise the potential for unintended escalation.
This paper assesses the limitations of current safeguards and argues that codifying “human-in-the-loop” (HITL) oversight into U.S. law is essential but insufficient on its own. To address the broader risks, it proposes a multifaceted strategy: passing legislation to prohibit fully autonomous nuclear weapons systems, investing in AI safety research, and pursuing confidence-building measures with China. Complementary multilateral initiatives, such as joint missile-launch notification systems and agreements on AI governance, are also critical to stabilizing the evolving security environment.
By addressing these challenges through a comprehensive governance framework, the United States can lead efforts to mitigate the risks of AI-driven escalation while fostering stability in U.S.-China relations. This approach balances innovation with security, ensuring that technological advances in NC3 systems strengthen rather than undermine global stability.