When Henry Kissinger died in November last year, I was consumed by one big question: why did this living witness to Cold War-era U.S. foreign policy spend so much of his final years on artificial intelligence and the future of humankind, rather than reflecting on his own long diplomatic career?
To illustrate, “The Path to AI Arms Control: America and China Must Work Together to Avert Catastrophe” was the title of his last article, published in Foreign Affairs magazine just one month before his death. In an Economist interview that May, he emphasized that the fate of humanity depends on whether America and China can get along. He believed the rapid progress of AI, in particular, leaves them only five to ten years to find a way. He had elaborated this theme in his de facto final work, “The Age of AI: And Our Human Future” (2021), co-authored with former Google chairman Eric Schmidt.
Was he overstating the case, at a time when humanity is enjoying the many benefits and opportunities offered by AI, much as it did from historic innovations like the printing press, electricity and the internet? Not really. What struck me was his intellectual foresight and the policy suggestions that shaped the thinking of the U.S. administration and the AI industry, including the creation of a U.S.-China intergovernmental dialogue on AI, the 2021 Final Report of the National Security Commission on AI (NSCAI), the 2024 Vision for Competitiveness and calls for a global AI order.
Though he was a proponent of U.S. strategic interests in the AI era, he warned soberly of the potential dangers of AI, especially the most catastrophic aspect of uncontrolled AI: the integration of AI with the weapons-of-mass-destruction capabilities of the big powers. His fundamental question was whether machines with superhuman capabilities, commanding an unprecedented level of destructiveness, would threaten humanity’s status as master of the universe.
In this context, what we are witnessing in the ongoing wars between Russia and Ukraine and between Israel and Hamas is only a small prelude to a new kind of AI-enabled machine war, signaling a change in the future of warfare.
While the uses for AI in military contexts are seemingly endless, some of the more visible, established applications of AI today are in the areas of intelligence, surveillance and reconnaissance (ISR), cyber, autonomous systems and vehicles, command and control, disaster relief and logistics.
Particularly since the launch of ChatGPT, government officials, business leaders and technologists have all warned of coming large-scale risks from rapidly advancing AI systems — spanning AI-enhanced bioterrorism, AI-enabled nuclear command and control gone awry, runaway AI hacking and more.
Retired U.S. Army Gen. Mark Milley, Chairman of the Joint Chiefs of Staff until last year, went further by predicting two months ago, “10 to 15 years from now, my guess is maybe 25 percent to a third of the U.S. military will be robotic.” He envisioned these robotic forces being commanded and controlled by AI systems, suggesting a potential shift in the human role on the battlefield and security paradigm.
As the world grapples with the rapid advancement of technology, the debate surrounding AI-enabled warfare is now being paralleled by steady efforts to address governance deficits at national, regional and global levels.
Since last year, we have witnessed meaningful progress on military AI governance. Major leadership came from the Netherlands, South Korea and the United States, in addition to AI safety-related initiatives led by Britain and South Korea, as well as two U.N. resolutions on civilian AI, led by the U.S. and China respectively, both adopted by consensus a few months ago.
The first major breakthrough came when the Netherlands and South Korea co-hosted the first Responsible AI in the Military Domain (REAIM) Summit at The Hague in February 2023. The summit adopted the REAIM Call to Action, in which participants agreed, among other things, to continue the global dialogue on REAIM in a multi-stakeholder and inclusive manner.
Just three weeks ago, South Korea, along with the Netherlands and three other countries, successfully hosted the second REAIM Summit in Seoul, where they adopted the REAIM Blueprint for Action. Building on the Call to Action, this blueprint outlined key guiding principles for REAIM, emphasizing ethical, human-centric and international law-compliant applications, as well as appropriate human involvement throughout the life cycles of military AI. It particularly highlighted the synergy and complementarity between the REAIM Summit process and other related initiatives, such as the REAIM Global Commission, the U.N. Group of Governmental Experts on Lethal Autonomous Weapons Systems and the U.S.-led Political Declaration on Responsible Military Use of AI and Autonomy.
Notably, the independent track 2 REAIM Global Commission was created to facilitate global dialogue and to recommend a final international governance framework for REAIM by the end of next year. Its successful coordination with the REAIM Summit in Seoul earlier this month bodes well for this challenging multi-stakeholder dialogue.
This week, U.N. Secretary-General António Guterres convened the Summit of the Future, and one of its outcome documents, the Global Digital Compact, was adopted by consensus. The Compact aims to enhance international governance of artificial intelligence for the benefit of humanity by harnessing AI’s benefits and mitigating its risks, in full respect of international law and in consideration of other relevant frameworks.
It is no wonder that the Seoul REAIM Summit’s Blueprint for Action concludes with a commitment to establish “responsible AI for the future of humanity.” As co-chair of the REAIM Global Commission, I am happy to share that the Commission is fully dedicated to contributing to this noble endeavor.
Korea’s balanced leadership role in REAIM will not only give substance to its Global Pivotal State vision but also help prevent the “Oppenheimer moment” in AI that Kissinger feared from becoming a reality.
Link: Recent op-ed by GC REAIM Co-Chair Byung-se Yun in the Korea Times