Jesse Kommandeur joins HCSS as Strategic Analyst

HCSS is pleased to announce that Jesse Kommandeur has joined us as Strategic Analyst!

At HCSS he will leverage his multidisciplinary expertise in Business Administration and Data Science to bridge the gap between quantitative and qualitative research. His role supports HCSS’s research agenda, focusing on the interplay between technology, society, and policy.

Jesse holds two BSc degrees, in Business Administration and in Information Sciences, alongside an MSc in Data Science (cum laude) from the University of Amsterdam. For his bachelor's thesis, he explored the socio-technical and socio-economic dynamics of drug trafficking in the Port of Rotterdam, developing multi-agent system models to analyze current challenges and propose future solutions. He has also carried out end-to-end, data-driven research projects for stakeholders such as the Port of Rotterdam and the Police Academy.

Before joining HCSS, Jesse worked at the University of Amsterdam for over three years, where he lectured to 300+ students at the Institute of Informatics and supervised 30+ research projects, many of which focused on trends in digital societies and the threats posed by emerging technologies.

Pieter-Jan Vandoren joins HCSS as Strategic Analyst

HCSS is pleased to announce that Pieter-Jan Vandoren has joined us as Strategic Analyst!

Prior to joining HCSS, Pieter-Jan was an intern at the Centre for Security, Diplomacy and Strategy in Brussels, working on the concept of European strategic autonomy in times of great power competition between the US and China.

Pieter-Jan holds a BSc degree in Business Economics from Hasselt University, Belgium, and obtained MSc degrees in International Business and in Finance (both cum laude) from the Catholic University of Leuven, Belgium. After graduating, Pieter-Jan worked for two years as a consultant at EY Brussels before following his interest in security studies to the Brussels School of Governance, where he obtained an MA in Global Security and Strategy (summa cum laude).

His policy brief on European Critical Infrastructure won the 2023 TEPSA Student Contest.

De Strateeg | Frederik Mertens: ‘We still rely entirely on the American nuclear guarantees’

Russia is still the world's largest nuclear power, with nearly six thousand nuclear warheads, followed by the United States with around 5,200. China follows at a considerable distance with 410 warheads. That is, incidentally, still enough to destroy the entire world, says Han Bouwmeester, brigadier general and professor at the Netherlands Defence Academy, in BNR's De Strateeg.

At the height of the Cold War, the United States and Russia together had more than 50,000 active nuclear warheads at their disposal, notes Frederik Mertens, strategic analyst at HCSS. In that sense, we are actually already ‘well on the way in the right direction’. Still, we seem to be stuck with the current solid power blocs for the coming decades, and that is not necessarily good news. ‘Nobody is looking for an Armageddon, certainly not. But there is still enough left to do endless damage.’

Two years ago, United Nations Secretary-General Antonio Guterres already warned that humanity today is only ‘one misunderstanding, one miscalculation’ away from total nuclear annihilation. And now that Russian president Vladimir Putin has once again threatened nuclear war, the danger appears to be serious. Power politics seems to be back with a vengeance, says brigadier general Han Bouwmeester.

A neoliberal world order

With the end of the Cold War, now more than thirty years ago, room seemed to open up for a neoliberal world order in which large organisations such as the UN would be able to nip looming geopolitical conflicts in the bud, says Bouwmeester. Moreover, economic ties between countries kept growing. ‘So we assumed that countries no longer had any need to get into conflicts with one another, because those also hit you very hard economically.’

That is a recurring phenomenon, says Bouwmeester. ‘We believe in it very strongly for a while, but thirty years later it turns out that, here and there around the world, all sorts of petty power-holders have risen who no longer believe in it. Then you end up with much more realpolitik alongside it. Power politics starts to play a role, and countries then look at whether they can get other countries to join them.’

Relying on the American guarantee

A growing nuclear danger comes into play here, says Mertens. Because besides Russia, China and the US, there are also the Global South and Europe, notably the United Kingdom and France. ‘I think we can be glad that the UK and France are located in Europe. We still rely first and foremost on the American guarantee. That guarantee is also crucial for the US itself, and most specialists in the US realise that all too well.’

Growing doubt is still the greatest threat to global nuclear stability, says Mertens. ‘In South Korea, the debate flared up over whether it should arm itself with nuclear weapons. And even Japan, until now the only victim of nuclear weapons, is beginning to have this discussion. The US knows very well that Japan could have a nuclear weapon tomorrow if it wanted to, and if the US no longer provides a guarantee. That is not very complex for an advanced economy.’


How great is the nuclear danger really? What can Europe do to strengthen its position as a power bloc? And are we in the Netherlands aware of the great nuclear danger?

You will hear about this in this episode of De Strateeg, featuring:

  • Frederik Mertens, strategic analyst at HCSS
  • Han Bouwmeester, brigadier general and professor at the Netherlands Defence Academy

Source: BNR Nieuwsradio, 10 March 2024

About this podcast

De Strateeg is a BNR podcast produced in collaboration with The Hague Centre for Strategic Studies (HCSS). Subscribe via bnr.nl/destrateeg so you never miss an episode.

Host: Paul van Liempt

Editor: Michaël Roele

De Strateeg: Are we one miscalculation away from total nuclear annihilation?

Two years ago, United Nations Secretary-General Antonio Guterres already issued this warning: ‘Humanity today is just one misunderstanding, one miscalculation away from total nuclear annihilation.’ In 2024, that statement seems more plausible than ever. Last week, Russian president Putin threatened nuclear war in his annual address. But Europe, too, is on alert, with hundreds of American, British and French nuclear weapons aimed at Russia.

How great is the nuclear danger really? What can Europe do to strengthen its position as a power bloc? And are we in the Netherlands aware of the great nuclear danger?

You will hear about this in this episode of De Strateeg, featuring:

  • Frederik Mertens, strategic analyst at HCSS
  • Han Bouwmeester, brigadier general and professor at the Netherlands Defence Academy

Source: BNR Nieuwsradio, 10 March 2024

About this podcast

De Strateeg is a BNR podcast produced in collaboration with The Hague Centre for Strategic Studies (HCSS). Subscribe via bnr.nl/destrateeg so you never miss an episode.

Host: Paul van Liempt

Editor: Michaël Roele

Interview Rob de Wijk | Food production as a weapon in a changing world order

RESPECTvee | In this episode, Helma Lodders, chair of Vee&Logistiek Nederland, talks with Rob de Wijk, professor of International Relations at Leiden University and founder of The Hague Centre for Strategic Studies. De Wijk has an outspoken view on deploying Dutch food production as a geopolitical weapon, on the export of live livestock to third countries, and on the regulatory burden in the Netherlands and Europe.

Source: RESPECTvee, 7 March 2024

Less regulatory zeal

“European and Dutch regulation needs to become more flexible. You will not save the world, in terms of food supply, by focusing on a square millimetre of land next to a forest. Sector organisations are taking a growing interest in the broader discussion, and that is a very good thing. Regulation does not solve the problems. Towards politicians, too, you have to demonstrate the added value of your sector in the global food system. Come with a broader story that highlights the consequences, in order to arrive at a different debate.”

Export is a necessity

“Transporting high-quality livestock around the world is an absolute necessity to prevent famine. Ethically, you have to treat these animals well; there is no discussion about that.”

Food as a geopolitical weapon

“China has an impact on the world. We are largely dependent on China for raw materials, but the country itself is not capable of becoming fully self-sufficient. Only 9% of its land is suitable for agriculture, and that acreage shrinks every year. That is why it depends on other countries for food. Food safety standards are higher in the European Union. Europe and the Netherlands can deploy this food supply and food safety strategically, as a weapon on geopolitical issues and in trade agreements.”

Column Rob de Wijk: Can NATO still defend us?

The largest exercise since the Cold War has been under way since the start of this year: Steadfast Defender. After decades, the defence of NATO territory is once again central in Europe. How effectively the alliance can defend itself is the question. It lacks everything, from weapons to ammunition. The experience of moving, and fighting with, large naval, land and air force units, supplied in part from the US, has largely disappeared.

During the Cold War, the entire infrastructure was prepared for this. Bridges were built for heavy transport, the railways had enough wagons, and it was clear where those units would have to fight. Now, new arrangements have to be made with private carriers. Where and how units are deployed depends on defence plans that are partly still being developed.

During the Cold War, the region along the eastern border was divided into ‘sectors’ assigned to the countries that had to organise the defence there. The situation is different now. The Baltic states, Putin's most important target, lie isolated from the rest of Europe. By land they can be reached via the narrow Suwalki corridor between the Russian exclave of Kaliningrad and the Russian vassal state of Belarus. Now that Finland is a member of the alliance, and Sweden soon will be, reinforcement of the Baltic states can also take place by sea via these countries.

A major turnaround

Near the Arctic Circle, account is taken of, among other things, the Russian navy, which can threaten the supply routes from the US from the Kola Peninsula. The south of Europe, such as Romania, is also being reinforced.

The challenges for the top commanders are enormous, I heard NATO Supreme Allied Commander Cavoli say in Poland this week. According to him, the most important signal sent by the exercise is political in nature: Putin must not get it into his head to attack one of the NATO countries.

Precisely for that reason, the European Union presented its strategy for the defence industry this week. The big turnaround is that the EU is now getting involved with the equipment needed for the defence of Europe. As much of it as possible must be manufactured here. Especially after presidential candidate Trump's mega-victory on Super Tuesday this week, it is logical for Europe to think more of itself. But this comes awfully late.

Bedbugs in Paris

Moreover, this has long ceased to be only about large armies facing each other. Hybrid warfare is already in full swing. You only have to look at X to see that the amount of fake news and propaganda is unprecedented and does not fail to have its effect on Putin's fellow travellers. The aim is to create division in Europe by sowing unrest. Even bedbugs are being deployed for this: it now turns out that the hysteria about them in Paris was stoked by Russia.

In addition, account is being taken of acts of sabotage, for example against our offshore wind turbines and the communication cables that run, for instance, across the seabed from Europe to America. The North Sea, and thus the Netherlands, is then the prime target. Steadfast Defender makes clear that we have reached the end of an era. The Cold War lasted roughly thirty years. So did the post-Cold War era.

What follows now is a new period of confrontation. Nobody is happy about it.

Rob de Wijk, Trouw, 7 March 2024

Rob de Wijk is professor of international relations and security at Leiden University and founder of The Hague Centre for Strategic Studies (HCSS). He writes a weekly column on international affairs. Read his columns here.

Watch back | Does liberal democracy have a future?

Does liberal democracy have a future? Since the war in Ukraine, the cards on the world stage seem to have been reshuffled. Russia disregards international law, China pushes the boundaries in the South China Sea, and Turkish president Erdogan claims an ever more prominent role. The message is clear: Western dominance is no longer self-evident. Meanwhile, the US is increasingly turned inward, and Europe keeps failing to present a united front.

What does this mean for the future of liberal democracy in the world? How do the major players view the current international agreements on human rights and trade? And what role can Europe play in the new world order?

Historian Prof. Beatrice de Graaf (Utrecht University) and HCSS strategic analyst Laura Jasper discussed these topics on 6 March in Utrecht, before a full house, as part of the Studium Generale series “Democratie in een wankele wereldorde” (Democracy in a shaky world order), organised in collaboration with the UU faculties of Social Sciences, Science, and Humanities.

More info.

Strategic Questions | De Strateeg: Will we sustain higher defence investments?

Europe has been investing more in defence since the war in Ukraine. In Germany, this ‘Zeitenwende’ is the most sensitive issue. Can we sustain these investments? And what does this do to Europe's position?

You will hear about this in this episode of De Strateeg, featuring:

– Ton van Loon, former lieutenant general, affiliated with a German think tank and with HCSS.

– René Cuperus, Germany expert at the Clingendael Institute

Source: BNR Nieuwsradio, 3 March 2024

About this podcast

De Strateeg is a BNR podcast produced in collaboration with The Hague Centre for Strategic Studies (HCSS). Subscribe via bnr.nl/destrateeg so you never miss an episode.

Host: Paul van Liempt

Editor: Michaël Roele

Book launch | The Oxford Handbook of Space Security

Space security is a complex assemblage of societal risks and benefits that result from space-based capabilities and is currently in a period of transformation as innovative processes are rapidly changing the underlying assumptions about stability in the space domain. New space-based technologies are emerging at an accelerating rate, and both established and emerging states are actively and openly pursuing weapons to negate other states’ space capabilities. Many states have set up dedicated military space units in order to preemptively counter such threats. In addition, a number of major private companies with a transnational presence are also investing heavily in extraterrestrially-based technology.

The Oxford Handbook of Space Security focuses on the interaction between space technology and international and national security processes from an international relations (IR) theory perspective. Saadia M. Pekkanen and P.J. Blount have gathered a group of key scholars who bring a range of analytical and theoretical IR perspectives to assessing space security. The volume theorizes the development and governance of space security and analyzes the specific pressure points currently challenging that regime. Further, it builds an analytically eclectic understanding of space security, infused with the theory and practice of IR, and advances analysis of key states and regions as well as specific capabilities.

Bringing together scholarship from a group of leading experts, this volume explains how these contemporary changes will affect future security in, from, and through space.

HCSS director of research Tim Sweijs and strategic analyst Davis Ellison contributed the chapter “The Next Frontier: Strategic Theory for the Space Domain”:

At the dawn of the Second Space Age, the evolution of space as a warfighting domain is keeping brisk pace alongside its increasing economic and societal importance. A clear understanding of strategic dynamics in space is a necessary prerequisite to enhancing the stability and peaceful uses of space. This chapter proposes concepts that are central to developing a strategic theory for space. It first provides descriptions of both why space is important and why it is different and links these considerations to a rationale for new strategic theorizing. The chapter engages with previous thinking from other military domains, namely land, maritime, and airpower, to probe the literature on strategy and ascertain those most fundamental elements applicable to the space domain. It then offers three foundational concepts for a strategic theory for space: power, access, and command. Finally, it considers theory both in relation to orbital uses for space as well as in the emerging commercial and military uses of cislunar and deep space.

Applying lessons from international relations theory and practice and drawing from a range of social science subfields, the Handbook is a definitive work for scholars who study the topic of space security.

Featuring more than 40 chapters from renowned experts such as Stephen Buono, Aaron Bateman, Setsuko Aoki, Carl Graefe, Raymond Duvall, Wendy Whitman-Cobb, Pavel Luzin, Tai Ming Cheung, Yasuhito Fukushima, Xiaodan Wu, Forrest E. Morgan, Jessica West, Koji Tachibana, Scott Pace, Kevin Pollpeter, Florian Vidal, Rajeswari Pillai Rajagopalan, Šumit Ganguly, Xavier Pasco, Mark Hilborne, Tomas Hrozensky, Mathieu Bataille, Deganit Paikowsky, Samuel Oyewole, Olavo de O. Bittencourt Neto, Jairo Becerra, Prashanth Parameswaran, Su-Mi Lee, Hanbeom Jeong, Matthew Stubbs, Desislava Gancheva, Laura Grego, Larry F. Martinez, Michael Raska, Malcolm Davis, Brad Townsend, Guoyu Wang, Alanna Krowlikowski, Martin Elvis, Mariel Borowitz, Peter L. Hays, James J. Wirtz, Mohamed Amara, Sagee Geetha Sethu, Paul B. Larsen, John J. Klein, Nickolas J. Boensch, Nikola Schmidt, Natália Archinard, James Clay Moltz, Zhou Bo and Wang Guoyu, the volume:

  • provides a comprehensive approach to understanding space security;
  • pushes forward the theory of space security from a variety of international relations perspectives;
  • extends the analysis beyond the standard space actors to give a more comprehensive understanding of how a broad cross section of states understand the impact of space on their security;
  • draws on the expertise of a set of scholars who bring a range of analytical and theoretical perspectives to bear on the empirical changes affecting space security.

The Oxford Handbook of Space Security is now available from Oxford University Press, Amazon, Barnes & Noble, WH Smith and other major book retailers.

Paul van Hooft | AI and Nuclear Weapons: Keeping the human in the loop, not only for the decision, but also before the decision

Artificial intelligence has multiple implications for the conduct of warfare, from greater autonomy to increased speed; it also has implications for strategic stability as it pertains to the nuclear balance and to decision-making by nuclear-armed states during crises. AI facilitates intelligence and reconnaissance, as well as accuracy, which changes the first-strike calculus, and during a crisis it may remove the human from the loop more than is understood and foreseen. While it is difficult for nuclear-armed states to forgo the use of AI, they need to share common practices and approaches to prevent qualitative arms racing and inadvertent nuclear escalation.

Source: AI and Nuclear Weapons, Atlantisch Perspectief, February 2024

Fear of AI: no human in the loop

The most publicly discussed fear about the impact of AI is that it will take the decision to use nuclear weapons out of human hands. Depictions of AI in popular culture, whether sci-fi movies or Netflix series, conjure images of cold, calculating consciousnesses that decide to do away with the inferior human species that preceded them. These map perfectly onto the depictions in popular culture of nuclear weapons rapidly and inevitably bringing about the end of the world. Given the ability of nuclear weapons to destroy entire cities in seconds, such fears are not unreasonable, even if difficult to grasp.

The need to keep ‘the human in the loop’ is decidedly uncontroversial, with the nuclear-armed powers largely and openly agreeing not to relinquish control over the final decision to launch nuclear weapons. That reflex among decision-makers of nuclear-armed states should not be surprising. The decision to launch nuclear weapons is already highly centralized in human hands, with most if not all nuclear-armed powers essentially having leaders who act as ‘nuclear monarchs’ and can single-handedly decide to launch. Fearing unforeseen escalation, leaders of nuclear-weapons states have been reluctant to delegate this responsibility to military officers, except during periods of high uncertainty. This reluctance also reflects a broader military hesitance to allow initiative outside the tactical and operational levels of war, if even that, during conventional conflict. Leaders prefer to keep their finger on the proverbial button (though it is usually a key). However, the real danger of AI's role in nuclear strategy is not the automation of the final, catastrophic decision; it is the less obvious, half-hidden integration of AI into the processes that assist with that final decision.

The real danger of AI’s role in nuclear strategy is not the automation of the final, catastrophic decision; it is the half-hidden integration of AI into processes that assist with the final decision.

Reliance on AI for decision-making

AI is too often mystified. It can be conceived of as computerized systems that perform tasks considered to require human intelligence, including learning, solving problems, and achieving objectives under varying conditions, with varying levels of autonomy and absence of human oversight.[1] Such systems are faster and more reliable than humans at processing massive amounts of data, which gives them advantages not only in commerce, such as identifying patterns in consumer behavior, but also in the military enterprise, where speed and information processing can spell the difference between success and failure, and perhaps life and death.

There are four ways, as James Johnson argues, that AI could affect nuclear deterrence and decision-making: (1) command and control; (2) missile delivery systems; (3) conventional counterforce operations; and (4) early warning and Intelligence, Surveillance, and Reconnaissance (ISR). Regarding ISR, machine learning, and specifically deep learning, could collect, mine, and analyze large volumes of intelligence, whether visual, radar, sonar, or other data, to detect informational patterns and locate specific nuclear delivery systems, be they missile silos, aircraft, mobile launchers, or perhaps even submarines. It could potentially identify patterns in the behavior of nuclear-armed adversaries. Moreover, it could allow the sensor systems themselves, for example long-range UAVs, to collect information for longer periods of time. AI-assisted cyber tools could also be used for information-gathering through espionage. While command and control is not the first candidate for the direct use of AI, AI-assisted processes could in turn be used to protect the cyber security of nuclear infrastructure.

AI could increase the precision of nuclear-armed or conventionally armed missiles, whether individual multiple independently targetable reentry vehicles (MIRVs) or hypersonic weapons, provide protection against electronic-warfare jamming and cyber-attacks, and give platforms greater endurance over longer periods of time. Finally, AI could improve conventional counterforce operations, whether the ability to penetrate defended airspace with manned or unmanned aircraft or, conversely, the detection, tracking, targeting, and interception performed by traditional air and missile defenses. Again, AI could improve defense not only against kinetic attacks but also against cyberattacks.[2] This brief overview is deceptively modest about the impact AI could have on international security as it relates to nuclear weapons; it suggests merely greater efficiency and effectiveness of existing technologies and procedures. To understand that impact, we need to understand the logic of nuclear deterrence and strategic stability more generally.

Strategic stability

The effect of nuclear weapons on international security has been varied; though claims have been made of a ‘nuclear revolution’ that would dampen the risks of great power conflict, the actual effect has not been nearly as clear-cut. That is specifically a consequence of the first-strike and second-strike logic of nuclear weapons. If both sides in a nuclear-armed rivalry believed nuclear retaliation was unavoidable whenever either of them initiated aggression, both would be dissuaded from doing so. However, this vulnerability is hard for leaders of nuclear-armed states to accept. Consequently, they pursue ‘damage limitation’ policies directed at their adversary's nuclear forces, whether by building capabilities to destroy them first, by improving defenses against them, or by disrupting or destroying the adversary's decision-making process for launch. In turn, this erodes the adversary's confidence in the secure second-strike capability with which it would retaliate.[3] This dynamic undermines both facets of strategic stability: first-strike stability and crisis stability.

First-strike stability refers to a situation in which neither of two nuclear-armed adversaries believes that one of them has a first-strike advantage allowing it to destroy the other's arsenal before the latter can launch. It is a more structural appraisal of the balance of capabilities between them. The perception that the adversary holds an advantage and that one's own side is vulnerable can drive a state to invest in more, or qualitatively different, warheads or delivery systems, or to shift its nuclear posture to launch-on-warning. Such a response makes the initiation of a nuclear exchange more likely. During the Cold War, fears of a declining secure second strike drove both the U.S. and Soviet superpowers to develop arms to find and maintain their own advantage and to prevent the other from gaining one. Because of these fears, the number of nuclear weapons grew to enormous heights, and qualitative investments in other technologies, from precision guidance to missile defenses to quiet submarines, swelled as well.

Crisis stability, as paradoxical as the term may seem, denotes a situation in which a nuclear-armed state does not escalate a confrontation with an adversary to the nuclear level. A state could escalate because it believes its adversary has already begun a nuclear exchange, or because it believes its adversary is attempting to destroy its nuclear arsenal with a conventional or nuclear first strike. During the 1962 Cuban Missile Crisis, a Soviet submarine armed with a nuclear torpedo, which was trying to break through the American naval quarantine around Cuba, believed it was under deliberate attack by an American surface vessel as part of the beginning of a nuclear exchange. Fortunately, one officer insisted on surfacing first. In 1983, during the heightened Soviet-American tensions of the later Cold War, Soviet early-warning satellites seemed to detect a first launch by the United States; it was again one Soviet officer who judged that the data did not fit expectations of what an American attack would look like. Human judgement turned out to be correct in both cases (and in other known cases).

The deeply unsettling effect of nuclear weapons encapsulated in strategic stability has thus existed since the beginning of the Cold War and dominated what has been referred to as the first nuclear age, which was marked by the stand-off between the U.S. and Soviet superpowers.[4] During the first nuclear age, the other nuclear-armed states – the UK, France, China, and (undeclared) Israel – had very limited nuclear arsenals compared to those of the superpowers. Both aspects of strategic stability became less important during the second nuclear age, when the major concern was the proliferation of nuclear weapons. The fear focused particularly on the potential acquisition of nuclear weapons by so-called rogue states and non-state actors, especially in the wake of the 9/11 attacks and the subsequent war on terror, which culminated in the invasion of Iraq. For various reasons, the assumptions that underlay deterrence, namely that nuclear-armed states were rational or attempting to be rational, were thought not to apply to the apocalyptic ideologies of terrorist groups. The third nuclear age has made strategic stability relevant again: the growing number of nuclear-armed states – India, Pakistan, and North Korea – and the increasing Chinese arsenal are creating a situation of nuclear multipolarity with risks of overspill between regions, alongside various emerging disruptive technologies that include AI.

AI is particularly unsettling to strategic stability during a crisis, because many of its processes are opaque to the end user

AI and strategic stability

AI has the potential to deeply unsettle strategic stability, particularly if humans attach a great deal of confidence to its workings. The description above of AI's integration into the nuclear weapons architecture suggests that, with its improvements in finding targets and in the precision with which to destroy them, it could give a nuclear-armed state a first-strike advantage with which to destroy its adversary's nuclear arsenal as well as defend against it – or create the perception that it has such an advantage. By bringing together multiple sources of data, AI-assisted data analysis may even improve the ability to find the adversary's concealed delivery systems, such as mobile launchers or perhaps submarines. Automated missile defense could suggest the ability to absorb an initial nuclear attack. Perception matters greatly here, on both sides of a nuclear standoff. AI is particularly unsettling to strategic stability during a crisis because many of its processes are opaque to the end user, if not also to the designer. AI-assisted pattern analysis could read aggressive intentions into observed actions at a moment when the limited time horizon of a crisis does not allow for careful scrutiny of the AI's input data or of its process for analyzing them. As humans tend to believe in the ‘objectivity’ of machines, relying on the findings provided by AI could prove psychologically seductive and thus particularly dangerous.

However, there is another dimension to this: AI is only as good as the data it has at its disposal, and there is therefore a real incentive to poison the available data in order to fool it. One could think of this as analogous to the measure-countermeasure competition between radars and radar jammers in the electronic warfare domain. The objective of an adversary would be to create a false positive or a false negative. A false negative would mean tricking the adversary's AI-assisted data analysis into overlooking delivery systems, whether silos, mobile launchers, aircraft, or submarines, adding another layer of concealment. A false positive would be the reverse, namely tricking the adversary into believing that there are more warheads or delivery systems than there in fact are. After all, deterrence is about instilling the fear that the costs of aggression outweigh the benefits; nuclear deterrence is very much about ensuring that a state can still retaliate with nuclear weapons even after being attacked. Nuclear-armed powers could benefit from both approaches. Weaker states that are less confident in their second-strike capability could be interested in poisoning the data with false positives, triggering arms-race dynamics on the other side. False negatives could create unwarranted confidence on the side doing the poisoning, or signal preparation for a first strike, while also setting up sudden, nasty surprises for the other side.
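
To make the false-negative mechanism concrete, here is a minimal, purely hypothetical sketch in Python: a toy nearest-centroid ‘detector’ whose feature values, labels, and poisoning ratios are illustrative assumptions, not a description of any real system. It shows how injecting mislabeled samples into training data can cause a genuine launcher-like signature to be reported as background clutter.

    # Toy sketch (hypothetical, illustrative only): how injected, mislabeled training
    # data can flip a simple AI-assisted "detector" into producing a false negative.
    import random

    random.seed(1)

    def samples(n, label, base):
        # Two-feature "sensor signatures" clustered around a base value.
        return [([base + random.gauss(0, 0.2), base + random.gauss(0, 0.2)], label)
                for _ in range(n)]

    def centroid(data, label):
        pts = [x for x, y in data if y == label]
        return [sum(p[i] for p in pts) / len(pts) for i in range(2)]

    def classify(x, data):
        # Nearest-centroid rule: the crudest possible stand-in for "AI-assisted analysis".
        dist = lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        launcher_c, clutter_c = centroid(data, "launcher"), centroid(data, "clutter")
        return "launcher" if dist(launcher_c) < dist(clutter_c) else "clutter"

    # Clean training data: launcher signatures cluster near 1.0, background clutter near 0.0.
    clean = samples(200, "launcher", 1.0) + samples(200, "clutter", 0.0)

    # Poisoned training data: the adversary injects swapped-label samples,
    # dragging each learned centroid toward the other class.
    poisoned = clean + samples(300, "launcher", 0.0) + samples(300, "clutter", 1.0)

    observation = [1.0, 1.0]  # a genuine launcher-like signature seen during a crisis
    print("clean model   :", classify(observation, clean))     # expected: launcher
    print("poisoned model:", classify(observation, poisoned))  # expected: clutter (false negative)

The reverse injection, clutter-like samples labeled as launchers, would produce the false positive described above.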

Great powers might be cautious about applying these methods, as they have every incentive to prevent escalation. However, nihilistic or millenarian rogue states or non-state actors – the dominant fear during the second nuclear age – might be perfectly willing to use AI-assisted deep fakes or data poisoning to provoke escalation. Why go through all the risks to acquire nuclear weapons, when you could have superpowers destroy each other for you?

Why go through all the risks to acquire nuclear weapons, when you could have superpowers destroy each other for you?

What is to be done?

Doomsday is not here yet, but caution is needed. The good news is that the United States and the other nuclear-armed great powers are aware of these dangers. Statements on keeping the human in the loop for any decision to launch are welcome. However, the effort should go much further. While it is difficult, and thus unrealistic, to imagine states outright rejecting the benefits that AI can bring to the military – and thus the nuclear – enterprise, a better understanding of where and how AI is integrated into their own systems would ameliorate some of these risks. Efforts are already under way, whether between the United States and China, hosted by the UK together with other G7 states, or through the Dutch-Korean initiative. Discussions between great powers and middle powers would thus help improve the governance of these risks.

Footnotes

[1] Laurie A. Harris, “Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress,” n.d.

[2] James Johnson, AI and the Bomb: Nuclear Strategy and Risk in the Digital Age (Oxford University Press, 2023), 24–30.

[3] Paul Van Hooft and Davis Ellison, “Good Fear, Bad Fear: How European Defence Investments Could Be Leveraged to Restart Arms Control Negotiations with Russia” (The Hague, Netherlands: Hague Centre for Strategic Studies, 2023); Matthew Kroenig, The Logic of American Nuclear Strategy: Why Strategic Superiority Matters (Oxford University Press, 2018); Keir A. Lieber and Daryl G. Press, The Myth of the Nuclear Revolution: Power Politics in the Atomic Age (Cornell University Press, 2020); Brendan Rittenhouse Green, The Revolution That Failed: Nuclear Competition, Arms Control, and the Cold War (Cambridge University Press, 2020).

[4] Paul Bracken, The Second Nuclear Age: Strategy, Danger, and the New Power Politics (Macmillan, 2012); David A. Cooper, Arms Control for the Third Nuclear Age: Between Disarmament and Armageddon (Georgetown University Press, 2021).