As artificial intelligence expands across security environments, autonomous systems are being adopted faster than the institutions designed to govern them. A widening gap between operational capability and institutional control is emerging as a central challenge. As states integrate these systems into military operations, existing frameworks such as legal review, procurement oversight, and doctrine are not keeping pace.
This gap is most pronounced in the Global South and other emerging security environments, where states rely on externally developed technologies they cannot fully audit or regulate. In these contexts, the expansion of capability without corresponding governance capacity creates structural vulnerabilities and increases the risk of miscalculation and unintended escalation. As a result, systems are deployed in conditions where oversight and control mechanisms remain incomplete.
Analysis of recent conflicts reveals a persistent misalignment where operational practices evolve in real time while oversight remains reactive. This article examines these dynamics and offers specific policy recommendations to address that governance lag. It argues that oversight must be treated as a continuous process embedded into the entire life cycle of a system.
To support long-term stability, states need to build a minimum level of governance capacity before expanding the use of autonomous systems. International efforts also need to move from general principles toward practical implementation. The stability of future security environments will depend on whether states can retain control as these systems evolve.
The Problem: A Governance Gap in Autonomous Systems
The integration of autonomous systems into military operations is typically seen as a technological shift. In practice, the challenge lies in how institutions respond. The pace of deployment is moving faster than political and institutional systems can adapt. This dynamic reflects a broader difficulty in managing military artificial intelligence (AI) under conditions of sustained technological acceleration and growing geopolitical tension.
Across NATO, the European Union, and Indo-Pacific actors, artificial intelligence is increasingly treated as a general-purpose capability with wide-ranging defence applications. NATO’s AI Strategy emphasises responsible use, interoperability, and human-centric approaches (NATO 2021). The European Union’s AI Act introduces a risk-based framework for high-impact systems, including those relevant to security (European Commission 2021). Japan’s 2022 defence white paper also highlights the growing operational role of AI-enabled capabilities (Japan MOD 2022). Together, these initiatives reflect broad recognition of the strategic importance of AI.
This recognition has not translated into coherent governance. Military organisations are adopting these systems through rapid experimentation, procurement cycles, and operational pressure. Institutional responses remain slower, shaped by legal review, political consensus, and coordination across agencies. Recent conflicts, including Russia’s war in Ukraine, show how this dynamic unfolds in practice. Drone systems have been deployed and adapted in real time, while formal doctrine and oversight mechanisms continue to lag behind operational use.
This gap reflects more than institutional friction. It points to a disconnect between awareness and action. Concerns about loss of control, misuse, and systemic instability are well established in policy and industry discussions (Goldstein 2026). However, these concerns have not produced proportional changes in governance structures. Responses remain fragmented and reactive, often introduced after systems are already in use.
Scholars have long warned about this dynamic. Horowitz (2010) shows that the diffusion of military innovation is shaped by institutional capacity, not just access to technology. Scharre (2018) highlights how increasing autonomy complicates human control, predictability, and escalation management. These patterns are now visible in practice, as states integrate systems faster than they can define how they should be governed.
The result is a persistent governance lag. Autonomous systems are being deployed in environments where rules are incomplete, inconsistently applied, or weakly enforced. This is no longer a theoretical concern. It is an operational condition, shaped by the widening gap between what systems can do and what institutions can realistically manage.
Why It Matters: Strategic and Security Implications
Governance lag has direct consequences for stability. Autonomous systems are entering operational environments without a shared understanding of their limits, which means they may be deployed where no actor fully understands how they will behave, increasing the probability of miscalculation through systemic opacity. The challenge is less about technical failure than about maintaining control under accelerated conditions.
Russia’s war in Ukraine is a primary example of this governance lag, with innovation occurring from the bottom up and bypassing traditional procurement cycles. The integration of battlefield management platforms such as Delta has enabled the fusion of satellite imagery, drone feeds, and sensor data into a single interface. This has effectively compressed the "kill chain" from minutes to seconds, forcing commanders to rely on algorithmic outputs under extreme pressure (Klysz 2025). As Scharre (2018) notes, this acceleration erodes the space for human deliberation, creating a "use it or lose it" dynamic in which tactical pauses for verification become liabilities.
Accountability is further obscured by the rapid adoption of dual-use commercial technologies. The deployment of software such as Clearview AI for identifying enemy personnel and managing intelligence at checkpoints demonstrates how tools are integrated into security environments long before ethical or legal frameworks are established (Rakhmetov and Murzagulova 2025). While the European Commission (2021) calls for transparency, the "black box" nature of AI makes it difficult to trace battlefield decisions back to a specific person.
These dynamics reinforce existing strategic asymmetries. Major powers are integrating AI into long-term defence strategies (State Council of the People’s Republic of China 2017), while many other states adopt these capabilities through rapid procurement without equivalent governance capacity. As UNIDIR (2024) highlights, this uneven development allows a limited number of actors to shape emerging norms, while others operate systems they cannot fully audit or control.
The Governance Gap in the Global South: Systemic Risks and Strategic Asymmetries
The gap between technological adoption and institutional oversight is most visible in the Global South. In many emerging security environments, the challenge lies in the speed at which systems are integrated. Autonomous capabilities are often acquired through procurement, foreign partnerships, or technology transfers, with limited domestic development.
This creates a pattern of dependency that goes beyond hardware. Access to software updates, data, and technical support is frequently controlled by external providers. As a result, states have limited visibility into how these systems operate. UNIDIR (2024) highlights how gaps in procurement oversight, legal review, and institutional coordination make this problem more acute. In practice, systems are often deployed before the structures needed to govern them are fully in place.
A form of governance asymmetry emerges under these conditions. Recent conflicts illustrate how it plays out. In Libya, Turkish-made Kargu-2 loitering munitions were reportedly used in autonomous targeting operations within a fragmented security environment that lacked clear oversight mechanisms (United Nations Security Council 2021). In the 2020 Nagorno-Karabakh conflict, Azerbaijan’s rapid use of Israeli and Turkish drone systems produced immediate battlefield advantages, while regulatory and diplomatic responses lagged behind (CSIS 2020). In both cases, operational use evolved faster than the frameworks designed to manage it.
Similar patterns are visible elsewhere. In Ethiopia’s Tigray conflict, the use of drone systems from multiple external suppliers created overlapping chains of responsibility. Without a unified institutional framework, attribution became difficult and accountability remained unclear (Human Rights Watch 2022). The challenge extends beyond how these systems are used to who is able to control and evaluate their use over time.
The gap between capability and control has direct strategic consequences. As autonomous systems spread, uncertainty increases. Escalation risks become harder to anticipate, and crisis management becomes more complex. These systems are embedded in broader networks that include data infrastructure, supply chains, and external providers. As Helbing (2013) explains, failures within interconnected systems can propagate across domains, amplifying instability in fragile environments.
Over time, governance itself becomes part of strategic competition. States that develop institutional capacity alongside technological adoption are better positioned to manage risk and maintain control. Others remain exposed to external influence and internal fragmentation. As Kenneth Payne (2021) notes, the risks associated with AI in warfare are shaped as much by institutional and political conditions as by the technology itself.
Technological change is moving faster than the institutions meant to govern it. Where governance remains secondary to procurement, strategic control weakens and risks become harder to contain over time.
Policy Recommendations
Governance should be treated as a continuous process spanning the full life cycle of autonomous systems. Political and legal oversight needs to be embedded into procurement and early deployment, so that accountability mechanisms are in place before these systems reach operational contexts.
For states in emerging security environments, sequencing is critical. Governance capacity should come before the expansion of autonomous capabilities. This includes the ability to conduct legal reviews, coordinate across institutions, and maintain meaningful human oversight in practice. Without these foundations, rapid adoption can introduce risks that are difficult to manage over time.
At the international level, the focus should move toward implementation. While the REAIM Summits (2023, 2024) have contributed to building political awareness, the next step is to translate these commitments into practical tools. Developing model governance frameworks, legal review templates, and targeted capacity building programmes would help support states with more limited institutional resources and strengthen the link between principles and practice.
Autonomous systems should also be understood as part of a broader governance challenge. Effective oversight depends on coordination across defence policy, data governance, cybersecurity, and infrastructure resilience. Treating these systems in isolation risks reinforcing fragmented oversight and limiting the ability to sustain control over time.
Conclusion
Artificial intelligence and autonomous systems are reshaping military operations and placing increasing pressure on existing governance frameworks. The pace of technological change is testing the ability of institutions to maintain effective oversight and control. As these systems spread, the gap between deployment and governance is becoming an increasingly important factor in the stability of future security environments.

This gap is not evenly distributed. It is most pronounced in contexts where states rely on externally developed technologies and operate systems they cannot fully audit or understand. When capabilities expand faster than the institutions designed to manage them, risks become more difficult to anticipate and contain. Over time, this creates structural vulnerabilities that extend beyond the systems themselves.

Addressing this challenge requires treating governance as a central component of security policy. Aligning oversight with the pace of technological adoption will be essential to maintaining control and managing risk as these systems continue to evolve. In the end, the stability of future security environments will depend on whether we can match how fast our systems can think with how effectively we can still lead them.
References
Center for Strategic and International Studies (CSIS), 2020. The Air and Missile War in Nagorno-Karabakh: Lessons for the Future of Strike and Defense. [online]. Washington, D.C.: CSIS. Available from: https://www.csis.org/analysis/air-and-missile-war-nagorno-karabakh-lessons-future-strike-and-defense [Accessed 13 April 2026].
Center for Security and Emerging Technology (CSET), 2020. Responsible and Ethical Military Artificial Intelligence: Perspectives and Practices. [online]. Washington, D.C.: Georgetown University. Available from: https://cset.georgetown.edu/publication/responsible-and-ethical-military-artificial-intelligence/ [Accessed 13 April 2026].
European Commission, 2021. Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). [online]. Brussels: European Commission. Available from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206 [Accessed 13 April 2026].
Goldstein, G., 2026. Artificial Intelligence Is Facing a Crisis of Control—and the Industry Knows It. [online]. Council on Foreign Relations. Available from: https://www.cfr.org/articles/artificial-intelligence-is-facing-a-crisis-of-control-and-the-industry-knows-it [Accessed 13 April 2026].
Helbing, D., 2013. Globally networked risks and how to respond. Nature, 497(7447), pp. 51–59. Available from: https://www.nature.com/articles/nature12047 [Accessed 13 April 2026].
Horowitz, M. C., 2010. The Diffusion of Military Power: Causes and Consequences for International Politics. Princeton: Princeton University Press.
Human Rights Watch, 2022. Ethiopia: Drone Strikes Kill, Displace Civilians. [online]. New York: Human Rights Watch. Available from: https://www.hrw.org/news/2022/03/24/ethiopia-drone-strikes-kill-displace-civilians [Accessed 13 April 2026].
Japan Ministry of Defense, 2022. Defense of Japan 2022. [online]. Tokyo: Ministry of Defense. Available from: https://www.mod.go.jp/en/publ/w_paper/wp2022/DOJ2022_EN_Full.pdf [Accessed 13 April 2026].
Klysz, J., 2025. Russia’s War in Ukraine: Drone-Centric Warfare. [online]. Tallinn: International Centre for Defence and Security (ICDS). Available from: https://icds.ee/en/russias-war-in-ukraine-drone-centric-warfare/ [Accessed 13 April 2026].
NATO, 2021. Artificial Intelligence Strategy. [online]. Brussels: North Atlantic Treaty Organization. Available from: https://www.nato.int/cps/en/natohq/official_texts_187617.htm [Accessed 13 April 2026].
Payne, K., 2021. I, Warbot: The Dawn of Artificially Intelligent Conflict. London: Hurst & Company.
Rakhmetov, B. and Murzagulova, K., 2025. Artificial Intelligence in warfare: The case of the Russia–Ukraine war. Journal of Strategic Security, 18(4), pp. 64–77. Available from: https://digitalcommons.usf.edu/jss/vol18/iss4/5/ [Accessed 13 April 2026].
Scharre, P., 2018. Army of None: Autonomous Weapons and the Future of War. New York: W.W. Norton & Company.
State Council of the People’s Republic of China, 2017. New Generation Artificial Intelligence Development Plan. [online]. Beijing: State Council. Available from: http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm [Accessed 13 April 2026].
United Nations Institute for Disarmament Research (UNIDIR), 2024. Governance of Artificial Intelligence in the Military Domain: A Multi-Stakeholder Perspective. [online]. Geneva: UNIDIR. Available from: https://unidir.org/publication/governance-of-artificial-intelligence-in-the-military-domain [Accessed 13 April 2026].
United Nations Security Council, 2021. Final report of the Panel of Experts on Libya established pursuant to resolution 1973 (2011). S/2021/229. [online]. New York: United Nations. Available from: https://undocs.org/S/2021/229 [Accessed 13 April 2026].
About the Author
Angel Ortega Gonzalez is a Panamanian lawyer and policy analyst specialising in security governance, critical infrastructure, and emerging technologies. He holds a Master of Public Administration (MPA) from Cornell University as a Fulbright Scholar and has served as a policy and legal advisor at the National Assembly and the Ministry of Public Security of Panama. His work focuses on systemic resilience and governance challenges in complex security environments.
Contact: angelortega.abogado@gmail.com | LinkedIn: linkedin.com/in/angelortegag


