Introduction
At the recent global summit on Responsible AI in the Military Domain (REAIM), India chose to abstain from signing a pledge aimed at regulating the use of artificial intelligence in warfare. Notably, participation in such voluntary commitments has declined compared with earlier summits, signalling a broader geopolitical hesitation to set limits on military AI. This reluctance reflects a deeper structural dilemma: while AI promises operational superiority, its integration into warfare raises profound ethical, legal, and strategic concerns. The world stands at a moment where technological acceleration is outpacing regulatory consensus.
Why Governing Military AI Is Exceptionally Difficult
AI presents unique governance challenges because of its dual-use character.
- Civilian applications include healthcare diagnostics, logistics optimisation, data analytics, and climate modelling.
- Military applications include surveillance systems, target identification, autonomous drones, and decision-support systems in combat.
The same underlying algorithms and datasets can serve both civilian and military objectives. This dual-use nature creates practical obstacles:
- It is difficult to track the direction of research and development, verify compliance with restrictions, and distinguish purely civilian AI from military-enhancing AI.
- AI development is often driven by private firms, academic institutions, and commercial innovation ecosystems, placing much of it beyond direct state oversight.
Moreover, states perceive AI as a decisive strategic advantage. In an era of great-power rivalry, no state wants to constrain its capabilities unilaterally.
Lethal Autonomous Weapons Systems (LAWS)
The most contentious issue within military AI governance is the development of Lethal Autonomous Weapons Systems (LAWS).
These are systems capable of:
- Selecting and engaging targets
- Operating with varying degrees of autonomy
- Potentially functioning without direct human intervention
Discussions under the United Nations Convention on Certain Conventional Weapons have failed to produce consensus.
The central problem lies in definitional ambiguity.
The Definitional Deadlock
There is no internationally agreed definition of:
- What qualifies as a lethal autonomous weapon.
- What degree of human control is sufficient to retain accountability.
This ambiguity fuels disagreement:
- Technologically advanced states often prefer broader, more flexible definitions and less restrictive regulatory language.
- States with fewer advanced capabilities tend to advocate clearer, stricter controls and stronger legal constraints.
Without definitional clarity, drafting legally binding agreements becomes nearly impossible. The result is paralysis rather than progress.
India’s Calculated Position
India’s abstention from signing the REAIM pledge reflects a strategic calculation rather than outright rejection of governance.
India’s concerns include:
- The absence of sufficient empirical data on military AI deployment.
- The need to preserve flexibility amid complex regional security dynamics.
- Avoiding premature legal constraints that could limit domestic AI research and innovation and constrain strategic autonomy.
India appears to be balancing two priorities:
- Publicly supporting the principle of “responsible use”.
- Ensuring its technological and security interests are not compromised.
This position mirrors the caution displayed by many mid-level and emerging technological powers.
Ethical and Strategic Risks
Despite geopolitical hesitation, the risks of unregulated military AI are significant.
- Delegating lethal decisions to machines raises accountability gaps, moral dilemmas about machine judgment, and questions of compliance with international humanitarian law.
- Reduced human oversight may lower thresholds for armed engagement and accelerate escalation in conflict scenarios.
- The absence of strong norms increases instability, the risk of miscalculation, and arms-race dynamics.
Autonomy in warfare challenges traditional concepts of responsibility and command.
Immediate Safeguards: A Pragmatic Approach
Given the current deadlock, binding treaties may be politically unrealistic in the near term. However, incremental safeguards are possible.
A non-binding framework could include:
- A clear political commitment to avoid AI-linked autonomous systems in nuclear command and control.
- Voluntary confidence-building measures among states.
- Data-sharing practices to reduce suspicion.
- Creation of a risk hierarchy categorising military AI use cases by level of autonomy and lethality.
- Agreement on maintaining meaningful human oversight in lethal decision-making.
Such steps would not eliminate risks but could prevent the most destabilising applications.
The Way Forward
The path ahead likely lies in a phased approach:
- Develop a principles-based global framework focused on accountability, transparency, human oversight, and compliance with humanitarian law.
- Encourage dialogue among scientists, military experts, policymakers, and civil society.
- Allow norms to mature gradually before moving toward legally binding instruments.
Premature rigidity could stall cooperation, but indefinite delay risks unchecked escalation.
Conclusion
Military AI sits at the intersection of innovation and existential risk. The declining enthusiasm for voluntary pledges and the definitional deadlock around LAWS reveal a world struggling to regulate a transformative technology amid strategic competition. India’s cautious stance reflects broader global uncertainty rather than isolation. Yet the urgency of guardrails cannot be ignored. Without at least minimal shared principles and confidence-building measures, the rapid militarisation of AI may destabilise the global security architecture. The challenge is to design governance that preserves strategic flexibility while ensuring that human judgment remains central in matters of life and death.
