
JUST PUBLISHED: History reloaded as AI puts ethics in the firing line

In every era, the arrival of new military technologies has forced states to revisit a recurring set of moral and strategic questions. Commanders and legislators have always asked whether a tool is lawful and whether its use is wise. The introduction of the stirrup, for example, altered mounted warfare during the early Middle Ages because it allowed riders to stabilise themselves during impact and transformed cavalry into a decisive force. The spread of gunpowder changed the conduct of sieges, the formation of states and the hierarchy of military power across Europe and Asia. Mechanised armour and motorised infantry forced governments of the 1930s to rethink the role of cavalry and to reorganise logistics and industrial mobilisation. Radar, early computers and nuclear command-and-control systems created new decision-making environments during the Second World War and the Cold War.

At each historical juncture, the ethical debate entered later, after demonstrations of battlefield potential and after the strategic consequences became clear. Societies that abstained from new capabilities frequently found that other powers pressed ahead, gained initiative and reshaped the terms on which conflict took place. Historians record many examples where abstention placed one side at a political or military disadvantage.

The interwar period demonstrates this pattern with particular clarity. The United States adopted an isolationist foreign policy during the 1920s and 1930s and withdrew from active European security involvement. Britain and France reduced military spending and participated in efforts to constrain naval and air power through treaty frameworks. Public opinion encouraged a view that restraint and diplomatic instruments would avert another crisis like that of 1914. Meanwhile, authoritarian regimes in Germany, Italy and Japan expanded armaments production, revised operational doctrine and integrated new technologies into their armed forces. The result was a pronounced imbalance between intentions in democratic societies and capabilities in revisionist states. When war returned in 1939, democracies possessed legal and ethical arguments in favour of peace, while their adversaries had spent a decade preparing for industrialised conflict. Abstention from armaments did not deliver stability. It lowered the cost of aggression for those prepared to use force and it raised the eventual price of resistance. That historical lesson has shaped strategic thinking ever since, because it illustrated the consequences of allowing a moral instinct to restrain one actor while leaving others unconstrained.

Democracies now face a modern version of that dilemma in relation to artificial intelligence. The pace of development in AI-enabled military systems has accelerated during the past two decades. Since the Wing Loong-1 combat UAV entered service in 2009, for instance, China has overseen a steady expansion in autonomous and semi-autonomous aerial and maritime platforms. Beijing has also signalled its intent to distribute AI throughout the People’s Liberation Army, including command automation, ISR (intelligence, surveillance and reconnaissance) fusion, targeting, electronic warfare and logistics. Russia, meanwhile, has articulated the stakes in its usual direct terms. In September 2017, during a televised address at the start of the Russian school year, President Vladimir Putin stated: “Artificial intelligence is the future, not only for Russia but for all humankind. It comes with colossal opportunities but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” The closing sentence reveals a geopolitical perspective in which AI superiority supports global leadership. Many other governments have drawn similar conclusions and have invested accordingly.

An initial attempt at establishing shared norms took place in 2024, when 90 governments attended the Responsible AI in the Military Domain summit in Seoul. Some 60 states endorsed a blueprint intended to guide responsible use of AI on the battlefield. Roughly 30 of those attending declined to adopt the document, including China, despite having sent official representation. This pattern reflects earlier attempts at arms regulation, in which democratic states pursued legal constraint and accountability while authoritarian states sought freedom of manoeuvre. States that renounce AI capabilities would face adversaries that do not, and there is no mechanism to guarantee reciprocity from actors who reject humanitarian norms. The presence of non-state groups with access to improvised weapons systems and digital tools introduces further complexity because such groups do not participate in conventional arms control arrangements. Ethical questions therefore sit inside a larger strategic framework in which abstention carries its own costs and risks.

A structured way to examine the ethical dimension of this transition exists in the form of the Just War tradition. From Cicero in the late Roman Republic through Augustine during the late antique period and Aquinas during the medieval period, the tradition developed methods for reconciling moral duties with the practical reality of armed conflict. During the 20th century, these ideas informed the laws of armed conflict and the codification of humanitarian law. Three principles continue to guide contemporary analysis. Jus ad Bellum concerns the justice of going to war; Jus in Bello concerns the justice of conduct during war; and Jus post Bellum concerns justice after the cessation of hostilities.

Under Jus ad Bellum, AI-enabled sensor fusion and pattern recognition can improve intelligence and target discrimination at the decision-making stage. These capabilities can help commanders assess proportionality and necessity before engaging in force. Improved intelligence can make less destructive options available, reduce recourse to escalation and support efforts to contain conflict geographically. Under Jus in Bello, the International Committee of the Red Cross (ICRC) has observed that machine learning decision-support systems can enable human decision-makers to comply with international humanitarian law by accelerating the analysis of information relevant to distinction and proportionality. Distinguishing civilians from combatants, identifying protected objects and correlating information across multiple sources become easier when data volumes increase and when analytical tools can process them rapidly. The ICRC has already applied AI tools in operational contexts for tasks such as needs assessment in humanitarian emergencies, the organisation of logistics, medical triage and mine detection. These tools reduce harm and improve the distribution of aid, provided meaningful human control is retained. Under Jus post Bellum, AI assists with documentation, evidence collection, damage mapping and the prioritisation of reconstruction. These functions support accountability, reparations and the orderly transition to peace. Justice after conflict depends on information, and AI increases both the volume of relevant information and the speed at which it becomes available.

Public discussion has started to grapple with these questions. In December, the York University debating forum, the York Dialectic, considered the motion that the use of AI in warfare is ethically justified. The debate, in which I took part, reflected a wider societal conversation about the moral implications of delegating functions once performed by humans to systems that operate at machine speed. Participants applied the Just War framework to examine whether disadvantages associated with AI, such as unintended escalation, hallucination, opaque accountability or automation bias, can be mitigated through doctrine, law and design. The exchange revealed a growing recognition that AI is already embedded in many civilian and humanitarian systems and that military exclusion would require deliberate policy decisions rather than passive inaction.

A persistent concern relates to the role of the human. Dwight D. Eisenhower once said that weapons are tools and that their significance derives from the purpose for which they are used. Moral agency therefore resides with the user rather than the object. Critics of military AI frequently argue that machines cannot replicate the judgment and intuition displayed by individuals such as Stanislav Petrov, the Soviet officer who in 1983 recognised that an early-warning alert indicating an incoming nuclear attack was false. Petrov’s choice prevented a destructive reaction during a moment of extreme tension. The incident is presented as evidence that human intuition forms an essential safety mechanism. Yet it also shows that early nuclear command-and-control systems had already delegated elements of inference to machines. Human decision-making, moreover, contains its own gaps and weaknesses. Fatigue, insufficient training, political interference and bias have produced many disastrous outcomes in twentieth- and twenty-first-century conflicts. AI can reduce some categories of human error in the same way radar once reduced misidentification and computerised fire-control once reduced indiscriminate gunnery.

The central question concerns the placement of the human within the system. Military doctrine now distinguishes between three models. Under a human in the loop model, humans approve or deny lethal or targeting actions. This model applies to scenarios that require the highest degree of ethical caution and legal certainty. Under a human on the loop model, autonomous systems perform tasks, but humans supervise and can intervene. This model applies to wide-area surveillance, cyber defence, electronic warfare and other roles that benefit from rapid machine processing. Under a human out of the loop model, AI handles routine logistics and data processing, which reduces fatigue and frees personnel for tasks that demand creativity and judgment, such as diplomacy, training and the maintenance of morale. The area that generates controversy concerns fully autonomous battlefield action. The Lieber Institute recently recommended the development of “command re-entry pathways” to ensure that commanders can reassert authority over lethal autonomous systems. The purpose of this approach is to maintain hierarchical responsibility and legal accountability. This proposal reflects centuries of doctrine in which responsibility for the use of force belongs to human commanders rather than to tools or machines.

The broader historical lesson points to a recurring pattern: democratic societies cannot rely on moral restraint alone when other actors seek advantage through technological innovation. Putin’s statement in 2017 outlined the strategic objective that flows from AI superiority. The most ethically responsible response for democratic states is to ensure that AI development conforms to international law, preserves human accountability and upholds humanitarian standards. In other words, technology should reflect humanity, and humanity should set the rules. The alternative would be a world in which societies that value human rights and legal restraint fall behind societies that do not. That outcome would create a geopolitical environment shaped by coercion rather than law, with consequences that would extend far beyond the battlefield.


