In a move reflecting growing global concern over the role of artificial intelligence in warfare, Australia's Department of Defence has unveiled a new policy to govern the military's use of AI technologies. Released as AI use escalates in conflicts from the Middle East to Ukraine, the policy seeks to ensure that such tools are deployed responsibly and in line with international standards. The timing is pertinent: reports continue to highlight AI's role in accelerating targeting decisions that have contributed to rising civilian casualties in ongoing wars.
The policy arrives against a backdrop of heightened international scrutiny. In the Middle East, for instance, the United States has confirmed using AI to identify potential targets and accelerate decision-making, a practice experts have linked to increased civilian deaths. Similar applications have been reported in conflicts involving Iran, Lebanon, Gaza, and Ukraine, underscoring the urgent need for robust governance frameworks. Australia's new guidelines, detailed in a recent Department of Defence document, aim to address these challenges by establishing clear parameters for AI integration across military operations.
At its core, the policy sets three overarching requirements for the Defence Department's adoption of AI. First, all uses must comply with Australian law and international obligations, including international humanitarian law. Second, AI applications must be underpinned by individual accountability and due consideration of their impact on people; they must also be explainable, reliable, secure, and designed to mitigate unintended biases and harms. Third, risks must be managed through proportionate control measures, such as rigorous testing, training, and evaluation protocols.
This emphasis on proportionate controls stands out, according to analysts, because AI is not a singular technology but an enabler embedded in diverse military functions—from targeting and logistics to training and maintenance—each presenting unique risk profiles. The policy encompasses a broad spectrum of AI systems, ranging from simple chatbots to cutting-edge 'frontier' general-purpose models. It builds on the Australian government's broader Policy for the Responsible Use of AI in Government, which came into effect in September 2024 but explicitly excluded the defence portfolio and national intelligence community. The new defence-specific policy effectively fills this void.
Despite these foundational principles, the document leaves significant gaps in practical application. It identifies the Defence AI Centre, established in 2024, as the central governance hub for overseeing AI initiatives. However, it provides scant detail on how the Army, Navy, Air Force, or entities such as the Australian Strategic Capabilities Accelerator will implement these requirements. Testing and evaluation are highlighted as key safeguards, yet the policy offers no specifics on methodology, particularly for the unpredictable environments of military operations, where AI behaviors can prove unreliable.
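To make that gap concrete, the sketch below shows one form a minimal pre-deployment evaluation gate could take, checking an AI system's overall reliability and its accuracy gap across subgroups before approval. It is purely illustrative: the record format, the subgroup field, and the 95 percent accuracy and 5-point bias thresholds are all assumptions for the sake of the example, not anything specified in the Defence policy, and real military test-and-evaluation regimes would add robustness, security, and human-factors testing on top.

```python
# Hypothetical pre-deployment evaluation gate: a minimal sketch of the kind of
# testing-and-evaluation check the policy calls for but does not specify.
# All thresholds, field names, and the data format are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class EvalRecord:
    """One labelled test case: model output vs. ground truth, plus a
    subgroup tag (e.g. operating condition) used for the bias check."""
    predicted: str
    actual: str
    subgroup: str


def accuracy(records: list[EvalRecord]) -> float:
    """Fraction of cases where the model's prediction matched the label."""
    if not records:
        return 0.0
    return sum(r.predicted == r.actual for r in records) / len(records)


def max_subgroup_gap(records: list[EvalRecord]) -> float:
    """Largest accuracy gap between any two subgroups; a crude proxy for
    the 'unintended bias' the policy says must be mitigated."""
    by_group: dict[str, list[EvalRecord]] = {}
    for r in records:
        by_group.setdefault(r.subgroup, []).append(r)
    scores = [accuracy(group) for group in by_group.values()]
    return max(scores) - min(scores) if len(scores) > 1 else 0.0


def evaluation_gate(records: list[EvalRecord],
                    min_accuracy: float = 0.95,
                    max_bias_gap: float = 0.05) -> bool:
    """Approve only if the system clears both the reliability threshold
    and the subgroup-disparity threshold."""
    return (accuracy(records) >= min_accuracy
            and max_subgroup_gap(records) <= max_bias_gap)


if __name__ == "__main__":
    sample = [
        EvalRecord("vehicle", "vehicle", "daylight"),
        EvalRecord("vehicle", "building", "night"),
        EvalRecord("building", "building", "night"),
        EvalRecord("vehicle", "vehicle", "daylight"),
    ]
    print("accuracy:", accuracy(sample))            # 0.75
    print("max subgroup gap:", max_subgroup_gap(sample))  # 0.5
    print("gate passed:", evaluation_gate(sample))  # False
```

Even this toy gate illustrates the unanswered questions the article raises: who sets the thresholds, who runs the tests, and how the results are reported are exactly the implementation details the policy leaves open.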
Implementation challenges are further compounded by the absence of guidance on compliance monitoring, resourcing, or public reporting mechanisms. As one expert analysis notes, 'The policy says little about how the Army, Navy and Air Force – or other defence entities such as the Australian Strategic Capabilities Accelerator – will actually enact its requirements.' Whether additional directives will follow—and if they will be disclosed publicly—remains uncertain, leaving observers to question the pathway from policy to practice.
Australia's approach draws inspiration from its closest allies, particularly the United Kingdom and the United States, reflecting shared commitments among like-minded nations. The UK rolled out its Defence AI Strategy in 2022 and followed with the Dependable AI in Defence directive in 2024. Going further, the UK has appointed 'responsible AI' officers across Ministry of Defence components and issued a progress report in 2025, demonstrating a more structured rollout.
In the US, the Department of Defense adopted AI ethics principles in 2020 and supplemented them with a detailed implementation strategy in 2022. More recently, in January 2026, the administration announced its AI strategy for the department, now also styled the Department of War, which pivots toward prioritizing speed and lethality. The strategy mandates 'any lawful use' of AI, potentially diverging from stricter ethical boundaries, and calls for removing barriers to rapid deployment. Critics argue that what is legally permissible at speed may not always be ethically sound.
Australia's policy aligns with these allies on fundamentals: lawful use, human accountability, and proactive risk mitigation. A distinctive feature is its explicit reference to Article 36 of Additional Protocol I to the Geneva Conventions, which requires legal reviews of new weapons, including AI-enabled systems. As the policy states, this mandates 'legal reviews of AI in weapon systems', an explicit commitment that few national military AI policies have made, signaling Australia's intent to prioritize international humanitarian law.
Yet, differences persist. Unlike the US and UK frameworks, which include detailed implementation roadmaps, Australia's policy is described by observers as reading 'more like a statement of intent.' This variance raises questions about its potential impact on collaborative efforts, such as AUKUS Pillar II, the trilateral partnership between Australia, the UK, and the US focused on accelerating AI and autonomous technologies. It is unclear how these policy disparities might affect procurement, integration, or shared standards under the alliance.
The release of Australia's policy comes as international efforts to regulate military AI appear to be stalling. Multilateral talks on autonomous weapons systems, notably under the UN Convention on Certain Conventional Weapons, remain deadlocked, diminishing the prospects for binding global treaties. In this context, national frameworks like Australia's carry added weight, shaping not only domestic practice but also signaling to partners what counts as an acceptable norm. As conflicts worldwide demonstrate, governance is far from an academic exercise; real-world applications in places like Gaza and Ukraine highlight the stakes.
Experts emphasize that while the policy marks a positive development, its true value will hinge on follow-through. Robust implementation measures—encompassing clear testing protocols, accountability structures, and transparent reporting—will be essential to govern the development and deployment of military AI effectively. The Defence AI Centre's role could prove pivotal, but without expanded resources and detailed guidelines, the policy risks remaining aspirational.
Looking ahead, the policy's evolution will likely intersect with broader geopolitical dynamics. As AI's military applications proliferate, Australia's commitment to legal reviews under the Geneva Conventions could position it as a leader among allies, even if its implementation lags. For now, the document serves as a foundational step, but the real test lies in translating principles into actionable safeguards amid an era of rapid technological advancement and persistent global conflicts.
The broader implications extend to ethical debates surrounding AI in warfare. While the policy stresses mitigation of biases and harms, the challenges of ensuring explainability and reliability in high-stakes scenarios persist. International observers will watch closely to see if Australia's framework influences AUKUS collaborations or inspires other nations to adopt similar humanitarian-focused reviews.
In summary, Australia's new military AI policy arrives at a pivotal moment, offering a blueprint for responsible innovation while underscoring the difficulty of enforcement. As the world grapples with AI's double-edged role in modern conflicts, the coming years will reveal whether this policy matures into a model of effective governance or joins the ranks of well-intentioned but under-realized strategies.
