Hidden risks in AI’s dependence on battlefield data

A new study published in AI & Society warns that the growing reliance on battlefield data for artificial intelligence development poses profound ethical, legal, and strategic risks. The paper highlights the dangers of over-reliance on private contractors, the erosion of data sovereignty, and the lack of robust governance models in Western defence frameworks.
The research, titled Ethical, legal, and social challenges of data economy in defence: the case of battlefield data, examines how data collected on the battlefield are increasingly treated as a strategic asset underpinning AI-enabled defence capabilities, and calls for defence organisations to rethink how such data are governed, shared, and protected.
Why battlefield data matters in the digital era
The study argues that battlefield data has become the foundation of modern defence strategies. With the rapid integration of AI systems into surveillance, logistics, and combat decision-making, access to accurate, high-quality data can determine operational advantage. Defence policies in NATO and Five Eyes nations have begun to recognise this, with the United States leading in leveraging battlefield data for AI research and development.
Yet the analysis reveals that while the U.S. has developed partial measures to exploit battlefield data for innovation, their effectiveness is hampered by legal constraints surrounding intellectual property and data access rights. The United Kingdom has introduced frameworks but has not built a fully coherent system, while Germany offers a more focused approach through projects such as the Future Combat Air System. Other allies emphasise real-time interoperability but neglect the long-term reuse of data for AI advancement.
The paper also reviews NATO’s efforts to create a federated data-sharing ecosystem, which could serve as a model for cross-border collaboration. However, the authors stress that the alliance still lacks a clear direction on how shared data can reliably support AI development without compromising sovereignty or ethics.
What challenges threaten responsible data use
The authors identify several pressing challenges. First is privatisation and capability loss. With private contractors often responsible for collecting, storing, and analysing battlefield data, defence organisations risk vendor lock-in, accountability gaps, and the erosion of critical public-sector skills. The study warns that outsourcing too much to industry weakens national control over strategic resources.
Second is the erosion of data sovereignty. Global technology platforms and dual-use companies can influence or constrain national defence decisions by shaping how data are stored, transferred, or accessed. This undermines sovereign control over sensitive operational information and creates vulnerabilities in coalition contexts.
Third are conflicts over intellectual property and innovation. Defence organisations increasingly require access to privately generated data, but these demands often clash with trade secrets and IP rights. Without clear rules and incentives, companies may hesitate to share valuable datasets, slowing innovation and creating legal disputes.
Fourth, the authors highlight privacy and human rights risks. The sheer volume and sensitivity of personal and operational battlefield data make excessive collection a potential hazard: unless collection is necessary, proportionate, and well regulated, large-scale data harvesting can violate international norms and erode public trust.
Finally, fragmented archives and inconsistent standards present barriers to interoperability. Disparate systems, classification protocols, and storage formats degrade the quality and traceability of data, making it difficult to build reliable AI systems that depend on high-quality training sets.
How governance can rebalance innovation and ethics
To address these risks, the authors propose a new governance model that treats battlefield data as an artificial club good. In this model, data are non-rival but selectively accessible to legitimate members under strict rules. Defence organisations, rather than private contractors, act as club managers, setting membership conditions, regulating access, and ensuring accountability.
This approach, the study argues, would align with obligations under just war principles and ethical frameworks for defence AI, such as ensuring robustness, predictability, and control. It would also reduce dependency on private industry while preserving incentives for innovation.
The governance model includes procedural recommendations for defence agencies; a minimal sketch of how these membership tiers and safeguards might be encoded follows the list:
- Define membership criteria for allies, agencies, and vetted contractors.
- Differentiate between tactical and strategic data, applying tailored safeguards to each.
- Introduce common standards for storage, metadata, security, retention, and disposal.
- Establish oversight bodies to monitor access, audit compliance, and penalise misuse.
- Create mechanisms to resolve disputes over IP, access, and vendor lock-in.
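To make the club-good arrangement concrete, here is a minimal Python sketch of how membership tiers, tactical-versus-strategic safeguards, and auditable access requests might be expressed. Every name here (MemberTier, DataClass, ACCESS_RULES, and the specific rules) is an illustrative assumption, not anything specified in the paper.

```python
# Hypothetical sketch of "club good" access control for battlefield data.
# Tier names, data classes, and the rules themselves are assumptions.
from enum import Enum, auto
from dataclasses import dataclass, field
from datetime import datetime, timezone


class MemberTier(Enum):
    DEFENCE_AGENCY = auto()      # club manager: sets and enforces the rules
    ALLIED_FORCE = auto()        # coalition partner under sharing agreements
    VETTED_CONTRACTOR = auto()   # private industry, narrowest access


class DataClass(Enum):
    TACTICAL = auto()    # short-lived, operational (e.g. sensor feeds)
    STRATEGIC = auto()   # long-lived, aggregated (e.g. AI training corpora)


# Tactical data is shared for real-time interoperability; strategic data
# stays under the club manager's control for long-term AI reuse.
ACCESS_RULES = {
    DataClass.TACTICAL: {MemberTier.DEFENCE_AGENCY, MemberTier.ALLIED_FORCE,
                         MemberTier.VETTED_CONTRACTOR},
    DataClass.STRATEGIC: {MemberTier.DEFENCE_AGENCY},
}


@dataclass
class AuditLog:
    """Append-only record so an oversight body can audit every request."""
    entries: list = field(default_factory=list)

    def record(self, member: str, tier: MemberTier,
               data_class: DataClass, granted: bool) -> None:
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "member": member, "tier": tier.name,
            "data_class": data_class.name, "granted": granted,
        })


def request_access(member: str, tier: MemberTier,
                   data_class: DataClass, log: AuditLog) -> bool:
    """Grant access only if the member's tier is permitted; log either way."""
    granted = tier in ACCESS_RULES[data_class]
    log.record(member, tier, data_class, granted)
    return granted


log = AuditLog()
assert request_access("coalition-hq", MemberTier.ALLIED_FORCE,
                      DataClass.TACTICAL, log) is True
assert request_access("vendor-x", MemberTier.VETTED_CONTRACTOR,
                      DataClass.STRATEGIC, log) is False
```

The point of the sketch is structural: access decisions are made by the defence agency's rules rather than by whoever holds the data, and every request, granted or denied, leaves an auditable trace for the oversight body.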
On the technical side, the authors recommend shifting from centralised data lakes to federated data management and data mesh architectures. This would allow individual domains to curate and manage data “as a product” while maintaining common standards and traceability. Edge computing and active metadata management would improve fidelity, security, and compliance, while preventing practices such as “harvest now, analyse later” that can compromise rights and oversight.
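As an illustration of the data-mesh idea, the following is a minimal Python sketch of a domain-owned “data product” whose active metadata (ownership, classification, lineage, retention) travels with the records. The schema, field names, and example values are assumptions made for illustration; the paper does not prescribe a concrete format.

```python
# Hypothetical sketch of a data-mesh "data product" for defence data.
# Field names and the metadata schema are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date


@dataclass(frozen=True)
class DataProductMetadata:
    """Active metadata that travels with the data, enabling traceability."""
    owner_domain: str          # e.g. "surveillance", "logistics"
    classification: str        # security classification marking
    retention_until: date      # disposal date, per common standards
    lineage: tuple             # upstream sources this product derives from
    schema_version: str


@dataclass
class DataProduct:
    """A domain-owned dataset published 'as a product' in a federated mesh.

    The owning domain curates it locally (e.g. at the edge, close to the
    sensor) instead of dumping raw records into a central data lake.
    """
    name: str
    metadata: DataProductMetadata
    records: list = field(default_factory=list)

    def is_expired(self, today: date) -> bool:
        # Retention and disposal are enforced by the product itself, which
        # works against "harvest now, analyse later" accumulation.
        return today > self.metadata.retention_until


# Illustrative usage with made-up names and dates.
product = DataProduct(
    name="logistics.convoy-telemetry.v1",
    metadata=DataProductMetadata(
        owner_domain="logistics",
        classification="RESTRICTED",   # illustrative marking
        retention_until=date(2026, 12, 31),
        lineage=("sensor-feed-17", "fusion-node-3"),
        schema_version="1.0",
    ),
)
assert not product.is_expired(date(2026, 1, 1))
```

The design choice the authors favour is visible here: because each product carries its own provenance and retention rules, a consumer (human or AI pipeline) can verify where data came from and whether it may still be used, without relying on a central custodian.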
First published in: Devdiscourse