Who decides the limits of innovation? Dual-use AI raises ethical red flags

Artificial intelligence tools designed to support vulnerable populations could also be turned into instruments of surveillance, coercion, or military command, according to new research published in Frontiers in Artificial Intelligence. The study, titled "Ethical Prompting: Toward Strategies for Rapid and Inclusive Assistance in Dual-Use AI Systems," warns that the dual-use nature of assistive technologies demands urgent ethical safeguards.
The authors argue that prompting must be reimagined not only as a technical input mechanism but also as an ethical framework to prevent harmful reappropriation.
How do dual-use risks emerge in assistive AI?
The study examines the inherent dual-use dilemma of AI technologies. Systems developed for inclusion, accessibility, and support in civilian life often share features that make them attractive for military or restrictive applications. For example, auditory, haptic, text-based, and visual alerts that assist people with disabilities in navigating environments can be repurposed to deliver real-time instructions in combat or surveillance operations.
The authors describe how this transition happens through deployment bias. This form of bias does not mean a system malfunctions; rather, it performs exactly as designed but in a context radically different from its intended purpose. Biometric data used to improve accessibility, for instance, could later be exploited for discriminatory practices or to enable precise targeting in conflict zones.
The researchers stress that the same features that make assistive tools powerful (real-time feedback, sensor integration, and adaptability) also make them highly transferable to defense and coercive environments. This, they argue, calls for a proactive ethical response before these systems become embedded in both care and control infrastructures.
What is ethical prompting and why does it matter?
To counter these risks, the paper proposes ethical prompting: a design and governance strategy that integrates safeguards directly into the way AI systems are prompted, trained, and deployed. Prompting, typically understood as the input method for AI systems, is reframed here as a moral checkpoint where designers, developers, and policymakers can embed protections.
Ethical prompting strategies would include structured alerts, transparency protocols, and design choices that restrict or flag potential misuse. Instead of being a neutral input-output mechanism, prompting would actively shape the boundaries of how technology is applied across sectors. For instance, a system designed to assist a person with impaired mobility could include built-in safeguards that prevent its adaptation for hostile tracking or coercive monitoring.
The authors argue that embedding ethics at the level of prompting is vital because it is at this stage that AI systems interface most directly with human users. Unlike downstream regulation, which often arrives too late, ethical prompting ensures that protections are part of the system’s foundation.
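The paper does not specify how such a safeguard would be implemented, but a minimal sketch can make the idea concrete. The Python example below is purely illustrative: the rule names, regex patterns, and policy actions are invented for demonstration and do not come from the study. It screens each prompt before it reaches an assistive model, allowing routine requests, flagging ambiguous ones for transparency review, and blocking phrasings that suggest hostile tracking or targeting.

```python
# Hypothetical sketch of "ethical prompting" as a pre-processing guard.
# Rule names, patterns, and policy actions are assumptions for illustration,
# not an implementation described in the paper.

from dataclasses import dataclass
from enum import Enum
import re


class Action(Enum):
    ALLOW = "allow"
    FLAG = "flag"    # allow, but record for transparency review
    BLOCK = "block"  # refuse and raise a structured alert


@dataclass
class PromptDecision:
    action: Action
    matched_rules: list[str]


# Illustrative dual-use patterns: assistive phrasing that could signal
# repurposing for covert tracking, targeting, or coercive monitoring.
RISK_RULES = {
    "covert_tracking": re.compile(
        r"\b(track|follow|locate)\b.*\bwithout (their )?(consent|knowledge)\b", re.I),
    "targeting": re.compile(
        r"\b(target|engage)\b.*\b(person|individual|vehicle)\b", re.I),
    "coercive_monitoring": re.compile(
        r"\bmonitor\b.*\b(detainee|prisoner|dissident)\b", re.I),
}


def evaluate_prompt(prompt: str) -> PromptDecision:
    """Screen a user prompt before it reaches the assistive model."""
    matches = [name for name, pattern in RISK_RULES.items() if pattern.search(prompt)]
    if not matches:
        return PromptDecision(Action.ALLOW, [])
    # Targeting-style matches are blocked outright; other matches are flagged
    # so a transparency log can be reviewed by a human overseer.
    if "targeting" in matches:
        return PromptDecision(Action.BLOCK, matches)
    return PromptDecision(Action.FLAG, matches)


if __name__ == "__main__":
    for text in [
        "Give me turn-by-turn haptic directions to the pharmacy.",
        "Track this person without their knowledge using the navigation aid.",
    ]:
        decision = evaluate_prompt(text)
        print(f"{decision.action.value:5s}  rules={decision.matched_rules}  prompt={text!r}")
```

In a real system the pattern matching would likely be replaced by a learned classifier and backed by audit logging, but the structure is the same: the check sits at the prompting interface itself, where, as the authors argue, the system meets its human users, rather than in downstream regulation.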
What governance and design changes are needed?
The study also highlights the broader implications for AI governance. It calls for frameworks that balance rapid assistance with inclusivity and security, ensuring that dual-use systems protect rather than endanger vulnerable populations. Policymakers, researchers, and industry leaders are urged to move beyond viewing prompting as a purely technical task and instead treat it as a mechanism for ethical control.
The authors emphasize three priorities. First, inclusive design: systems should meet the needs of people with disabilities without creating backdoors for misuse. Second, transparent governance: the potential dual-use risks of each system should be openly acknowledged and monitored. Third, proactive safeguards, including cross-sector cooperation to prevent technologies intended for care from being redirected to defense or coercion.
These measures, the researchers argue, will be critical as AI systems become more embedded in daily life and more attractive for repurposing in security contexts. Without them, the benefits of assistive technologies could be overshadowed by their exploitation in conflict or authoritarian settings.
FIRST PUBLISHED IN: Devdiscourse