Rising data privacy and security risks in quantum AI systems

CO-EDP, VisionRI | Updated: 29-09-2025 09:33 IST | Created: 29-09-2025 09:33 IST

Researchers from the University of Technology Sydney have delivered a comprehensive analysis of how the convergence of artificial intelligence (AI) and quantum computing (QC) poses urgent new risks to data privacy and security. Their findings underscore the need for regulators, developers, and policymakers to prepare for vulnerabilities that could erode trust in emerging technologies.

The study, titled “A Taxonomy of Data Risks in AI and Quantum Computing (QAI): A Systematic Review,” systematically examines the landscape of data-related vulnerabilities that arise when AI and QC technologies intersect. Published with the support of the Australian Government’s Research Training Program Scholarship, the research offers both an academic framework and practical tools to address a growing challenge: safeguarding personal and organizational data in an era when traditional protection methods may no longer be enough.

Overlooked risks in the AI–quantum convergence

Quantum Artificial Intelligence (QAI), which integrates the problem-solving capabilities of AI with the computational power of QC, has long been touted as a transformative force for sectors such as healthcare, finance, cybersecurity, and defense. Yet, according to Billiris, Gill, and Bandara, the very combination of these technologies multiplies the vulnerabilities associated with each of them.

AI systems typically rely on large datasets for prediction and decision-making, raising privacy concerns due to the sensitivity of the information they process. Meanwhile, quantum computing poses a direct threat to current data security protocols: Shor's algorithm can break widely deployed public-key schemes such as RSA and elliptic-curve cryptography, while Grover's search weakens symmetric ciphers such as AES by roughly halving their effective key length. The research warns that the interaction between these technologies could amplify weaknesses, creating a compounded risk landscape that existing policies and technologies are ill-equipped to address.
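
To make the scale of that threat concrete, the sketch below encodes the standard rule of thumb from quantum cryptanalysis: Shor's algorithm breaks RSA and elliptic-curve schemes outright, while Grover's search roughly halves the effective key length of symmetric ciphers. This is a common heuristic for reasoning about post-quantum security margins, not a calculation from the study itself.

```python
# Rule-of-thumb post-quantum security arithmetic (illustrative only):
# Shor's algorithm breaks RSA/ECC in polynomial time, while Grover's
# search gives a quadratic speedup against symmetric brute force,
# roughly halving the effective key length of ciphers such as AES.

def quantum_security_bits(cipher: str, key_bits: int) -> int:
    """Heuristic effective security level against a quantum adversary."""
    if cipher in ("RSA", "ECC"):
        return 0              # public-key schemes fall to Shor
    return key_bits // 2      # symmetric ciphers weakened by Grover

for cipher, bits in [("RSA", 2048), ("ECC", 256), ("AES", 128), ("AES", 256)]:
    print(f"{cipher}-{bits}: effective ~{quantum_security_bits(cipher, bits)} "
          "bits against a quantum adversary")
```

On this reading, AES-256 retains a comfortable margin while RSA does not at any key size, which is why migration efforts concentrate on public-key infrastructure first.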

The systematic review covered 67 studies published between 2020 and 2025, drawing from reputable sources including IEEE Xplore, ACM Digital Library, ScienceDirect, NIST, ISACA, Gartner, and ResearchGate. The analysis identified 22 key data risks, which were classified into five major categories: governance and risk management, privacy and security risk assessment, privacy and security controls implementation, user-centric privacy considerations, and continuous monitoring and improvement. Strikingly, 82 percent of these risks were related to privacy, while only 18 percent were directly tied to security gaps.
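
The paper's five category names are reproduced below in a small, hypothetical data model; the individual risk entries and their privacy/security labels are placeholders we supply to illustrate how the reported 82/18 split (roughly 18 privacy-related risks versus 4 security-related ones out of 22) could be tallied from such a register.

```python
# Hypothetical encoding of the study's five-domain taxonomy. The
# category names come from the paper; the sample risks and their
# privacy/security labels are illustrative placeholders.
from collections import Counter
from dataclasses import dataclass

CATEGORIES = (
    "governance and risk management",
    "privacy and security risk assessment",
    "privacy and security controls implementation",
    "user-centric privacy considerations",
    "continuous monitoring and improvement",
)

@dataclass(frozen=True)
class DataRisk:
    name: str
    category: str  # one of CATEGORIES
    kind: str      # "privacy" or "security"

# Two sample entries; the full register would hold all 22 risks.
register = [
    DataRisk("outdated privacy policies", CATEGORIES[0], "privacy"),
    DataRisk("misconfigured access controls", CATEGORIES[2], "security"),
]

print(Counter(r.kind for r in register))
```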

Despite the growing attention to AI and QC individually, the study found that only four out of the 67 reviewed studies specifically addressed QAI-related risks, highlighting a gap in the literature that leaves many real-world systems unprepared for the compounded challenges.

Weak governance and technical gaps expose organizations

The study sheds light on how governance structures struggle to keep pace with fast-evolving technologies. The researchers found that many organizations rely on outdated or inconsistent privacy policies, leaving them vulnerable to compliance breaches as new regulations emerge. This disconnect between existing governance frameworks and the novel demands of AI and QC significantly increases exposure to privacy violations.

Technical deficiencies were equally pronounced. The study revealed that weak or misconfigured access controls remain a high-priority concern, particularly in sensitive sectors like healthcare and finance. These lapses allow unauthorized access to personal data, undermining the security of systems already strained by the complexity of integrating quantum capabilities.
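
A deny-by-default permission check of the kind whose absence the review flags might look like the minimal sketch below; the roles and resource names are hypothetical, chosen to echo the healthcare and finance settings the study singles out.

```python
# Minimal deny-by-default access-control sketch. Roles, actions, and
# resources are hypothetical illustrations, not drawn from the study.
ROLE_PERMISSIONS = {
    "clinician": {"patient_record:read"},
    "billing":   {"invoice:read", "invoice:write"},
    "admin":     {"patient_record:read", "invoice:read", "audit_log:read"},
}

def is_authorized(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("clinician", "patient_record:read")
assert not is_authorized("billing", "patient_record:read")  # least privilege
```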

Risk assessment practices also lagged behind. The review noted that many organizations fail to properly evaluate vulnerabilities during the training and deployment of AI models, often overlooking issues such as the inadvertent exposure of sensitive data. This problem is exacerbated by the opacity and complexity of QAI systems, which make it harder to track how data is processed and safeguarded.
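
One concrete form such an assessment step could take is a pre-training scan of the corpus for obviously sensitive strings, as in the sketch below; the patterns are deliberately simplistic stand-ins for a real PII scanner and are not drawn from the study.

```python
# Illustrative pre-training scan for inadvertent sensitive-data
# exposure. The regexes are simplistic placeholders, not a
# production-grade PII detector.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_sensitive(records):
    """Yield (index, kind) pairs for records that appear to leak PII."""
    for i, text in enumerate(records):
        for kind, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                yield i, kind

sample = ["patient reports mild symptoms",
          "follow up with jane.doe@example.com"]
print(list(flag_sensitive(sample)))  # [(1, 'email')]
```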

Moreover, while the promise of quantum-resilient cryptographic tools such as post-quantum cryptography (PQC) and quantum key distribution (QKD) has generated optimism, the study stresses that these technologies are still in their infancy. Limited standardization and adoption hinder their effectiveness as a countermeasure to quantum-enabled threats.
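
While standards settle, one widely discussed interim pattern is hybrid key establishment: a session key is derived from both a classical and a post-quantum shared secret, so a break of either primitive alone does not expose traffic. The sketch below stubs both secrets with random bytes; in practice they would come from, say, ECDH and a PQC KEM such as ML-KEM, and the minimal HKDF is included only to keep the example self-contained.

```python
# Hedged sketch of hybrid key derivation: combine a classical and a
# post-quantum shared secret so that compromise of either alone is
# insufficient. Both input secrets are stubbed with random bytes here.
import hashlib
import hmac
import os

def hkdf_sha256(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with an empty salt, for illustration."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

classical_secret = os.urandom(32)  # stand-in for an ECDH shared secret
pqc_secret = os.urandom(32)        # stand-in for an ML-KEM shared secret

session_key = hkdf_sha256(classical_secret + pqc_secret, b"hybrid-qai-demo")
print(session_key.hex())
```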

Building trust through user-centric protections and continuous oversight

In addition to governance and technical safeguards, the research highlights a persistent erosion of user trust in the digital ecosystem. Inadequate consent mechanisms and limited transparency were found to be prevalent across many AI-driven platforms, compounded by the additional complexity introduced by quantum systems. This lack of meaningful user agency not only threatens privacy rights but also undermines the societal legitimacy of QAI technologies.
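
What a more meaningful consent mechanism could record is sketched below: an explicit purpose, a timestamp, and first-class revocation, so that user agency is auditable rather than implied by a checkbox. The field names are our assumptions, not a schema from the paper.

```python
# Illustrative consent record with explicit scope and revocability.
# Field names and the purpose string are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                       # e.g. "model training"
    granted_at: datetime
    revoked_at: datetime | None = None

    def is_active(self) -> bool:
        return self.revoked_at is None

    def revoke(self) -> None:
        self.revoked_at = datetime.now(timezone.utc)

record = ConsentRecord("user-42", "model training",
                       granted_at=datetime.now(timezone.utc))
record.revoke()
print(record.is_active())  # False -> exclude this subject's data
```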

The authors call attention to the urgent need for continuous monitoring and real-time auditing. The review found that most organizations rely on static policies and reactive governance approaches, which fail to keep up with the rapidly shifting risk landscape. The absence of proactive monitoring mechanisms allows emerging threats to go undetected, creating delays in mitigation and compounding the potential damage.
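
The difference between static policy and continuous oversight shows up even in a toy monitor like the one below, which watches an audit stream and flags bursts of denied access attempts in near real time; the event shape and thresholds are assumptions for illustration.

```python
# Toy continuous-monitoring sketch: flag a user who accumulates too
# many denied access attempts within a sliding time window. Event
# fields and thresholds are illustrative assumptions.
from collections import defaultdict, deque
from time import time

WINDOW_SECONDS = 60
FAILURE_THRESHOLD = 5

failures = defaultdict(deque)  # user -> timestamps of recent denials

def observe(event: dict) -> str | None:
    """Return an alert if a user exceeds the denial threshold in the window."""
    if event["outcome"] != "denied":
        return None
    now = event.get("ts", time())
    q = failures[event["user"]]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= FAILURE_THRESHOLD:
        return f"ALERT: {event['user']} had {len(q)} denials in {WINDOW_SECONDS}s"
    return None

for i in range(6):
    alert = observe({"user": "svc-batch", "outcome": "denied", "ts": 1000 + i})
    if alert:
        print(alert)
```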

The taxonomy developed in the study offers a structured way to categorize and address these challenges. By grouping risks across the five identified domains, it provides both researchers and practitioners with a framework for systematic evaluation and mitigation. It also highlights that addressing QAI data risks requires collaboration between technical experts, policymakers, and organizations to align strategies with evolving regulations and ethical standards.

A call for proactive measures in a quantum-driven future

The findings serve as a stark warning for governments, industry leaders, and researchers that existing data protection strategies may not withstand the combined pressures of AI and quantum technologies. The authors argue that trustworthy QAI will depend on timely adoption of stronger governance frameworks, resilient cryptographic techniques, robust access controls, and user-focused safeguards.

For policymakers, the taxonomy provides insight into regulatory gaps that need urgent attention. For developers and organizations, it serves as a practical guide for conducting targeted risk assessments, improving compliance, and strengthening technical defenses against new classes of vulnerabilities.

The risks identified are not static. As QAI technologies evolve, so too will the privacy and security threats they pose. Addressing these challenges will require a proactive, adaptive approach that integrates continuous monitoring, cross-border compliance strategies, and ethical considerations into the heart of data governance.

FIRST PUBLISHED IN: Devdiscourse