Consent crisis: Generative AI systems threaten privacy, identity and autonomy

CO-EDP, VisionRI | Updated: 04-07-2025 09:17 IST | Created: 04-07-2025 09:17 IST

A new study warns that traditional consent models are becoming obsolete in the era of generative artificial intelligence. Titled "Can AI Be Consentful?" and slated for publication by Cambridge University Press, the study argues that existing ethical and legal standards, built around the idea of informed consent, are insufficient for managing the downstream consequences of generative AI systems trained on personal data.

As synthetic outputs become more pervasive, the authors identify a growing “consent gap” that threatens individual autonomy, identity rights, and the broader framework of responsible AI governance.

Why is traditional consent failing in AI contexts?

The study outlines three primary challenges to the effectiveness of consent within AI systems: the scope problem, the temporality problem, and the autonomy trap.

The scope problem arises from the inability of individuals to foresee or control how their data will be used once ingested into generative models. AI systems are increasingly trained on massive datasets scraped from public sources, often without the subject's knowledge. Even when individuals give explicit permission for their data to be used for specific purposes, generative AI can produce infinite variations that exceed the bounds of what was originally agreed upon. For example, a voice actor might consent to recordings for audiobooks but later find synthetic replicas of their voice being used in political ads or hate speech.

The temporality problem relates to the fact that consent, once given, cannot be effectively withdrawn. AI models trained on personal data cannot easily be purged of that influence, even if the individual later retracts their consent. Techniques such as “machine unlearning” or “sharded training” are being explored but remain technically limited and costly at scale. As AI systems continuously evolve, fine-tune, and spawn derivatives, the consent given at a single moment in time loses relevance, especially as the outputs morph beyond the original data's intent or context.
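To make the idea of “sharded training” concrete, here is a minimal, illustrative sketch (not taken from the study) of a SISA-style setup in Python: the training data is partitioned into independent shards, one model is trained per shard, and a withdrawal of consent only forces retraining of the shard that held the affected record. Class and method names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedEnsemble:
    """Toy SISA-style ensemble: one model per data shard, majority-vote predictions."""

    def __init__(self, n_shards=4):
        self.n_shards = n_shards
        self.shards = []   # per shard: (features, labels, record_ids)
        self.models = []

    def fit(self, X, y, record_ids):
        # Assign records to shards round-robin and train one model per shard.
        # (Assumes binary labels and that each shard sees both classes.)
        X, y, record_ids = map(np.asarray, (X, y, record_ids))
        idx = np.arange(len(X)) % self.n_shards
        self.shards = [(X[idx == s], y[idx == s], record_ids[idx == s])
                       for s in range(self.n_shards)]
        self.models = [LogisticRegression(max_iter=500).fit(Xs, ys)
                       for Xs, ys, _ in self.shards]

    def forget(self, record_id):
        # Consent withdrawn: drop the record and retrain only its shard.
        for s, (Xs, ys, ids) in enumerate(self.shards):
            keep = ids != record_id
            if not keep.all():
                self.shards[s] = (Xs[keep], ys[keep], ids[keep])
                self.models[s] = LogisticRegression(max_iter=500).fit(Xs[keep], ys[keep])

    def predict(self, X):
        # Majority vote across shard models.
        votes = np.stack([m.predict(np.asarray(X)) for m in self.models])
        return (votes.mean(axis=0) >= 0.5).astype(int)
```

Only the affected shard is refit rather than the whole model, but cost and accuracy still degrade as shards multiply, which is consistent with the study's caveat that such techniques remain limited and expensive at scale.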

The autonomy trap highlights the paradox where consent, traditionally seen as an exercise of personal autonomy, can instead facilitate future restrictions on that very autonomy. Individuals may unintentionally authorize representations of themselves (synthetic voices, images, or behaviors) that later shape their public identity or limit their opportunities. In effect, initial consent becomes the enabler of a long-term erosion of self-determination, especially in algorithmically mediated environments such as social media, hiring platforms, or financial services.

How does the consent gap intersect with broader responsible AI principles?

The authors extend the discussion beyond the limitations of consent alone, examining how the consent gap undermines other foundational principles of responsible AI: fairness, transparency, accountability, and autonomy.

Fairness, particularly distributive justice, is jeopardized as marginalized communities, often overrepresented in public data, bear the greatest privacy risks while gaining the fewest benefits. AI systems built on their data often serve wealthier or more powerful users, amplifying existing socio-economic inequalities. Instances of “manufactured consent,” where individuals comply due to lack of meaningful choice, further exacerbate this injustice.

Transparency, a prerequisite for informed consent, becomes nearly impossible to achieve due to the complexity and opacity of advanced AI models. The authors refer to a “transparency paradox”: the more sophisticated and capable an AI system becomes, the harder it is to explain or predict its outputs. This makes it infeasible to provide users with a clear understanding of how their data might be used, thus nullifying the conditions for specific, informed, and unambiguous consent.

Accountability is similarly diffused. In traditional frameworks, the entity that receives consent is responsible for honoring its terms. However, AI development often involves multiple actors (data collectors, model trainers, platform providers) spread across jurisdictions. This fragmentation makes it difficult to assign responsibility when outputs derived from consented data cause harm. Regulatory definitions such as “controller” and “processor” under the GDPR struggle to account for the layered lifecycle of AI systems.

Lastly, autonomy, arguably the cornerstone of consent, faces novel threats in generative AI environments. Representational autonomy is compromised when synthetic versions of a person circulate without their involvement or knowledge. Informational autonomy suffers as AI systems draw unexpected and sensitive inferences from benign data points. Decisional autonomy is limited when algorithmic profiles shape others’ perceptions and gatekeeping decisions, such as loan approvals or employment opportunities.

What changes are needed to make consent functional in AI systems?

The study clearly states that a fundamental rethinking of consent is necessary, both ethically and legally. The authors suggest that rather than relying solely on traditional consent, governance frameworks must integrate alternative mechanisms and shared responsibilities across the AI ecosystem.

First, technical solutions such as differential privacy, data provenance tracking, and machine unlearning must be further developed and integrated at scale. These tools can help mitigate the uncontrolled spread and reuse of data but are insufficient on their own.
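As one concrete illustration of these tools, the snippet below sketches the textbook Laplace mechanism for differential privacy: a statistic computed over personal data is released only after adding noise calibrated to the query's sensitivity and a privacy budget epsilon, so no single individual's record can be confidently inferred from the output. This is a generic construction, not code from the study.

```python
import numpy as np

def dp_count(values, predicate, epsilon, sensitivity=1.0):
    """Release a differentially private count.

    Adding or removing one person changes a count by at most 1 (the
    sensitivity), so Laplace noise with scale sensitivity/epsilon gives
    epsilon-differential privacy for this query.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: privately count how many records exceed a threshold.
ages = [23, 35, 41, 52, 67, 29, 74]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```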

Second, regulatory frameworks need to move beyond self-regulatory and opt-in models. Legal regimes such as the GDPR, the U.S. AI Bill of Rights, and China’s AI regulations provide useful benchmarks but still fall short when faced with the systemic challenges of generative content. The European Union's push for simplification in the name of competitiveness raises concerns about eroding protections.

Third, the study emphasizes the need to shift responsibility from individuals to institutions. Developers, deployers, and policymakers must be held accountable for ensuring that systems are designed with privacy and consent defaults, rather than requiring individuals to constantly navigate complex consent interfaces. For example, Apple’s HealthKit and ResearchKit frameworks are cited as models that balance consent, user control, and technological capability through a transparent and explicit permission structure.
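The study is reported here at the level of principles rather than implementation, but a hypothetical sketch of what "consent by default" could look like in code is shown below: a data-access layer denies any use of personal data unless an explicit, purpose-bound, unexpired, and unrevoked consent record exists. All names are illustrative and are not Apple's HealthKit or ResearchKit APIs.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str               # e.g. "audiobook_narration"
    expires_at: datetime
    revoked: bool = False

    def permits(self, purpose: str) -> bool:
        # Deny by default: only an unexpired, unrevoked record for the
        # exact purpose authorizes the use.
        return (not self.revoked
                and purpose == self.purpose
                and datetime.now(timezone.utc) < self.expires_at)

class ConsentRegistry:
    def __init__(self):
        self._records: dict[str, list[ConsentRecord]] = {}

    def grant(self, record: ConsentRecord) -> None:
        self._records.setdefault(record.subject_id, []).append(record)

    def revoke_all(self, subject_id: str) -> None:
        # Honor a withdrawal request across every outstanding grant.
        for record in self._records.get(subject_id, []):
            record.revoked = True

    def may_use(self, subject_id: str, purpose: str) -> bool:
        return any(r.permits(purpose) for r in self._records.get(subject_id, []))
```

Because the registry fails closed and revocation is first-class, a later reuse of the same data for a new purpose, such as voice synthesis for advertising, would be blocked without requiring the individual to notice and object.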

In addition, the authors argue for an ethical shift from viewing consent as an expression of individual autonomy to a “waiver of normative expectations”: a concept that focuses less on personal choice and more on setting clear, enforceable boundaries for acceptable AI behavior. This approach could help align legal frameworks with the operational realities of AI technologies, where personalization and representation transcend individual oversight.

First published in: Devdiscourse