AI can both empower and erode democratic governance: Here's how

A groundbreaking research study has introduced a dual-framework analysis to assess how artificial intelligence (AI) systems both strengthen and threaten democracy. Titled “Aligning Trustworthy AI with Democracy: A Dual Taxonomy of Opportunities and Risks,” the study, published on arXiv, represents the first comprehensive taxonomy that evaluates the democratic implications of AI through both positive and negative lenses.
Drawing from over 100 peer-reviewed publications and grounded in the European Union's Trustworthy AI framework, the paper proposes two analytical taxonomies: the “AI’s Positive Impact to Democracy” (AIPD) and “AI Risks to Democracy” (AIRD). These frameworks identify how AI can promote democratic ideals such as participation, transparency, and accountability, while also outlining risks like manipulation, authoritarian reinforcement, and the erosion of public trust.
How can AI strengthen democratic institutions?
The study’s AIPD taxonomy outlines seven major opportunities for AI to bolster democratic governance. These include increasing efficiency in public administration, enhancing accessibility to political participation, and improving security through anomaly detection and fraud prevention.
AI systems also support an informed citizenry by enabling access to accurate, personalized information via chatbots and natural language tools. This can counteract misinformation and deepen civic engagement. The study notes that these systems foster transparency and accountability by identifying biases in decision-making and tracking policy implementation in real time, empowering citizens to demand fair governance.
Crucially, AI also promotes inclusive global democratic dialogue through multilingual tools that bridge communication gaps. It supports evidence-based policymaking by synthesizing large datasets to predict policy impacts and measure public sentiment, ensuring that decisions align with real-world needs and foster public trust.
These contributions demonstrate that when designed ethically and deployed transparently, AI can serve as a transformative force in strengthening democratic legitimacy and resilience.
What are the primary threats AI poses to democracy?
The AIRD taxonomy warns that AI introduces seven major categories of democratic risk. Among the most urgent is the subversion of autonomy. AI systems, especially those optimized for engagement, manipulate users’ behavior through algorithmic personalization, reducing their exposure to diverse perspectives and weakening independent thought. This manipulation diminishes informed decision-making, a core component of democratic participation.
Another pressing risk is algorithmic unfairness. AI systems trained on biased datasets may reinforce structural discrimination in areas like law enforcement, hiring, and education. These biases undermine democratic principles of equality and fraternity, especially when opaque models are deployed without accountability.
Power asymmetries are further amplified by AI as control over data and infrastructure becomes concentrated among a few large tech corporations and governments. This weakens civic oversight and fosters technocratic dominance, especially in the absence of transparency. Additionally, AI enables authoritarian regimes to surveil populations, control narratives, and suppress dissent through facial recognition, predictive policing, and deepfake propaganda.
AI also undermines public discourse. Emotionally manipulative recommender systems favor polarizing content, promoting echo chambers and weakening consensus. This degrades civic dialogue and fosters division. The study further warns that these dynamics lead to a broad erosion of trust, not only in institutions and media, but also in citizens’ ability to discern truth.
Each risk is linked to specific democratic principles (freedom, equality, the rule of law, fraternity, and tolerance) and is analyzed through causality frameworks that trace how AI’s operational features produce democratic harm.
What frameworks can mitigate AI’s democratic risks?
To navigate the duality of AI’s democratic promise and peril, the study maps each AIRD risk category to the European Commission’s seven requirements for Trustworthy AI. These include human agency and oversight, robustness and safety, privacy and data governance, transparency, fairness, societal and environmental well-being, and accountability.
For example, the subversion of autonomy can be mitigated by ensuring AI explainability and human oversight, while fairness can be reinforced by actively de-biasing models and promoting diversity in training datasets. Power asymmetries can be addressed through accountability mechanisms and privacy protections, and the erosion of public discourse can be counteracted with transparency portals and inclusive information access strategies.
The authors identify transparency and societal well-being as the most transversal requirements, appearing across five of the seven AIRD risk categories. This highlights the need for governance models that prioritize open systems, public auditing, and equitable participation in AI design and deployment.
The framework is designed to be globally adaptable. While rooted in EU regulatory norms, it can be recalibrated to fit different political systems, legal traditions, and institutional capacities. For Global South democracies, which face unique data sovereignty and resource limitations, the authors recommend embedding contextual resilience and participatory design into AI governance mechanisms.
The paper stresses that the risks associated with large language models (LLMs) are especially acute. LLMs contribute to misinformation, reinforce bias, and concentrate control in private hands, while their opaque inner workings complicate efforts to enforce accountability. However, when responsibly deployed, LLMs can also enhance legal transparency, multilingual education, and civic engagement.
- FIRST PUBLISHED IN:
- Devdiscourse