Trustworthy AI in the workplace? EU legislation falls dangerously short

CO-EDP, VisionRI | Updated: 23-06-2025 09:18 IST | Created: 23-06-2025 09:18 IST
Representative Image. Credit: ChatGPT

A major new study lays out the legal and institutional path for ensuring that artificial intelligence technologies in European workplaces are aligned with democratic values, human dignity, and worker protection. As Europe accelerates its digital transition, researchers have outlined the urgent need to reimagine how workplace AI systems are governed—highlighting both the promise and the risks of their widespread adoption.

Published in Sustainability under the title “Trustworthy and Human-Centric: The New Governance of Workplace AI Technologies Under the EU’s AI Act”, the study examines how the European Union’s regulatory architecture must evolve to keep pace with the technological transformations reshaping labor markets. Anchored in legal theory and empirical observations, the research argues that ensuring trustworthiness in workplace AI requires not just technical robustness, but democratic oversight, social inclusion, and institutional redesign.

What does the EU AI Act mean for workplace AI systems?

The study places the forthcoming EU Artificial Intelligence Act at the center of Europe’s effort to control the rapid expansion of AI in employment contexts. As AI increasingly determines who is hired, how workers are evaluated, and what tasks they perform, the regulation introduces a “risk-based approach” to address potential harm. Under this framework, workplace AI systems such as automated recruitment platforms and algorithmic management tools may be classified as “high-risk,” subjecting them to stricter compliance obligations.

The legal analysis underscores that this risk-tiering mechanism, while an important start, is insufficient in isolation. Existing labor and data protection laws, including the General Data Protection Regulation (GDPR), the Charter of Fundamental Rights of the EU, and the European Pillar of Social Rights, provide a partial shield against harm. However, they often fall short when confronting opaque algorithmic decision-making or asymmetries of power between employers and workers.

The study argues that the governance of AI at work must be redefined not merely as a technical challenge, but as a fundamentally political one. Rather than relying solely on ex-post remedies, such as complaints and litigation, the proposed model calls for proactive oversight mechanisms to be embedded in institutional and regulatory design.

How can governance ensure AI remains human-centric?

Drawing on the EU’s commitment to trustworthy and human-centric AI, the authors advocate for a new governance model grounded in democratic control, procedural legitimacy, and normative anchoring. This means moving beyond a minimalist compliance framework toward a more participatory and deliberative system of oversight.

One of the core arguments of the study is that trust in AI systems cannot be manufactured through transparency alone. While disclosures and documentation, like model explainability or data provenance, are valuable, they are not a substitute for structural safeguards. In employment settings, where workers often lack negotiating power, governance mechanisms must go further to ensure fairness, autonomy, and dignity.

To this end, the research outlines key institutional innovations. These include the formation of workplace AI oversight committees, joint worker-management AI audits, and participatory design processes. By incorporating workers and their representatives into governance procedures, these mechanisms aim to bridge the gap between formal rights and practical empowerment.

The study also highlights the need for a shift in responsibility: moving from individual workers having to challenge algorithmic decisions to institutions proactively ensuring system-level compliance. This requires public authorities, labor inspectorates, and sectoral regulators to take a more assertive role in shaping and enforcing AI norms in the workplace.

What role should labor institutions and workers play?

A central concern in the study is the diminishing role of labor institutions in the digital transformation of work. As algorithmic systems increasingly mediate core employment functions such as scheduling, performance evaluation, and wage setting, the risk is that traditional mechanisms of worker voice and collective bargaining are bypassed.

To counter this trend, the authors propose strengthening institutional arrangements that facilitate worker participation. Social dialogue, workplace co-determination, and union involvement are reframed not just as labor rights but as governance tools essential for ensuring algorithmic accountability. These bodies can monitor implementation, flag unintended consequences, and challenge unjust system behaviors.

The study also discusses the importance of developing what it calls “new literacies” among workers: the technical, legal, and procedural knowledge that enables meaningful engagement with AI systems. Training and capacity-building efforts are viewed as foundational to democratic governance, enabling workers to exercise informed agency.

Moreover, the role of collective institutions extends beyond workplaces. The study calls for transnational coordination across the EU to ensure consistent enforcement of the AI Act, prevent regulatory arbitrage, and foster knowledge-sharing among member states. The envisioned governance architecture is thus multi-level: involving individual companies, national labor authorities, and EU-level institutions in a coordinated oversight ecosystem.

FIRST PUBLISHED IN: Devdiscourse