AI Must Serve Humanity: UN Experts Urge Rights-Based Global AI Governance

Devdiscourse News Desk | Geneva | Updated: 20-06-2025 12:34 IST | Created: 20-06-2025 12:34 IST

In a pivotal address at the 59th session of the Human Rights Council, the United Nations Working Group on Business and Human Rights, led by Chairperson Lyra Jakulevičienė, issued a clarion call for nations and corporations to ensure that the development, procurement, and deployment of artificial intelligence (AI) systems align with international human rights norms. The Working Group's new report emphasizes that without robust safeguards, AI technologies risk becoming tools of discrimination, surveillance, and exclusion.

“AI systems are transforming our societies,” Jakulevičienė stated. “But without proper guardrails, they can seriously undermine human rights.”

Fragmented Global Governance and Growing Gaps

The Working Group’s report highlights a regulatory patchwork: while some countries are moving from voluntary AI ethics to binding legal frameworks, the global landscape remains fragmented. There are no universal definitions for critical AI functions, and legal protections often exclude civil society from meaningful participation. Particularly alarming is the limited integration of the Global South’s perspective in shaping AI governance structures, risking a continuation of digital colonialism.

“Exceptions to protections are broad, and civil society remains largely on the margins,” the report noted.

This fragmentation not only undermines consistent safeguards but also allows corporations and states to exploit regulatory loopholes when deploying AI systems with serious rights implications.

Risks to Fundamental Rights: Surveillance, Discrimination, and Inequality

When deployed irresponsibly, AI technologies have demonstrated the potential to violate a wide spectrum of rights. These include:

Discriminatory bias in facial recognition and hiring algorithms;

Privacy violations through mass surveillance and data harvesting;

Exclusion of marginalized groups from essential services such as housing, healthcare, or education;

Erosion of democratic accountability, particularly through opaque predictive policing systems.

The report reiterates that both public institutions and private companies must conduct thorough human rights due diligence (HRDD) throughout the AI lifecycle, from design and procurement through deployment and eventual decommissioning.

Clear Red Lines: Technologies Fundamentally Incompatible With Human Rights

The UN experts call for immediate bans on AI systems deemed irreconcilable with fundamental rights. These include:

Remote real-time facial recognition in public spaces;

Predictive policing algorithms;

Mass biometric surveillance systems.

“States must draw red lines,” said Jakulevičienė, “and not only regulate AI—they must use their power as procurers to demand accountability and transparency from the private sector.”

The Working Group called on governments to serve not only as regulators but as ethical consumers and implementers of AI, setting a global precedent for responsible technology use.

Business Responsibilities: Human Rights Beyond Profit

Corporations, especially those developing or licensing AI, must recognize that human rights responsibilities cannot be outsourced. Even when using third-party AI tools, companies are obligated to:

Perform human rights impact assessments (HRIAs);

Ensure transparency and explainability of AI systems;

Offer access to remedy when harm occurs;

Engage in meaningful consultations with affected communities.

This accountability applies equally to tech giants, governments outsourcing AI tools, and downstream business users adopting AI in hiring, marketing, or governance.

Path Forward: Principles and Recommendations

The Working Group’s report closes with a roadmap of actionable recommendations:

For States: Establish rights-based legal frameworks; require AI impact assessments; set prohibitions on abusive technologies; and embed accountability mechanisms.

For Businesses: Incorporate the UN Guiding Principles into procurement and R&D; engage marginalized communities; ensure internal governance of AI risks.

For Civil Society and Academia: Advocate for inclusive governance; expose AI-related harms; contribute to norm development.

Global cooperation and the establishment of cross-border standards are urgently needed to address the transnational nature of AI technology and its impact on rights.

Rights-Respecting AI Is Not Optional

As AI tools become central to everything from border security to social media moderation, the human rights implications of these technologies must not be an afterthought. The UN Working Group’s call to action signals a critical moment for aligning innovation with justice.

“AI must advance our shared humanity—not automate our inequalities,” Jakulevičienė concluded.

Unless governments and corporations heed this warning and commit to a rights-based AI future, the promise of AI will be eclipsed by its peril.
