Decolonizing artificial intelligence: Rethinking global health futures

Artificial intelligence has become key to healthcare delivery and disease surveillance, but new research warns that its design and deployment risk reinforcing colonial-era inequities. Jarrel De Matas of the University of Texas Medical Branch argues that AI systems must be reprogrammed as ethical and narrative infrastructures that reflect diverse cultural realities rather than reproducing extractive models.
Published in Emerging Media, the study, titled "Reprograming the Narrative Machine: Toward a Decolonial Ethics of Artificial Intelligence in Global Health," examines how AI-enabled health technologies not only process data but also construct narratives that determine whose experiences, perspectives, and futures count in global health.
How do AI health systems reproduce colonial patterns?
The study asserts that AI is not just a computational tool but a narrative machine that encodes worldviews and organizes knowledge. Many AI-driven health interventions in the Global South rely heavily on Euro-American datasets and biomedical framings, sidelining local epistemologies and community traditions. This process reflects what scholars call data colonialism, where human lives and cultural knowledge are extracted, abstracted, and commodified.
According to De Matas, this creates biased systems that flatten complex realities into standardized categories. In healthcare, that means symptom clusters, diagnostic results, or risk models that do not reflect lived experiences. The result is interventions that may appear technologically sophisticated but are ethically shallow and culturally misaligned.
The paper uses the example of Jamaica’s AI-based breast cancer screening project to illustrate how global health technology can privilege external narratives while neglecting local illness experiences. While the screening tool aimed to expand early detection in resource-limited settings, its reliance on geographically biased datasets created risks of misdiagnosis and exclusion. The deeper issue lay in the absence of training on local narratives of illness, fear, and care, which reduced community trust and perpetuated epistemic injustice.
What alternatives show the potential for inclusive AI?
The author compares Jamaica’s case with Ethiopia’s co-designed Amharic-language chatbot. This chatbot was developed with local stakeholders, trained on thousands of health-related questions and answers gathered in Amharic, and built to serve patients unable to travel to hospitals. Its accuracy, accessibility, and cultural resonance demonstrated the potential of participatory design.
Unlike imported technologies that often erase local voices, Ethiopia’s chatbot embedded idiomatic expressions, moral language, and culturally situated understandings of illness into its architecture. Health was not reduced to standardized biomedical categories but expressed through community-centered narratives. This model shows that when AI systems are designed collaboratively and linguistically grounded, they can provide equitable and meaningful care.
The study identifies three domains where decolonial approaches can shift AI design. First, participatory design ensures narrative sovereignty by embedding marginalized voices from the earliest stages of development. Second, ethical audits must move beyond technical performance to assess epistemic inclusion, preventing testimonial and hermeneutical injustice that silences community knowledge. Third, policy structures should establish relational forms of data governance, ensuring that data remains under the stewardship of local communities and is governed by ongoing consent rather than one-time transactions.
Why is policy change essential for ethical AI in global health?
Ethical integration of AI requires more than technical fixes. Global health AI systems are embedded within political and colonial contexts that often privilege northern dominance and external control. Without governance reforms, these systems risk deepening inequities and eroding trust.
The study points to international frameworks from UNESCO, WHO, and the Pan American Health Organization that advocate relational data governance. These approaches demand that consent be treated as a continuous dialogue and that communities retain meaningful control over their data. For regions such as the Caribbean, robust policies must prohibit data export without safeguards and prioritize the development of locally curated datasets that reflect cultural diversity, linguistic plurality, and specific public health needs.
The study also calls for investments in local infrastructure, technical capacity, and AI literacy. Without adequate resources for cloud computing, data science training, and community institutions, even the most ambitious ethical frameworks cannot be realized. Effective governance, therefore, must be matched by financial and institutional support that empowers communities to lead innovation rather than remain passive recipients.
FIRST PUBLISHED IN: Devdiscourse