AI could help end hunger, but poor oversight may undermine progress


CO-EDP, VisionRI | Updated: 24-04-2026 13:13 IST | Created: 24-04-2026 13:13 IST

Artificial intelligence (AI) can improve food aid delivery and resource allocation, but its real-world impact depends heavily on governance, institutional trust, and ethical safeguards, especially in politically unstable environments.

A new study, titled “Digital Exclusion or Zero Hunger? A Sustainability Review of Ethical AI in Fragile Contexts,” published in Sustainability, examines how AI is being used in humanitarian food security systems and whether it can meaningfully contribute to the United Nations’ Zero Hunger goal in crisis settings.

The research presents a critical analysis of AI-driven humanitarian interventions, using the Gaza Strip as a central case study. The paper moves beyond technical discussions to frame AI as a socio-political tool that interacts with power structures, institutional fragility, and digital inequality.

AI promises efficiency in food aid, but risks reinforcing inequality

The study finds that AI technologies are increasingly embedded in humanitarian operations, particularly in food security programs. Tools such as predictive analytics, machine learning models, and digital beneficiary management systems are being used to forecast food shortages, identify vulnerable populations, and optimize aid distribution.

Early warning systems powered by AI are capable of analyzing vast datasets, including climate patterns, market signals, and conflict indicators, to predict food insecurity before it escalates. These systems allow humanitarian agencies to act proactively, improving response times and resource allocation.
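The kind of early-warning logic described above can be illustrated with a minimal sketch. This is not the study's model: the indicator names, weights, and thresholds below are all hypothetical, and a real system would learn its parameters from historical data rather than hard-code them.

```python
# Illustrative early-warning risk score (hypothetical weights, not from the study).
# Combines climate, market, and conflict indicators into a single
# food-insecurity risk estimate, as AI early-warning systems are described doing.

def food_insecurity_risk(rainfall_anomaly: float,
                         food_price_change: float,
                         conflict_events: int) -> float:
    """Return a risk score in [0, 1] from three indicators.

    rainfall_anomaly:  negative = drier than average (e.g. -0.4)
    food_price_change: fractional staple-price rise (e.g. 0.3 = +30%)
    conflict_events:   recent reported incidents in the region
    """
    # Hypothetical weights; a production model would fit these to past crises.
    score = (0.4 * max(0.0, -rainfall_anomaly)        # drought pressure
             + 0.4 * max(0.0, food_price_change)      # market stress
             + 0.2 * min(1.0, conflict_events / 10))  # conflict intensity
    return min(1.0, score)

# Drought plus price spikes plus conflict yields a high score, which would
# trigger earlier pre-positioning of aid than waiting for field reports.
high = food_insecurity_risk(-0.8, 0.5, 12)
low = food_insecurity_risk(0.1, 0.02, 0)
print(round(high, 2), round(low, 2))
```

Even this toy version shows why data quality matters: if the price or conflict feeds are missing for a region, its score is silently understated.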

Digital targeting systems use algorithmic models to determine who qualifies for aid and how much assistance they receive. These systems can increase efficiency and reduce fraud, but they also introduce new risks. Decisions that were once made by human field workers are now influenced or determined by algorithms, raising concerns about transparency and accountability.

The study highlights that these systems often rely on incomplete or biased datasets. Populations that are not well represented in digital records, such as displaced individuals or those without formal identification, may be excluded from assistance. This creates a paradox in which those most in need are the least visible to AI-driven systems.
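The exclusion paradox can be made concrete with a small sketch. The registry fields, cutoff, and rule below are invented for illustration only; the point is structural: an eligibility algorithm can only rank households that appear in its data.

```python
# Sketch of algorithmic aid targeting (hypothetical fields and thresholds).
# Demonstrates the paradox the study describes: households absent from digital
# records never enter the scoring step at all, however great their need.

registry = [  # only digitally visible households appear here
    {"id": "H1", "dependents": 5, "monthly_income": 40},
    {"id": "H2", "dependents": 1, "monthly_income": 300},
]

def eligible(household: dict, income_cutoff: float = 100) -> bool:
    # Simple illustrative rule: income below the cutoff qualifies for aid.
    return household["monthly_income"] < income_cutoff

selected = [h["id"] for h in registry if eligible(h)]
# A displaced family with no ID is absent from `registry`, so no rule,
# however fair, can ever select it.
print(selected)
```

Hybrid registration channels (field workers enrolling undocumented households on paper, then syncing) are one common mitigation for exactly this failure mode.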

In addition, the research notes that AI can amplify existing inequalities. Algorithms trained on historical data may reproduce patterns of exclusion, reinforcing structural disadvantages rather than correcting them. Without safeguards, AI systems risk becoming tools that magnify inequality under the guise of efficiency.

The findings suggest that while AI can improve operational performance, its benefits are unevenly distributed and highly dependent on the quality and inclusivity of data.

Digital humanitarian systems reshape governance and access to aid

The study argues that AI is transforming how humanitarian systems are governed. As digital tools become key to aid delivery, they effectively act as governance mechanisms, determining eligibility, prioritizing needs, and structuring access to resources. This shift marks a transition from traditional, field-based decision-making to data-driven systems where authority is increasingly embedded in algorithms. In fragile contexts, where state institutions are weak or contested, these systems can operate alongside or even replace formal governance structures.

The research introduces the concept of algorithmic governance to describe this transformation. In this model, decision-making is distributed between human actors and automated systems, often without clear accountability. This raises critical questions about who controls these systems and whose interests they serve.

Digital inclusion and exclusion emerge as central issues. To access AI-enabled humanitarian systems, individuals must be digitally visible, meaning they are registered, documented, and connected to digital infrastructure. Those lacking smartphones, internet access, or digital literacy risk being excluded from assistance altogether.

Digital exclusion, as the study notes, is not just a technical problem but a reflection of broader social inequalities. Women, elderly individuals, and marginalized groups are particularly at risk of being left out of digital systems, further deepening existing disparities.

Additionally, the push for efficiency in humanitarian operations can conflict with principles of justice and fairness. While AI systems can process large volumes of data quickly, they may lack mechanisms for appeal or correction, limiting the ability of affected individuals to challenge decisions.

This tension between efficiency and equity is a recurring theme in the research. The study argues that without strong governance frameworks, the adoption of AI in humanitarian systems may undermine trust and legitimacy.

Case study reveals limits of AI in crisis conditions

The Gaza Strip serves as a key example of how AI-enabled systems operate in highly constrained environments. The study shows that digital tools, including electronic vouchers and remote sensing technologies, have enabled humanitarian organizations to continue delivering aid despite severe restrictions.

However, the case also reveals significant limitations. Structural challenges such as damaged infrastructure, restricted access, and political constraints limit the effectiveness of digital systems. Even when AI improves targeting and coordination, it cannot overcome fundamental issues such as supply shortages or market collapse.

Digital-only systems often fail to reach the most vulnerable populations. A significant portion of households lack access to smartphones or reliable internet, making it difficult to participate in digital aid programs. Connectivity disruptions further exacerbate these challenges, preventing consistent access to assistance.

The study also highlights economic barriers. In some cases, recipients face high costs when converting digital aid into usable resources, reducing the overall value of assistance. This underscores the gap between technological solutions and real-world conditions.

Another critical issue is data vulnerability. In conflict settings, personal data collected through digital systems can be exposed to misuse or surveillance. This raises serious concerns about privacy and safety, particularly in highly securitized environments.

The Gaza case illustrates a broader point: AI can enhance the efficiency of aid delivery, but it cannot address the root causes of food insecurity, such as political instability, economic disruption, and restricted access to resources.

Ethical governance emerges as key to sustainable AI use

To address these challenges, the study proposes a comprehensive framework for ethical AI deployment in fragile contexts. This framework is built on five interconnected pillars that aim to balance technological innovation with social and institutional considerations.

  • Data sovereignty and protection: Personal information must be safeguarded against misuse. In humanitarian settings, data security is closely linked to individual safety, making robust protections essential.
  • Algorithmic accountability and human oversight: AI systems should support, rather than replace, human decision-making, with clear mechanisms for transparency, auditing, and appeal.
  • Inclusive system design: AI tools must be accessible to diverse populations, including those with limited digital literacy or connectivity. Hybrid models that combine digital and traditional approaches are essential to ensure broad access.
  • Community-led governance: Local communities should play an active role in designing and managing AI systems, ensuring that interventions reflect local needs and priorities.
  • Market and socioeconomic integrity: AI-enabled aid systems must align with real economic conditions, ensuring that assistance translates into tangible improvements in food security.

These pillars highlight the interconnected nature of ethical AI deployment. Each element reinforces the others, creating a system that prioritizes both efficiency and equity.

FIRST PUBLISHED IN: Devdiscourse