AI systems reinforce ageism, excluding older adults from digital futures

Artificial intelligence is transforming healthcare, employment, and digital services, but new research warns it may also be reinforcing one of society’s most pervasive prejudices: ageism.
A study titled "Breaking Bias: Addressing Ageism in Artificial Intelligence," published in the Journal of Ageing and Longevity, finds that older adults remain significantly underrepresented in AI datasets, leading to discrimination in both digital environments and real-world decision-making. Authored by Diana Amundsen of the University of Waikato, the study presents a sobering picture of how AI's promise of innovation risks entrenching inequality if age-related bias is not addressed.
How does ageism appear in AI systems?
The study reveals that ageism manifests in artificial intelligence through biased datasets, flawed design practices, and structural exclusion. While issues of gender and racial bias in AI have attracted global attention, age-based bias has been largely overlooked despite its far-reaching implications.
A systematic review of 47 studies published between 2015 and 2025 found consistent underrepresentation of older adults in widely used AI datasets. For example, benchmark datasets such as AI-Face, CrowS-Pairs, and Casual Conversations v2 include fewer than 10 percent of entries representing older people. This gap results in models that are less accurate when analyzing older adults’ data, from facial recognition to natural language processing.
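To make the representation gap concrete, this kind of dataset audit can be sketched in a few lines of Python. The DataFrame, the "age" column name, and the 65-and-older cutoff below are illustrative assumptions for demonstration, not details taken from the study or from the named benchmark datasets.

```python
import pandas as pd

def age_representation(df: pd.DataFrame, threshold: int = 65) -> float:
    """Return the share of records for people aged `threshold` or older."""
    return float((df["age"] >= threshold).mean())

# Synthetic example: a toy dataset skewed toward younger entries.
df = pd.DataFrame({"age": [23, 31, 45, 29, 38, 52, 67, 24, 33, 41]})
print(f"Entries aged 65+: {age_representation(df):.0%}")  # 10% in this toy sample
```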
The problem extends beyond technical accuracy. When datasets reinforce stereotypes, such as assumptions that older adults are technologically incompetent or less capable in workplaces, algorithms replicate and magnify these biases. The result is algorithmic exclusion: older people may be misdiagnosed by healthcare AI, unfairly ranked lower by hiring systems, or sidelined by digital platforms designed without their needs in mind.
According to the research, these biases are not incidental but reflect broader social patterns of structural ageism. By encoding these stereotypes into digital systems, AI threatens to normalize discrimination against a growing global population of older adults.
What are the risks of ignoring ageism in AI?
The study highlights that ignoring ageism in AI creates systemic risks across key sectors. In healthcare, underrepresentation can cause AI diagnostic tools to misinterpret symptoms or medical histories of older patients, potentially delaying treatment or producing inaccurate outcomes. In employment, automated recruitment systems trained on biased data may filter out older candidates or reinforce discriminatory hiring practices.
Digital services also present barriers. Older adults often face exclusion when platforms use design elements, such as rapid interface changes or reliance on biometric verification, that fail to accommodate age-related diversity in ability or accessibility. These practices deepen digital divides and reduce trust in emerging technologies.
The risks are magnified by demographic change. With people aged 65 and older projected to make up one in six of the global population within the next decade, ageism in AI could affect hundreds of millions of people. By perpetuating stereotypes at scale, biased AI threatens to exacerbate inequality rather than reduce it.
The review further warns that competition among tech companies may inadvertently worsen ageist practices. In the race to improve efficiency and scale, firms often prioritize speed over inclusivity, sidelining older users who may not conform to dominant data profiles. This approach risks embedding ageism deeply into the fabric of AI systems.
What solutions can address digital ageism?
The study proposes a four-part framework to address ageism in artificial intelligence. First, inclusive dataset development must become a priority. Older adults should be actively involved in data collection, design, and auditing processes to ensure that AI reflects the diversity of aging populations.
Second, ethical design principles are essential. Applying fairness metrics that explicitly measure age-related bias can help developers identify and correct harmful patterns. Co-designing technologies with older users can also improve usability and prevent exclusionary practices.
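As one illustration of such a metric, the sketch below compares model accuracy between younger and older groups. The accuracy-gap formulation, the synthetic data, and the binary older/younger split are assumptions chosen for demonstration; the study does not prescribe a specific formula.

```python
import numpy as np

def accuracy_gap(y_true: np.ndarray, y_pred: np.ndarray, is_older: np.ndarray) -> float:
    """Accuracy on younger records minus accuracy on older records.
    A large positive gap means the model is less accurate for older adults."""
    acc_younger = np.mean(y_true[~is_older] == y_pred[~is_older])
    acc_older = np.mean(y_true[is_older] == y_pred[is_older])
    return float(acc_younger - acc_older)

# Synthetic example: the model errs more often on older-adult records.
y_true   = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred   = np.array([1, 0, 1, 1, 0, 0, 1, 1])
is_older = np.array([False, False, False, False, True, True, True, True])
print(f"Accuracy gap (younger - older): {accuracy_gap(y_true, y_pred, is_older):.2f}")
```

In practice, a threshold on such a gap could serve as a release gate, flagging models whose performance degrades for older users before deployment.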
Third, regulatory reform is urgently needed. Amundsen argues that legal frameworks should explicitly recognize age as a protected characteristic in AI governance. While the European Union’s AI Act, Data Act, and Digital Services Act provide models for stronger protections, many jurisdictions, including the United States, continue to focus narrowly on competition and consumer welfare without addressing age-specific harms.
Fourth, education and capacity building are necessary to sustain long-term change. Integrating age-related fairness into AI ethics training, computer science curricula, and interdisciplinary research can help reshape the next generation of developers and policymakers. By embedding awareness of ageism at every stage of AI development, the risks of digital exclusion can be reduced.
First published in: Devdiscourse