Cultural bias and algorithmic injustice threaten AI’s role in disability inclusion

A new paper sheds light on the complex ways in which artificial intelligence (AI) is shaping the lives of disabled people. The study, titled “AI and Disabled People’s Independent Living: A Framework for Analysis” and published in AI & Society, sets out a structured framework for understanding both the benefits and risks of AI for independent living (IL). Based on disability studies and the social model of disability, the research highlights that technology’s impact depends not only on innovation itself but also on how it is embedded in everyday practices and social institutions.
The paper’s analysis reveals that while AI can empower disabled people by improving access, autonomy, and communication, it can also reinforce ableist assumptions, compromise privacy, and entrench inequality. The author argues that any meaningful use of AI in disability contexts must move beyond a narrow focus on individual self-sufficiency and recognize interdependence as a foundation of inclusive societies.
How AI enables independent living for disabled people
The study identifies a wide range of AI-mediated enablers that enhance the capacity of disabled people to live independently. At the situated, everyday level, tools such as smart home systems, automated captioning for deaf and hard-of-hearing people, AI-driven navigation aids, and eye-tracking software for people with severe mobility limitations can improve access to and personalization of support services. These technologies offer more choice and control in daily routines, align with IL principles, and often reduce the burden on caregivers.
The research also points to the positive role of AI in communication and participation. For example, neutral mediation by AI translation or speech-to-text tools can reduce interpersonal bias and enable fuller engagement in workplaces, education, and public spaces. These forms of communicative neutrality allow disabled individuals to interact without relying on intermediaries, promoting dignity and self-representation.
At a cultural-institutional level, the framework highlights how AI can help dismantle stigma by mainstreaming assistive technologies and normalizing human–machine hybridity. Cost efficiencies in healthcare and support services can expand access, while growing societal recognition that support is inherently interdependent challenges traditional ideals of self-sufficiency.
Where AI risks undermining disability rights
The study warns that the same technologies can become barriers to independent living when poorly designed or deployed. At the situated level, the author identifies concerns such as technological unreliability, where devices fail in critical situations, and techno-paternalism, where automated systems override individual preferences. Techno-ableism, the embedding of biased assumptions about "normal" functioning into algorithms, can further marginalize users instead of empowering them.
Other risks include techno-productivism, where support tools are evaluated primarily on economic productivity gains rather than well-being, and significant privacy threats arising from continuous data collection by AI-enabled devices. These issues highlight the importance of accountability, data security, and user-centered safeguards.
At the broader cultural-institutional level, the paper identifies structural risks. Algorithmic injustice in decision-making can perpetuate existing social inequalities. A persistent societal overvaluation of self-sufficiency, reinforced by some AI applications, risks sidelining the principle of interdependence that underpins disability rights. Meanwhile, techno-fetishism, the placing of undue faith in technology alone, can obscure the human and political dimensions of care and inclusion.
Building AI for genuine inclusion
Realizing the promise of AI for disabled people requires participatory and transparent design practices. The author calls for co-design with disabled people and their representative organizations, ensuring that technologies are developed to meet real-world needs and do not impose external definitions of independence.
The framework also recommends archivist curation of training data to preserve diversity in datasets and minimize bias, along with fair machine learning methods and open-source models to increase transparency and trust. Recognizing the environmental costs of AI, the author argues that sustainable deployment must be part of inclusive innovation strategies.
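The paper names fair machine learning as a goal rather than prescribing particular methods. As a minimal sketch of what such an audit can look like in practice, the Python snippet below computes the demographic parity difference, the gap in positive-prediction rates between groups, for a hypothetical eligibility classifier; the function name, data, and group labels are illustrative assumptions, not drawn from the study.

```python
# Minimal sketch of one common fairness check: demographic parity difference.
# All names and data below are hypothetical illustrations, not the paper's method.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        member_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(member_preds) / len(member_preds)  # share predicted positive
    a, b = rates.values()
    return abs(a - b)

# Hypothetical outputs of a benefits-eligibility model (1 = approved).
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups = ["disabled", "disabled", "nondisabled", "nondisabled", "disabled",
          "nondisabled", "disabled", "nondisabled", "nondisabled", "disabled"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.40 here: worth auditing
```

In this toy run, the 0.40 gap between groups would flag the model for closer review, the kind of transparency check that the author's call for open, auditable models is meant to make possible.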
The research advocates a shift away from measuring progress by self-sufficiency alone. Instead, policies and technologies should support mutual interdependence, acknowledging that inclusive societies rely on networks of shared assistance. This reframing is key to ensuring that AI serves as an enabler rather than a barrier to full participation.
First published in: Devdiscourse