Practical knowledge matters more than algorithmic transparency in AI for care

CO-EDP, VisionRI | Updated: 09-10-2025 13:49 IST | Created: 09-10-2025 13:49 IST

A new open forum article published in AI & Society argues that explainable artificial intelligence (XAI) in long-term care must be tailored to the practical needs of different stakeholders rather than presented as one-size-fits-all transparency. The study, titled “Only Practical Knowledge or Knowing the Algorithm? Notions and Necessities of Explainable Artificial Intelligence in Long-Term Care,” reveals that the meaning and value of explainability vary across companies, care managers, and care providers, making context-specific solutions crucial for adoption and trust.

As AI gains traction in health and long-term care, policymakers hope it will enhance autonomy and safety for older adults living at home. Yet the lack of suitable explainability continues to be a barrier, with many AI systems still functioning as black boxes that limit their integration into real-world practice.

Diverging notions of explainability in long-term care

The study first examines what stakeholders in long-term care actually mean by explainable AI and how their needs differ. To investigate this, the authors conducted 21 scenario-based interviews with professionals in three settings: companies that develop or deploy AI, managers responsible for organizing care services, and frontline care providers.

The interviews revealed that each group views explainability differently.

  • Companies often see explainability as a feature to build trust in their products and as a requirement for meeting regulatory standards.
  • Managers seek explainability that supports decision-making on resource allocation, staff workflows, and compliance with legal frameworks.
  • Care providers prioritize practical, usable insights: they want explanations that clarify how AI recommendations apply to specific care situations, rather than technical details of algorithms.

This divergence shows that providing all stakeholders with algorithmic transparency alone does not address their needs. For caregivers, for example, knowing why a tool recommends a change in a patient’s care plan is far more valuable than understanding the underlying mathematics.

Policy hopes versus practical realities

The authors then analyse why explainability remains a persistent challenge despite policy enthusiasm for AI in long-term care. Policymakers and technology developers have long envisioned data-driven tools to improve the safety, personalization, and efficiency of home-based care for older adults. However, most such systems remain in pilot phases and have yet to become part of routine care.

The research highlights the misalignment between policy expectations and practical implementation. While public discourse often assumes that explainability equals algorithmic transparency, the study shows that stakeholders in care environments need contextualized explanations that fit their decision-making processes and responsibilities. This gap limits the perceived usefulness and reliability of AI tools in daily operations.

Additionally, the persistence of black-box machine-learning models undermines trust. Frontline staff often struggle to see how AI recommendations relate to their work and may resist using tools they cannot interpret. The study underscores that addressing these practical hurdles is as important as improving the technical quality of AI systems.

Toward tailored and inclusive XAI solutions

The third key question raised by the study focuses on how explainability can be redefined to meet stakeholder needs and enhance trust in AI-supported care. The authors argue that XAI must be purpose-driven and audience-specific, with explanations designed to support the decisions and workflows of each stakeholder group.

For care providers, this could mean simple, actionable guidance that clarifies why a recommendation is made and how it affects an individual patient’s care plan. For managers, explainability might focus on performance metrics and compliance reporting, while companies may need to communicate how their systems adhere to regulatory and ethical standards.
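To make the idea of audience-specific explanations concrete, the sketch below shows one hypothetical way a single AI alert could be rendered differently for each stakeholder group. It is an illustration only, not the study's method: the FallRiskAlert fields, the explain_for function, and the role names are assumptions introduced here for clarity.

```python
# Minimal sketch (not from the study): one hypothetical fall-risk alert from a
# home-monitoring AI tool, rendered differently for each stakeholder group.

from dataclasses import dataclass

@dataclass
class FallRiskAlert:
    """Hypothetical AI output: a fall-risk score plus its main plain-language drivers."""
    client_id: str
    risk_score: float        # 0.0 (low) to 1.0 (high)
    top_factors: list[str]   # e.g. "reduced night-time activity"
    model_version: str

def explain_for(role: str, alert: FallRiskAlert) -> str:
    """Return an audience-specific explanation rather than raw model internals."""
    if role == "care_provider":
        factors = ", ".join(alert.top_factors)
        return (f"Client {alert.client_id}: fall risk is elevated ({alert.risk_score:.0%}), "
                f"mainly due to {factors}. Consider reviewing the night-time care plan.")
    if role == "manager":
        return (f"Client {alert.client_id}: risk score {alert.risk_score:.2f}; "
                "alert logged for staffing and compliance review.")
    if role == "company":
        return (f"Alert generated by model {alert.model_version}; "
                "feature attributions stored for audit and regulatory documentation.")
    raise ValueError(f"Unknown stakeholder role: {role}")

alert = FallRiskAlert("A-102", 0.78,
                      ["reduced night-time activity", "recent medication change"], "v2.3")
for role in ("care_provider", "manager", "company"):
    print(explain_for(role, alert))
```

The point of the sketch is the separation it makes visible: the same underlying model output supports an actionable message for care providers, an operational record for managers, and an audit trail for the company, without exposing any group to algorithmic detail it does not need.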

The authors recommend extending research to include older adults themselves as stakeholders in future studies. This approach would help determine what forms and degrees of explainability are most useful for those receiving care and ensure that XAI is designed not only for professionals but also for the people it ultimately serves.

The study also calls for research on how XAI is enacted in real-world care settings. Understanding the interaction between technical features of explainability and the practical contexts of use can inform guidelines for developers and policymakers seeking to make AI tools both trustworthy and effective.

FIRST PUBLISHED IN: Devdiscourse