Awareness, not just transparency, is key to responsible AI adoption

Artificial intelligence is increasingly embedded in daily life, yet awareness of its presence is far from evenly spread across society. A new study examines how different social groups recognize AI in everyday technologies and how this awareness influences beliefs about trust, control, and the role of AI in society.
The research, titled "Seeing is believing: societal differences in AI awareness and the link to AI-related beliefs" and published in AI & Society, shows that gender, age, and education are significant predictors of AI awareness. The findings carry major implications for how transparency and explainability, the foundations of responsible AI, should be approached in practice.
The study is based on survey data from more than 600 participants in the United Kingdom and Australia.
Who is more likely to recognize AI in everyday life?
The study reveals that AI awareness is not distributed equally across demographic groups. Men are consistently more likely than women to recognize AI in personal technologies such as voice assistants, streaming platforms, or social media feeds. Age also plays a crucial role, with younger individuals demonstrating greater awareness than older participants.
Education further shapes awareness, particularly in the UK, where individuals with degrees showed significantly higher levels of recognition compared with those without. These gaps become even sharper when demographics intersect, meaning older women with lower educational attainment are the least likely to recognize AI in the technologies they use every day.
The authors argue that these awareness gaps mirror broader digital divides that have long shaped access to technology. As AI becomes more embedded across sectors, those who remain unaware of its presence may find themselves disadvantaged not only in terms of use but also in trust, acceptance, and control over how AI affects their lives.
How does awareness influence beliefs about AI?
Beyond identifying who is most aware, the study explores how awareness correlates with attitudes toward AI. The results suggest a clear link: those who recognize AI in their everyday technologies tend to hold more positive beliefs about its trustworthiness, usefulness, and manageability.
In the UK, a strong relationship was observed between general AI awareness and favorable views of AI’s future impact. Participants who were aware of AI were more likely to believe it would contribute positively to their personal futures. Conversely, individuals with low awareness often expressed skepticism, distrust, and a diminished sense of control.
This connection indicates that awareness acts as a gateway to forming balanced perspectives on AI. Without recognition that AI underpins many common technologies, people may be left with abstract fears or misplaced skepticism. The study highlights that efforts to build trust in AI cannot rely solely on making systems transparent or explainable, because those measures are ineffective if users do not even realize AI is at work.
What does this mean for responsible AI development?
The authors argue that AI awareness must be treated as a prerequisite for responsible development and deployment. Transparency and explainability have become central to ethical guidelines worldwide, but these principles risk becoming hollow if public awareness is low. If people cannot identify AI within the tools they use, explanations of how systems function will have little practical meaning.
Bridging the awareness gap, therefore, becomes a central challenge. The study points to targeted strategies, including AI literacy initiatives that reach women, older adults, and individuals with lower educational attainment. Without such efforts, existing social divides could deepen, leaving some groups excluded from the benefits of AI while others shape the discourse and direction of its adoption.
The findings also carry implications for policy. Regulators and developers must recognize that inequalities in awareness can influence not just acceptance but also the fairness of AI’s social rollout. Designing educational programs and public information campaigns tailored to underrepresented groups will be key to achieving more equitable outcomes.
First published in: Devdiscourse