AI’s simulated sociality raises cultural and ethical concerns

A new special issue highlights how technologies that mimic social behavior, a phenomenon known as artificial sociality, are reshaping the dynamics of human–machine interaction. The collection, titled “Decoding Artificial Sociality: Technologies, Dynamics, Implications” and published in New Media & Society, brings together critical research that examines the rise of AI-powered chatbots, virtual influencers, and voice assistants, and the profound social, cultural, and ethical consequences of their widespread adoption.
The editors argue that artificial sociality is not true social interaction but an illusion of sociability, a dynamic in which users project human-like qualities onto AI systems. As tools such as ChatGPT, Gemini, Replika, and Character AI become deeply embedded in everyday life, the special issue urges policymakers, developers, and scholars to consider the far-reaching implications of this shift.
Redefining human–machine relationships in the age of artificial sociality
The collection sets out to analyze how simulated social behavior changes human–machine relationships. While AI systems lack genuine emotions or consciousness, they are increasingly able to produce responses that resemble empathy, humor, and companionship. This performance of social cues influences how people perceive and interact with machines.
Contributors to the issue highlight that users often ascribe human-like motivations and emotions to AI tools, blurring the line between social and technological interaction. This projection is amplified by the human tendency to respond socially to conversational interfaces. As a result, artificial sociality is becoming normalized, shaping expectations of what communication should look like in digital environments.
The research points out that this growing normalization brings with it new vulnerabilities. People may develop emotional dependencies on AI companions, rely on them for decision-making, or misunderstand the limits of their capabilities. These trends underline the need for better public awareness of the distinction between human empathy and machine simulation.
Data exploitation, bias, and the burden on users
A second major focus of the special issue is the infrastructure that sustains artificial sociality, including the ways in which it exploits user input and perpetuates bias. The editors and contributors note that most AI-driven social systems depend on vast amounts of user-generated data to refine their responses. In effect, users often provide unpaid labor that trains and improves these systems.
For example, one contribution highlights the experience of Replika users, whose everyday conversations with the chatbot enhance the model’s accuracy and functionality without acknowledgment or compensation. This reliance on what the researchers call affective labor raises questions about fairness and transparency in the data economy.
The issue also draws attention to the social and cultural biases embedded in AI models. By analyzing how generative AI responds to prompts and represents professions, relationships, and behaviors, contributors reveal persistent gender, racial, and class-based stereotypes. These biases, they argue, reinforce inequalities and shape the kinds of social interactions users experience with machines. The environmental costs of data storage and processing for these AI systems further complicate the ethics of their use, pointing to a global resource burden often overlooked in public discourse.
Navigating authenticity, deception, and ethical challenges
The third key question revolves around the ethical dimensions of authenticity and deception in AI-mediated interactions. Contributors examine how simulated sociality can blur the boundary between genuine and artificial engagement, raising issues of trust and accountability.
One example is the rise of virtual influencers, which embody a form of “meta-authenticity”: they present themselves as personable and relatable while being entirely synthetic creations designed for commercial and cultural influence. These entities raise concerns about manipulation and the shifting standards of authenticity in digital communication.
The issue also considers new frameworks for assessing the capabilities of AI systems. The proposed Conversational Action Test, for instance, aims to evaluate an AI’s ability to engage in meaningful, contextually appropriate dialogue rather than simply mimic surface-level conversational patterns. Such approaches could help policymakers and technologists distinguish between tools that support genuine user needs and those that risk misleading or manipulating people.
At the societal level, the editors argue that artificial sociality’s increasing role in everyday life requires interdisciplinary approaches to research and regulation. Combining media studies, algorithmic audits, and ethnographic methods can provide a fuller picture of how these technologies are integrated into social practices and how they should be governed.
First published in: Devdiscourse