How culture and emotion drive ChatGPT use in emerging digital societies
Generative AI tools like ChatGPT are becoming embedded in everyday digital routines. What began as a text-based assistant is rapidly evolving into a personalized companion, a productivity coach and, for some, a virtual confidant.
A new study provides insight into this phenomenon by focusing on how users in Indonesia, a country with high digital penetration and strong interpersonal norms, interact with ChatGPT across cultural, emotional, and practical dimensions. Titled “Human-AI in Affordance Perspective: A Study on ChatGPT Users in the Context of Indonesian Users” and published in Frontiers in Computer Science, the research offers a rare ethnographic view into the domestication of AI in one of Southeast Asia’s most digitally connected societies.
How are Indonesian users engaging with ChatGPT?
The study categorizes users into three archetypes: beginner information seekers, prompt engineers, and social dialoguers. Each group reflects a distinct mode of interaction with, and set of expectations of, the AI system.
Beginner users typically turn to ChatGPT for quick tasks: summarizing documents, translating text, and clarifying complex material. They view the tool as a utility, though some show early signs of overdependence. A preference for chatbot responses over self-guided search, for instance, signals a potential decline in critical evaluation skills.
Prompt engineers, on the other hand, engage in iterative exchanges with ChatGPT, optimizing prompts to fine-tune responses. These users treat the AI as a collaborative assistant, capable of co-generating content or simulating scenarios. Their behavior reflects advanced digital literacy but also reveals how quickly interaction can shift from assistance to a quasi-strategic game of manipulation and mastery.
A third group, the social dialoguers, exhibits emotionally charged relationships with ChatGPT. These users turn to it for conversation, affirmation, and support, and assign it human-like traits, responding with gratitude, encouragement, or even empathy. Rituals such as saying “thank you” or seeking motivational support reveal that the AI is perceived not just as a tool but as a digital confidant.
What roles do cultural values and emotions play?
The research shows that human-AI interaction is heavily shaped by cultural context. In Indonesia, a society rooted in collectivism, politeness, and social warmth, users often engage ChatGPT in ways that mirror interpersonal norms. The AI is greeted politely, corrected gently, and even comforted when it apologizes for errors.
This cultural layer contributes to a blending of emotional and functional use. The same user may ask for a CV template and, in the next prompt, request a cheerful quote or a pep talk. ChatGPT becomes a responsive mirror for both pragmatic and affective needs.
Emotion plays a pivotal role in shaping perceptions of usefulness. When the AI delivers coherent, helpful responses, users report satisfaction and trust. When it falters, the reactions include frustration or disappointment, emotions typically reserved for human interaction. This emotional oscillation illustrates the level of investment users place in the chatbot’s perceived intelligence and reliability.
The humanization of AI, known as anthropomorphism, takes a specific cultural shape in the Indonesian context. Interactions are not merely instrumental but reflect a social bond, even when users are intellectually aware that ChatGPT is not sentient. The emotional salience of an interaction is often stronger than its informational content.
What concerns arise around ethics, literacy and dependence?
While the study captures innovative and adaptive uses of ChatGPT, it also sheds light on pressing ethical concerns. A recurring theme is the growing reliance on ChatGPT for academic tasks, including completing assignments and generating essays. This behavior raises red flags around plagiarism, learning erosion, and fairness.
More troubling are cases where users treat ChatGPT as a surrogate for emotional counseling or social support. Some seek comfort during personal crises or treat the AI as a nonjudgmental friend. This level of emotional involvement blurs boundaries between healthy digital use and potential psychological overdependence.
The authors recommend culturally responsive AI literacy initiatives to address these challenges. These should not only educate users on AI’s capabilities and limits but also help distinguish between genuine empathy and algorithmic mimicry. Developers are also urged to design affordances that are sensitive to cultural expressions but cautious about reinforcing emotional illusions.
The study points out that AI tools like ChatGPT are not neutral utilities. They are embedded in complex ecosystems of meaning, shaped by design, user intention, and socio-cultural context. In Indonesia, the interaction becomes a hybrid experience: part assistant, part emotional interlocutor.
First published in: Devdiscourse