Who owns the digital persona of the dead? AI chatbots challenge privacy norms
As artificial intelligence increasingly blurs the boundaries between life and digital afterlife, new legal and ethical questions are emerging. A new study titled “No Peace After Death? The Impact of AI-Driven Memorial Chatbots on Privacy and Data Protection”, published in the journal Information, explores the growing digital afterlife industry (DAI), focusing on AI systems that recreate deceased individuals as chatbots.
Authored by Jacopo Ciani Sciolla and Ugo Pagallo of the University of Turin, the paper delivers a sobering legal analysis of the commercial, ethical, and normative implications of post-mortem AI technologies that simulate the voices, personalities, and behaviors of the dead.
How AI-driven chatbots are reanimating the digital dead
The research details how memorial chatbots are no longer the realm of science fiction. Real-world cases such as Microsoft’s patented chatbot system and Google’s personality-enabled robot patents demonstrate that companies now possess the technical capability to emulate a deceased person’s conversational and behavioral style using personal data left behind during life. These AI systems draw from emails, social media, text messages, and voice recordings to construct simulated personalities - digital surrogates that continue to interact after death.
Rather than simply replicating a static persona, these bots are designed to evolve. They “think” by extrapolating from existing data and even generate new opinions, making them more than replicas and more like dynamic representations of the deceased. This raises immediate concerns about the authenticity, autonomy, and dignity of the digital identity being constructed.
The authors argue that while these tools may offer emotional comfort to the bereaved, they risk commodifying the "informational body" of the dead. The merging of generative AI with memorialization turns human identity into a marketable asset, potentially without the deceased’s explicit consent.
What are the privacy and legal risks of posthumous AI?
The study underscores that many jurisdictions operate under a "data freedom" regime, where deceased persons are not legally granted data protection rights under instruments like the EU’s General Data Protection Regulation (GDPR). Although some countries like Italy, Spain, Denmark, and Estonia provide post-mortem rights to control or limit data use, most do not. This legal vacuum enables AI companies to train chatbots on personal data without oversight or accountability.
In Italy, for example, the law permits data protection claims after death but only grants heirs or designated individuals the right to intervene - often reactively, rather than proactively. The system lacks mechanisms for pre-consent or clear instructions regarding AI-driven uses of personal data after death.
The researchers also highlight the tension between privacy and data protection. While privacy concerns the individual’s dignity and selfhood, data protection involves transparency and regulatory controls on information use. Both are undermined when memorial bots evolve beyond the parameters set by the data subject during life. The AI might say or do things the deceased never intended, leading to misrepresentation or emotional harm to living users who interact with them.
Further, under the EU’s AI Act, these technologies may skirt regulation. AI memorial bots aren’t explicitly labeled as “high-risk,” though they carry significant ethical and emotional implications. Article 50 of the Act mandates transparency, requiring users to be informed they are interacting with AI, but doesn’t address consent or representational fidelity after death.
Is there a path toward normative and ethical regulation?
The authors propose that existing laws must be reformed to better address the intersection of AI, identity, and death. They recommend the adoption of a coordinated legal framework across EU Member States to minimize fragmentation and inconsistency. The paper suggests that moral principles, such as the Kantian notion of “bona fama defuncti” or posthumous dignity, should be translated into enforceable legal rights.
For instance, Germany recognizes post-mortem data as inheritable, applying principles of universal succession to digital identities. In contrast, many other states leave the regulation of deceased individuals’ data to contractual platforms or weak, discretionary laws.
The Italian case study reveals both the strengths and shortcomings of existing legislation. While the country has extended data protection rights after death, it lacks proactive safeguards and clear coordination with intellectual property, inheritance, and defamation laws. Conflicting responsibilities between heirs, designated contacts, and family members further complicate enforcement.
The study ends with a call for legal systems to treat posthumous data rights not as fringe considerations, but as central to democratic governance in the digital age. Coordinated laws should ensure that AI applications preserve not only the technological accuracy of the deceased’s likeness but also their ethical and emotional legacy. This includes embedding moral safeguards into AI design, empowering users with opt-in/opt-out mechanisms, and enforcing rights through interoperable legal standards.
- FIRST PUBLISHED IN:
- Devdiscourse