Digital Hauntings: Are Deadbots Helpful or Harmful?

There isn’t a problem the AI industry won’t try to solve. But can it tackle life’s greatest certainty – death?

Didier Coeurnelle, a transhumanist and co-president of the Healthy Life Extension Society, proposes three ways that AI could kill death:

  1. Man could be merged with machine.
  2. Consciousness could be reproduced and uploaded to a computer.
  3. A virtual copy of a dead person could be produced.

Some might wonder whether any of this is currently possible. To the dismay of mediums, the answer is yes.

Thanabots, also known as deadbots or digital ghosts, are the incredibly profitable realization of Coeurnelle’s third proposition. The DeathTech industry, currently valued at over £100 billion, shows no signs of slowing down.

Promoted as tools that help with grief, deadbots take the idea of a living legacy to a whole new level. However, health professionals warn that this digital necromancy is more harmful than helpful.

Key Takeaways

  • Deadbots offer a way to maintain a relationship with the deceased.
  • Project December, HereafterAI, and DeepBrain AI are prominent examples of DeathTech that train AI on the data of the deceased to provide a range of interactive experiences.
  • Ethicists claim that such technology could have a devastating impact on the grieving process.
  • Deadbots also present ethical issues concerning the consent, dignity, and privacy of the deceased.

How to Talk to the Dead? The Rise of Deadbots

Deadbots have haunted cyberspace for several years, allowing paying customers to “converse” with the deceased through software that emulates their responses.

Among the numerous developers in this space, Jason Rohrer is perhaps the best known. Back in 2020, Rohrer used OpenAI’s GPT-3 API to build a system that lets users create personalized chatbots. He called his invention Project December.

As the website puts it, the goal is “to simulate a text-based conversation with anyone…including someone who is no longer living.”

A particularly intense example of the simulator’s use occurred in 2021, when Joshua Barbeau began exchanging text messages with a recreation of his fiancée, Jessica Pereira, who had died eight years earlier. Barbeau fed the deadbot a mixture of Jessica’s social media posts and SMS messages, which it used to emulate her personality with surprising accuracy.
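
How does a deadbot like this actually work? Project December’s internals are not public, but the general technique is to condition a large language model on samples of a person’s writing. The Python sketch below illustrates that idea using OpenAI’s current chat API; the model name, prompt wording, and sample messages are illustrative assumptions, not Rohrer’s actual design (Project December was originally built on the older GPT-3 completions interface).

```python
# A minimal sketch of persona-conditioned chat, the general technique behind
# deadbots. This is NOT Project December's actual code: the model name,
# prompt wording, and sample data below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def build_persona_prompt(name: str, sample_messages: list[str]) -> str:
    """Fold a person's saved messages into a system prompt so the model
    imitates their tone, vocabulary, and quirks."""
    samples = "\n".join(f"- {m}" for m in sample_messages)
    return (
        f"You are role-playing as {name}. Imitate the style and personality "
        f"shown in these messages they wrote:\n{samples}\n"
        f"Always stay in character as {name}."
    )


# Hypothetical stand-ins for the social media posts and SMS messages
# a user might supply.
samples = [
    "omg that movie was SO bad lol",
    "don't forget to feed the cat before you leave!!",
]

history = [{"role": "system", "content": build_persona_prompt("Jessica", samples)}]


def chat(user_message: str) -> str:
    """Send one user turn and return the persona's reply, keeping context."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


print(chat("Hey, it's me. I miss you."))
```

Note that the entire “personality” lives in the prompt: there is no fine-tuning here, just a system message stuffed with examples, which is why even modest amounts of personal data can produce eerily familiar replies.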

However, the GPT-3 version of Project December was not long for this world. The Barbeau case violated OpenAI’s prohibitions on its technology being used for sexual, romantic, self-harm, or bullying purposes. Rohrer couldn’t see the problem. He claimed that “OpenAI’s approach [was] hyper-moralistic” and that Barbeau was a consenting adult. The potential rights of poor Jessica didn’t get a mention.

Another notable developer is James Vlahos, who transformed an oral history project of his father’s life story into a talking deadbot. This surpasses Project December’s spookiness because the AI converses with James in his father’s voice.

The BBC recently reported that Vlahos wanted to find “a way to more richly keep his [father’s] memories and some sense of his personality.”

In 2019, Vlahos founded HereafterAI, an app that affords its users the same experience…if they dare.

And if you were wondering whether DeathTech could get any stranger, it can.

By “shooting hours of video and audio to capture [someone’s] face, voice and mannerisms,” South Korea’s DeepBrain AI can create a “video-based avatar of a person.”

Michael Jung, DeepBrain’s chief financial officer, told the BBC: “We are cloning the person’s likeness to 96.5% of the similarity of the original person.”

I’m not sure you could get much closer to a Black Mirror episode.

What Are the Ethical Concerns With Deadbots?

The editor-in-chief of TechRound, David Soffer, believes that deadbots are a positive innovation:

“When technology evolves to solve technological problems, that’s good,” he asserts, “but when it helps solve non-technological problems, like the grieving process, that’s the real purpose of technology.”

However, many others are concerned about the negative impact that an AI death bot could have on the grieving process.

Fiorenza Gamba, a socio-anthropologist at the University of Geneva, has stated that deadbots “can in some cases plunge the still-living into an inability to move on from mourning.”

Similar concerns have been expressed in a recent research article by ethicists from Cambridge’s Leverhulme Centre for the Future of Intelligence.

The paper, titled ‘Griefbots, Deadbots, Postmortem Avatars: on Responsible Applications of Generative AI in the Digital Afterlife Industry,’ describes how DeathTech companies could exploit the software to bombard relatives with “sporadic advertisements” featuring digital recreations of their loved ones.

According to the researchers, this experience could lead to service users feeling like they are being “stalked by the dead.”

Dr. Tomasz Hollanek, one of the paper’s co-authors, warned:

“These services run the risk of causing huge distress to people if they are subjected to unwanted digital hauntings from alarmingly accurate AI recreations…The potential psychological effect, particularly at an already difficult time, could be devastating.”

However, it is not just the living who need protection. Several speakers at the 14th European Forum on Bioethics stressed the importance of establishing whether the dead, when alive, would have consented to their data being used. There was also much discussion about respecting the privacy and dignity of the deceased.

The issue of consent is something that Sara Suárez-Gonzalo addresses in an article for The Conversation. A salient question emerges from her reflections on the Joshua Barbeau case:

“If we agree that it is unethical to use people’s data without their consent while they are alive, why should it be ethical to do so after their death?”

She also highlights that “even if the dead cannot be harmed or offended in the same way [as] the living, this does not mean that they are invulnerable to bad actions.”

The dead, she argues, “can suffer damages to their honor, reputation or dignity…and disrespect toward the dead also harms those close to them.”

So, if we create AI bots to talk to, especially ones resembling the dead, we must exercise extreme caution.

It might be simpler if we allow the dead to remain buried.

The Bottom Line

The Leverhulme study advises that companies be more transparent about how their DeathTech products are used and ensure that safeguards are in place so users can end their relationships with deadbots.

Who knows what extended periods of engagement with these digital cadavers could do to the living? “Pathological mourning” is already a recognized phenomenon; what happens if the line between reality and simulation starts to blur? The damage to a person’s mental health could be colossal.

With all that being said, if you are wondering how to talk to the dead, or you start to entertain the idea of dabbling in DeathTech, it might be safer to lay that idea to rest.

FAQs

What is death technology?

Death technology, or DeathTech, refers to products and services that apply technology to death and remembrance, from digital memorials to AI recreations of the deceased, such as deadbots.

Are grief bots ethical?

Opinion is divided. Supporters see them as tools that comfort the bereaved, while ethicists warn they can disrupt the grieving process and raise concerns about the consent, privacy, and dignity of the deceased.

What are Thanabots?

Thanabots, also known as deadbots or digital ghosts, are AI chatbots trained on a deceased person’s data so that the living can hold simulated conversations with them.

John Raspin
Technology Journalist

John Raspin spent eight years in academia before joining Techopedia as a technology journalist in 2024. He holds a degree in Creative Writing and a PhD in English Literature. His interests lie in AI and he writes fun and authoritative articles on the latest trends and technological advancements. When he's not thinking about LLMs, he enjoys running, reading and writing songs.