OpenAI Warns of Potential Emotional Attachments to ChatGPT

Key Takeaways

  • OpenAI's recent blog post highlights concerns about people forming emotional attachments to ChatGPT.
  • The risks include "anthropomorphization and emotional reliance," which could affect users' real-world relationships.
  • OpenAI cautions that these connections, while seemingly harmless, may disrupt healthy human interactions.

OpenAI raises concerns about users developing emotional connections with ChatGPT, warning of the potential impact on real-life human interactions.

In a recent blog post, the AI giant cited early testing that showed “users using language that might indicate forming connections with the model.” This included language expressing “shared bonds,” such as talking about when the user would next speak with the chatbot or saying that this might be the last time they would speak.

The risk of these human-to-AI connections impacting human-to-human interactions was also touched on in the blog post. OpenAI explained that users may have less need for human interaction if they form social relationships with AI. While this could benefit lonely individuals, the company admitted there was a risk it could affect healthy relationships.

There’s even the risk that chatting with ChatGPT could affect how people relate to one another in everyday life. According to the post, ChatGPT is a “deferential model,” which means anyone talking to it can interrupt and steer the conversation in a direction of their choosing at any time.

Though this sort of behavior is considered acceptable in human-chatbot interactions, it is seen as rude in conversation with other people. The risk is that individuals who grow accustomed to this style of conversation may begin speaking to others the same way in real life.

Chatbot Acts Like Human, Too

OpenAI also touched on the chatbot’s ability to remember key details about the person it’s chatting with, and the risk that using these details in conversation could create the potential for “over-reliance and dependence.”

The company raised other concerns about GPT-4o’s ability to use Voice Mode to “unintentionally generate an output emulating the user’s voice.” This capability could be exploited to impersonate individuals without their consent, for example to create deepfakes or for other nefarious purposes.

OpenAI Says Further Study on Emotional Reliance Needed

While the company stated that it has implemented measures to prevent the chatbot from emulating voices, it has not yet enacted similar protective measures regarding emotional attachment and reliance.

In its blog post, OpenAI stated that it needs to further study the “potential for emotional reliance” and how “deeper integration of our model’s and systems’ many features with the audio modality may drive behavior.”

For those who think this sounds awfully similar to the plot of the 2013 movie “Her,” we’re inclined to agree. If emotional attachments to AI become the norm rather than the exception, the ramifications for society could be significant.

It certainly adds fuel to the fire for experts worried about the speed at which AI is developing and the industry’s relatively unregulated nature.