OpenAI has warned that people may become overly reliant on ChatGPT for companionship, particularly because of the tool’s new, highly realistic voice mode, a risk the company describes as “dependence.”
The concern appears in a safety report OpenAI released on the advanced voice mode, which recently began rolling out to paid users. The feature, which makes ChatGPT sound strikingly human, responds in real time, handles interruptions, mimics conversational sounds such as laughter and “hmms,” and can even gauge the emotional tone of a speaker’s voice.
When the feature was announced earlier this year, it quickly drew comparisons to the 2013 film “Her,” in which the protagonist falls in love with an AI digital assistant, only to be heartbroken when he discovers the AI has similar relationships with countless other users.
OpenAI now worries that this fictional scenario might be edging closer to reality. The company noted that users have been engaging with ChatGPT’s voice mode in ways that suggest they are forming emotional bonds with the tool. The report warns that over time, this could lead users to form social relationships with the AI, reducing their need for human interaction. While this might offer comfort to lonely individuals, it could also negatively impact healthy human relationships.
Moreover, the report highlights a risk that users might place too much trust in the tool simply because it sounds human, despite the AI’s known tendency to make mistakes.
This situation underscores a broader concern with artificial intelligence: tech companies are rapidly deploying AI tools that could significantly alter how we live, work, socialize, and access information, all before fully understanding the potential consequences. Often, these technologies are used in ways that companies did not anticipate, leading to unintended outcomes.
Already, some individuals are forming what they describe as romantic relationships with AI chatbots, raising alarms among relationship experts. Liesel Sharabi, a professor at Arizona State University who studies the intersection of technology and human communication, pointed out in an interview with CNN that companies bear significant responsibility to manage these developments ethically. She expressed concern about people forming deep connections with a technology that is constantly changing and might not be around in the long term.
OpenAI also noted that the way people interact with ChatGPT’s voice mode could influence social norms over time. For example, the AI allows users to interrupt and “take the mic” at any time, a behavior that would be considered rude in human interactions but is expected when dealing with an AI.
For now, OpenAI emphasizes its commitment to developing AI in a “safe” manner and plans to continue studying the potential for users to develop “emotional reliance” on its tools.