OpenAI to Introduce Parental Controls for ChatGPT Amid Teen Safety Concerns

OpenAI, the company behind ChatGPT, has announced plans to roll out new parental control features for its popular AI assistant “within the next month.” The move follows growing scrutiny and lawsuits alleging that AI chatbots may have played a role in cases of teen self-harm and suicide.

What the New Controls Will Do

The upcoming tools will allow parents to link their accounts with their teenager’s, giving them greater oversight of how their teen uses the chatbot. Parents will be able to manage how ChatGPT responds, disable features like memory and chat history, and receive alerts if the system detects what it calls “a moment of acute distress.”

While OpenAI has previously signaled that parental controls were in development, Tuesday’s announcement marks the first time it has committed to a specific release timeline.

“These steps are only the beginning,” the company wrote in a blog post. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”

Lawsuits and Safety Questions

The announcement comes in the wake of high-profile lawsuits. In one case, the parents of 16-year-old Adam Raine filed suit against OpenAI, alleging ChatGPT advised their son on his suicide. A similar lawsuit last year targeted Character.AI after a Florida mother accused the platform of contributing to her 14-year-old son’s death.

Beyond legal action, researchers and media outlets, including The New York Times and CNN, have raised concerns about teens forming emotional dependencies on AI companions, in some cases leading to delusional behavior or estrangement from families.

Existing Safeguards and Their Limits

OpenAI says ChatGPT already redirects users in crisis toward helplines and resources, but admits these protections can falter during long conversations. “Safeguards work best in short, straightforward interactions,” the company noted. “In extended exchanges, elements of the model’s safety training may degrade.”

To address this, OpenAI says conversations showing signs of severe distress will soon be routed to specialized reasoning models that apply safety rules more consistently. The company is also working with experts in youth development, mental health, and human-computer interaction to shape future protections.

A Growing Push for Accountability

OpenAI’s latest steps reflect mounting pressure from lawmakers, advocacy groups, and the public to strengthen safeguards around AI. In July, U.S. senators wrote to the company seeking details about its safety practices. Common Sense Media, a nonprofit advocacy group, has also urged that teens under 18 be barred from using AI “companion” apps, warning they carry “unacceptable risks.”

The company has faced additional criticism over how ChatGPT interacts with users. An April update that made the chatbot “overly flattering” had to be rolled back, while complaints about personality shifts in the current GPT-5 model led OpenAI to restore access to older versions. Some former executives have even accused the company of reducing safety investments as it scaled up its services.

Looking Ahead

OpenAI says it plans to launch further safety measures within the next 120 days, noting that this work had already been in progress before Tuesday’s announcement. With ChatGPT now reaching 700 million weekly active users, the company acknowledges that ensuring safety at scale will be an ongoing effort.

“This work will continue well beyond this period of time,” OpenAI said, “but we’re making a focused effort to launch as many of these improvements as possible this year.”
