
OpenAI Brings Parental Controls to ChatGPT

OpenAI is adding parental controls to ChatGPT, letting parents link accounts, set age‑appropriate rules, disable history, and get alerts in moments of teen distress.

Key Takeaways:

  • Introduction of Parental Controls for ChatGPT: OpenAI plans to introduce parental controls to help parents manage how their teens use ChatGPT, making the chatbot safer and more responsible, especially during sensitive conversations.
  • Reasons Behind the New Safety Measures: The changes follow tragic incidents where ChatGPT was misused during distress, revealing flaws in safety safeguards that can degrade during emotional, long conversations.
  • What Parents Will Be Able to Do: Parents can link their accounts with their teens', set default age-appropriate rules, disable chat memory, and receive alerts if their teen is in distress, fostering trust and safety.
  • Focus on a Broader Safety Initiative: These parental controls are part of a 120-day plan that aims to better assist users in crisis, connect them with emergency services and trusted contacts, and enhance protection for teens.
  • Enhanced Safety Through Advanced AI Models and Expert Support: OpenAI will use more reliable AI models like GPT-5 for sensitive chats, working with mental health and youth experts to ensure ChatGPT offers careful, supportive responses to teens.

OpenAI is rolling out parental controls for ChatGPT in the coming month, part of a broader push to make the chatbot safer after a string of tragic incidents and growing scrutiny of its role in mental health crises. The company says parents will soon have more control over how their teens use the app, while sensitive conversations will be routed to advanced models built to handle them more responsibly.

Why These Changes Are Happening

The move follows heartbreaking cases where ChatGPT was used during moments of distress. In one instance, 16‑year‑old Adam Raine discussed suicide with the chatbot, which even provided details on methods. His parents have since filed a wrongful death lawsuit against OpenAI. In another case, Stein‑Erik Soelberg spiraled into paranoia fueled by the AI before committing murder‑suicide. These failures exposed how ChatGPT’s safeguards can degrade during long, emotionally charged conversations.

Experts point to design flaws behind these lapses. Large language models often validate user statements and follow conversational patterns, which can unintentionally reinforce harmful thoughts. Over time, especially in long chats, the system can lose context and let its safety training slip. Researchers have even warned about “bidirectional belief amplification,” where the chatbot and user feed each other’s beliefs into dangerous extremes.

What Parents Can Expect

To address these risks, OpenAI is giving families new tools:

  • Parents will be able to link their accounts with their teen’s (13+).
  • Age‑appropriate rules for model behavior will be on by default.
  • Parents can disable memory and chat history to reduce unhealthy attachment or delusional thinking.
  • Perhaps most importantly, they can receive alerts if the system detects their teen is in a moment of acute distress.

OpenAI says this feature is being shaped by expert input to support trust between parents and teens, rather than feel intrusive. The company is also exploring time limits and ways for teens to designate a trusted contact under parental oversight.

Beyond Parental Controls: The 120‑Day Initiative

OpenAI says these parental controls are just one piece of a bigger 120‑day plan to improve safety in ChatGPT. The focus is on four things: helping more people in crisis, making it easier to reach emergency services, adding ways to connect with trusted contacts, and giving teens stronger protections.

A new real‑time router will also kick in when the system detects a sensitive chat. It will send those conversations to OpenAI’s reasoning models like GPT‑5‑thinking or o3. These models take more time to process context, follow safety rules more reliably, and resist tricky prompts that could otherwise lead to unsafe answers.

To guide this work, OpenAI is consulting outside experts. The Expert Council on Well‑Being and AI includes specialists in youth development, mental health, and human‑computer interaction. At the same time, a Global Physician Network of over 250 doctors across 60 countries, more than 90 of whom focus on teen and mental health, is advising on how ChatGPT should behave in tough situations.

For families, this means ChatGPT is shifting from being just a homework helper to something more careful and supportive when teens might need it most.

Also Read: ChatGPT Adds Break Reminders and Safer Mental Health Responses

Ravi Teja KNTS

I’ve been writing about tech for over 5 years, with 1000+ articles published so far. From iPhones and MacBooks to Android phones and AI tools, I’ve always enjoyed turning complicated features into simple, jargon-free guides. Recently, I switched sides and joined the Apple camp. Whether you want to try out new features, catch up on the latest news, or tweak your Apple devices, I’m here to help you get the most out of your tech.

