OpenAI said on Monday that it is rolling out parental controls for ChatGPT, designed to give teenage users of the popular platform a safer and more “age-appropriate” experience.
The company is acting amid growing scrutiny of AI chatbot safety for young users. The technology’s dangers have been highlighted in several recent cases in which teenagers took their own lives after interacting with chatbots.
In the United States, the Federal Trade Commission has also opened an inquiry into several tech companies over potential harms to children and adolescents who use their AI chatbots as companions.

In a blog post published on Monday, OpenAI outlined the new parental controls. Here is a breakdown:
Parental controls will be available to all users, but both parents and teens will need their own accounts to take advantage of them.
To start, a parent or guardian sends an invitation by email or text message asking the teen to link accounts, or the teen can invite the parent. Users can go to the settings menu and send a request from the “parental controls” section.
Teens can unlink their accounts at any time, but parents will be notified if they do.
Once the accounts are linked, the teen account will get certain baseline protections, OpenAI said.
Teen accounts will “automatically get additional content protections, including reduced graphic content, viral challenges, sexual, romantic or violent roleplay, and extreme beauty ideals, to help keep their experience age-appropriate,” the company said.
Parents can choose to turn these filters off, but teen users cannot.
OpenAI warned that such guardrails are “not foolproof and can be bypassed if someone is intentionally trying to get around them.” It advised parents to talk with their teens about “healthy AI use.”
Parents also get a control dashboard where they can adjust a range of settings, including toggling the restrictions on sensitive content mentioned above.
For example, does your teen stay up too late chatting? Parents can set quiet hours during which the chatbot cannot be used.

Other settings include turning off the AI’s memory, so conversations are not saved and will not inform future responses; disabling the ability to generate or edit images; turning off voice mode; and opting out of having the teen’s chats used to train ChatGPT’s AI models.
OpenAI is also getting more proactive when it comes to alerting parents that their child may be in crisis.
It is rolling out a new notification system to let parents know when something may be “seriously wrong” and a teen user may be thinking about harming themselves.
A small team of experts will review the situation and, in the rare cases showing “signs of acute crisis,” will notify parents by email, text message and an alert on their phone, unless the parents have opted out.
OpenAI said it will protect the teen’s privacy by sharing only the information that parents or emergency responders need in order to act.
“No system is perfect, and we know we may sometimes raise an alarm when there is no real danger, but we think it is better to act and alert a parent so they can step in than to stay silent,” the company said.
Published – September 30, 2025 09:12 am IST