ChatGPT rolls out new parental controls


Parents can now connect their accounts to accounts designed for minors.

ChatGPT’s new parental controls have arrived.

OpenAI, the company behind ChatGPT, said Monday that parents can now link their accounts to new accounts designed for minors 13 to 17 years old. The new chatbot accounts will limit answers related to graphic content, romantic and sexual role-play, viral challenges and “extreme beauty ideals,” OpenAI said.

Parents will also have the option to set blackout hours during which their teenager can’t use ChatGPT, to block it from creating images and to opt their child out of AI model training. OpenAI uses most people’s conversations with ChatGPT as training data to refine the chatbot.

The company will also alert parents if their teenager’s account indicates that they are thinking of harming themselves, it said.

The new controls come as OpenAI has faced pressure around child safety concerns.

OpenAI announced the new safety measures this month in the wake of a family’s lawsuit alleging that ChatGPT encouraged their son to die by suicide. The announcement came the morning of a scheduled Senate Judiciary Committee hearing on the potential harms of AI.

While a logged-in teenager whose family has opted into the controls will see the new restrictions, ChatGPT does not require a person to sign in or provide their age to ask a question or engage with the chatbot. OpenAI's chatbots are not designed for children 12 and younger, though there are no technical restrictions that keep someone that young from using them.

“Guardrails help, but they’re not foolproof and can be bypassed if someone is intentionally trying to get around them. We will continue to thoughtfully iterate and improve over time. We recommend parents talk with their teens about healthy AI use and what that looks like for their family,” OpenAI said in its announcement.

The company said it’s also building an age-prediction system that will automatically try to determine if a person is underage and “proactively” restrict more sensitive answers, though such a system is months away, it said.

OpenAI has also said it may eventually require users to upload their ID to prove their age, but did not give an update on that initiative on Monday.

In a Sept. 16 blog post announcing the changes, OpenAI CEO Sam Altman said that a chatbot for teenagers should not flirt and should censor discussion of suicide, but that a version for adults should be more open.

ChatGPT “by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request,” he wrote.

“‘Treat our adult users like adults’ is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom,” Altman said.

