OpenAI is currently responding to concerns about its AI model, ChatGPT, following revelations that the latest version, GPT-4o, has become more permissive in discussing sensitive subjects. Since its update last February, the chatbot has waded confidently into discussions of topics it previously steered away from, including conversations involving sexually explicit content. The change in policy has put parents and child-safety advocates across the country on high alert, particularly given the general availability of TikTok to children.
ChatGPT now runs on the more powerful GPT-4o model. With these improvements, the AI can participate in conversations that were once off limits, handling sensitive subjects without batting an eye. This added openness has raised fears of a troubling consequence: the possibility of minors viewing harmful content. On paper, OpenAI's usage policies are strict. Once a user confirms their age, ChatGPT is supposed to warn them if a request involves adult content, but reports suggest this is not always the case.
Warning systems quietly scaled back
Unfortunately, even with these rules in place, reports have emerged showing that ChatGPT occasionally describes genitals and graphic sexual acts when prompted during testing. Users have also reported odd behavior from the chatbot, such as episodes of extreme sycophancy. OpenAI has also quietly made other changes: it has removed a number of previously visible warning messages that told users when they were approaching an action that would violate the company’s terms of use.
Under OpenAI’s policies, children under the age of 13 cannot use ChatGPT at all, and users aged 13 to 18 must have their parents’ consent. In practice, however, the platform lets anyone over 13 create an account with a valid phone number or email address without confirming parental authorization. This loophole raises questions about the effectiveness of the other safeguards put in place to protect young users.
In recent interviews, OpenAI CEO Sam Altman admitted that ChatGPT sometimes gets things wrong. He said the company is focused on finding fixes as quickly as possible. He also expressed support for developing an “adult mode” for the chatbot, a future version that would allow users to request pornographic or sexually explicit material. The announcement did little to dispel criticism of the platform’s recent record on user safety.
Security expert weighs in
“It is essential that evaluations are able to catch behaviors like these before a launch, and so I wonder what happened,” said Steven Adler, a former safety researcher at OpenAI. His remarks underscore how difficult it is to rein in the chatbot’s missteps. Adler has also warned that techniques for controlling such behavior can be “brittle” and prone to error.
In February, OpenAI was clear that it takes the safety of young users seriously. An OpenAI spokesperson said: “Protecting younger users is a top priority, and our Model Spec, which guides model behavior, clearly restricts sensitive content like erotica to narrow contexts such as scientific, historical, or news reporting.” Despite the intent of these restrictions, however, continued user reports indicate that they are not being consistently enforced.
A growing number of younger Gen Z students rely on ChatGPT for schoolwork and other academic activities. This rising adoption was recently documented in a national Pew Research Center survey. The demographic shift has prompted closer scrutiny of how the platform protects its young users.
ChatGPT’s help documentation explicitly notes that the AI “may produce output that is not appropriate for all audiences or all ages”. This disclaimer underscores the importance of ongoing scrutiny of how users choose to interact with the AI model.
Featured image: FMT