OpenAI's ChatGPT can write, rewrite, or paraphrase any given text. While AI-generated content saves time for many, the tool has also become notorious in education. Since its launch, ChatGPT has been a source of concern, and the debate over students using artificial intelligence to cheat has been widespread. But what if OpenAI said there was a way to detect text written by its AI? OpenAI has a method that can detect when someone uses ChatGPT to write. According to the Wall Street Journal, the detection tool has been ready for release for about a year now. But it seems OpenAI is not ready to give it the green light for the moment.
The report suggests the delay in releasing the tool is largely about attracting and retaining users. A survey the company conducted of loyal ChatGPT users found that nearly a third would be put off by the anti-cheating technology. The Center for Democracy and Technology, a non-profit focused on technology policy, also found that 59% of middle- and high-school teachers were sure some students had used AI to help with schoolwork, up 17 points from the previous school year.
The Wall Street Journal report quoted an OpenAI spokesperson, who said the decision to keep the anti-cheating tool under wraps comes down to its risks and complexity. Given those complexities, a launch would likely affect the wider ecosystem beyond OpenAI.
OpenAI's anti-cheating tool
OpenAI's anti-cheating tool modifies the way ChatGPT selects words or word fragments (tokens) when generating text. The modification introduces a subtle pattern, known as a watermark, into the generated text, making potential cheating or misuse detectable.
The watermarks, while undetectable to humans, would be recognizable by OpenAI's detection technology, which assigns a score indicating the probability that a document or passage was generated by ChatGPT.
Internal documents reportedly show that the watermarking technique is highly effective, with a claimed accuracy of 99.9%. That accuracy holds only when ChatGPT produces a substantial amount of new text, which is what allows precise identification of AI-generated content.
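The report does not describe how OpenAI's watermark actually works, and the company has not published the method. As a rough illustration of the general idea, academic text-watermarking schemes bias token selection toward a keyed, pseudo-random "green" subset of the vocabulary and then score how over-represented that subset is in a suspect text. The toy Python sketch below uses an invented vocabulary, a hypothetical green-list rule, and made-up parameters purely to show why detection becomes reliable only once there is a substantial amount of text; it is not OpenAI's algorithm.

```python
import hashlib
import math
import random

# Illustrative only: a generic green-list watermark, not OpenAI's scheme.
VOCAB = ["the", "a", "model", "text", "tool", "writes", "detects", "subtle",
         "pattern", "token", "school", "student", "essay", "answer"]


def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Derive a keyed, pseudo-random subset of the vocabulary from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))


def generate_watermarked(n_tokens: int, bias: float = 0.8) -> list[str]:
    """Toy generator: with probability `bias`, pick the next token from the green list."""
    rng = random.Random(42)
    tokens = [rng.choice(VOCAB)]
    for _ in range(n_tokens - 1):
        greens = sorted(green_list(tokens[-1]))
        pool = greens if rng.random() < bias else VOCAB
        tokens.append(rng.choice(pool))
    return tokens


def detection_score(tokens: list[str], fraction: float = 0.5) -> float:
    """z-score for how many tokens fall in their green list; higher means more likely watermarked."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev, fraction))
    n = len(tokens) - 1
    expected = n * fraction
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std


if __name__ == "__main__":
    watermarked = generate_watermarked(200)
    print(f"watermarked z-score:   {detection_score(watermarked):.1f}")  # large positive

    rng = random.Random(7)
    plain = [rng.choice(VOCAB) for _ in range(200)]
    print(f"unwatermarked z-score: {detection_score(plain):.1f}")        # near zero
```

In a sketch like this, the detection score grows with the length of the text, which mirrors the report's point that the watermark is reliable only when ChatGPT produces a substantial amount of new content.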
Even then, there are concerns that the watermarks could be erased by simple workarounds, such as running the text through Google Translate into another language and back, or asking ChatGPT to add emojis to the text and then deleting them manually.
But the main problem, according to the report, is deciding who should be able to use the tool if and when it is released. If too few people have access, the tool would not be useful; if too many do, bad actors could reverse-engineer the company's watermarking technique.
And that is just for text. OpenAI has already released AI detection tools for images and audio. The company has prioritized watermarking technologies for audio and visual content over text because AI-generated multimedia, such as deepfakes, carries potentially more severe consequences than text-based content.