We've said it before, and we'll say it again: Do not enter anything into ChatGPT that you don't want unauthorized parties to read.
Since OpenAI released ChatGPT last year, there have been several instances where flaws in the AI chatbot could have been weaponized or exploited by bad actors to access sensitive or private data. This latest example shows that even after a security fix has been released, problems can persist.
According to a report from BleepingComputer, OpenAI recently deployed a fix for an issue where ChatGPT could leak user data to unauthorized third parties. This data could include user conversations with ChatGPT and corresponding metadata such as a user's ID and session information.
However, according to security researcher Johann Rehberger, who originally discovered the vulnerability and described how it worked, there are still gaping security holes in OpenAI's fix. Essentially, the security flaw still exists.
ChatGPT data leakage
Rehberger was able to take advantage of OpenAI's recently released and much-touted custom GPTs feature to create his own GPT, which exfiltrated ChatGPT data. This was a significant finding, because custom GPTs are being marketed as AI apps, much like the iPhone revolutionized mobile applications with the App Store. If Rehberger could create this custom GPT, it seems bad actors could soon discover the flaw and create custom GPTs of their own to steal data from their targets.
Rehberger says he first contacted OpenAI about the "data exfiltration technique" in April. He contacted OpenAI again in November to report exactly how he was able to create a custom GPT and carry out the process.
On Wednesday, Rehberger published an update on his website: OpenAI had patched the leak vulnerability.
“The fix is not perfect, but a step in the right direction,” said Rehberger.
The reason the fix isn't perfect is that ChatGPT still leaks data through the vulnerability Rehberger discovered. ChatGPT can still be tricked into sending data.
"Some quick tests show that bits of info can steal leak [sic]," said Rehberger, further explaining that "it only leaks small amounts this way, is slow and more noticeable to a user." Regardless of the remaining issues, Rehberger said it was "a step in the right direction for sure."
However, the security flaw remains entirely unpatched in the ChatGPT apps for iOS and Android, which have not yet been updated with a fix.
ChatGPT users should remain vigilant when using custom GPTs and should probably pass on AI apps from unknown third parties.