Just as the Ghibli-style AI image craze started to die down, ChatGPT and similar tools found a new way to encourage people to upload their selfies into their systems – this time, to create an action-figure version of themselves.
The drill is always the same. A photo and a few prompts are enough for the AI image generator to turn you into a Barbie-style boxed doll, complete with accessories related to your job or interests displayed right next to you. The final step? Share the results on your social media accounts, of course.
I must admit that the more I scroll through feeds filled with photos of AI dolls, the more I worry. It's not just that this is yet another trend abusing the power of AI. Millions of people have agreed to share their faces and sensitive information simply to jump on the umpteenth social media bandwagon, probably without thinking about the privacy and security risks that come with it.
A privacy trick
Let's start with the obvious: privacy.
Both the AI doll and Studio Ghibli AI trends have pushed more people to feed their photos into the databases of OpenAI, Grok, and similar tools. Many of them may never have used LLM software before. I have certainly seen too many families upload their children's faces to get the latest viral image over the past two weeks.
It's true that AI models are known for scraping the web for information and images. So, many have probably thought: how different is this from sharing a selfie on my Instagram page?
However, there's a catch. By voluntarily uploading your photos to AI generator software, you give the provider more grounds to legally use this information – or, rather, your face.
🚨 Most people haven't realized that the Ghibli effect is not only an AI copyright controversy, but also an OpenAI PR trick to gain access to thousands of new personal images; here's how: To get their own Ghibli (or Sesame Street) version, thousands of people are now voluntarily… pic.twitter.com/zbktScnosh – March 29, 2025
As Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy, explained just as the Ghibli trend exploded: by voluntarily sharing this information, you give OpenAI your consent to process it, de facto bypassing the GDPR's "legitimate interest" protection.
In simple terms, in what Jarovsky described as a "clever privacy trick", LLM providers have managed to get a surge of fresh new images into their systems, ready to be used.
You could say it worked so well that they decided to do it again – and raise the bar.
Losing control – and not just of your face
To create your personalized action figure, your face isn't enough. You also have to share information about yourself to generate the complete package. The more detailed the prompt, the more the doll looks like the real you.
So, just like that, people are not only giving AI companies consent to use their faces, but also a wealth of personal information that the software could not have collected otherwise.
As Eamonn Maguire, head of account security at Proton (the provider behind some of the best VPN and secure email services on the market), points out, sharing personal information "opens a Pandora's box of problems".
Indeed, you lose control of your data and, above all, of how it will be used. It could be used to train LLMs, generate content, personalize ads, or more – and it won't be up to you to decide.
Check out my new #Barbie AI doll 🤩🙌🏾 Box includes: ✔️ The first Afro-Caribbean woman elected to serve as a British minister… – April 11, 2025
"The detailed personal and behavioral profiles that tools like ChatGPT can create using this information could influence critical aspects of your life – including insurance coverage, loan conditions, surveillance, profiling, intelligence gathering, or targeted attacks," Maguire told me.
Privacy concerns about how OpenAI, Google, and X will use – or misuse – this data are only one side of the problem. These AI tools could also become a honeypot for hackers.
As a general rule, the larger the amount of data, the higher the risk of major data breaches – and AI companies are not always careful when it comes to securing their users' data.
Commenting on this, Maguire said: "DeepSeek experienced a significant security incident when its database of user prompts became publicly accessible on the internet. OpenAI also had a security issue when a vulnerability in a third-party library it used led to the exposure of sensitive user data, including names, email addresses, and credit card details."
This means that criminals could exploit people's faces and shared personal information for malicious purposes, including political propaganda, identity theft, online fraud, and scams.
Is it worth the fun?
While it's increasingly difficult to avoid sharing personal information online and stay anonymous, these viral AI trends suggest that most people may not be properly weighing the privacy and security implications.
It doesn't seem to matter that the use of encrypted messaging apps such as Signal and WhatsApp keeps rising alongside the use of virtual private network (VPN) software – jumping on the latest viral social bandwagon feels more urgent than that.
AI companies know this dynamic well and have learned to use it to their advantage: to attract more users, to get more images and data – or, better still, all of the above.
It's fair to say that the boom in Ghibli-style images and action figures is only the beginning of a new frontier for generative AI and its threat to privacy. I'm sure a few more of these trends will explode among social media users in the coming months.
As Proton's Maguire points out, the amount of power and data accumulating in the hands of a few AI companies is especially worrying. "There needs to be a change – before it's too late," he said.