
The catchy results, viral filters and rapid interactions create an experience that feels light, but one that often comes with hidden privacy risks. (File) | Photo credit: AP
As internet users latch on to the viral trend of transforming personal photos into Studio Ghibli-style art using AI tools, experts warn that the trend hides a darker reality, where casual sharing can lead to unforeseen privacy violations and misuse of data.
Cybersecurity experts caution that while these tools may seem harmless, their terms of service are often vague, raising questions about what happens to users' photos after they are processed.
The trend took off when OpenAI launched an update to its GPT-4o model that lets users recreate personal images in the artistic style of Studio Ghibli, the Japanese animation studio.
Few of the many platforms state that they do not store the photos, or that they delete them after a single use, and most do not clearly explain what "deletion" actually means: whether it is instant, delayed or partial.
Photos contain more than facial data. They often include hidden metadata such as location coordinates, timestamps and device details, all of which can quietly reveal personal information. These AI tools rely on neural style transfer (NST) algorithms, explains Vishal Salvi, CEO of Quick Heal Technologies.
These algorithms separate the content of an uploaded photo from its artistic style, blending the user's image with reference artwork.
Although the process seems harmless, vulnerabilities such as model inversion attacks, in which adversaries reconstruct the original photo from its Ghibli-style output, pose significant risks, he noted.
"Even if companies claim they do not store your photos, fragments of your data could still remain in their systems. Uploaded images can certainly be repurposed for unintended uses, such as training AI models for surveillance or advertising," Salvi warned.
The way these tools are designed makes it easy to overlook what you are actually agreeing to, said Pratim Mukherjee of McAfee.
The catchy results, viral filters and rapid interactions create an experience that feels light but often comes with hidden privacy risks.
"When access to something as personal as a camera roll is granted without a second thought, it is not always accidental. These platforms are often designed to encourage quick engagement while quietly collecting data in the background.
"This is where the concern lies. Creativity becomes the hook, but what gets normalised is a data-sharing model that users do not fully understand. And when that data feeds monetisation, the line between fun and exploitation becomes unclear," said Mukherjee, senior director of engineering, McAfee.
The risk of data breaches looms large, with experts warning that stolen user photos could fuel deepfake creation and identity fraud.
Vladislav Tashkanov, group director of Kaspersky's AI Technology Research Center, says that even if some companies guarantee the security of the data they collect and store, that does not mean the protection is bulletproof.
"Due to technical issues or malware, data can leak, become public or appear for sale on specialised underground websites. In addition, the account used to access the service can be hijacked if the user's credentials or device are compromised," he said.
He said many hacker channels and forums on the dark web offer stolen user account data for sale.
"The difficult part is that you cannot change your face the way you can reset a password. Once a photo is out there, it is out there," Mukherjee warned.
In addition, many platforms bury data-use clauses deep within lengthy terms of service, making it difficult for users to understand how their data is handled. "These policies are all buried in their terms of service.
"If it is not clear how your image will be used, or whether it is deleted at all, it is worth asking if the fun is really worth the risk," said Mukherjee.
Some countries have taken steps to require clearer disclosures, while others are still deliberating. To mitigate these risks, experts recommend that users exercise caution when sharing personal photos with AI applications.
Tashkanov advises users to "combine standard security practices with a bit of common sense", including using strong, unique passwords, enabling two-factor authentication and staying alert to potential phishing sites.
Salvi suggests using dedicated tools to strip hidden metadata from photos before uploading them. On the policy front, he suggests that regulators mandate differential-privacy certification and standardised audits to close compliance gaps.
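The metadata-stripping step Salvi describes can be done with off-the-shelf imaging libraries. Below is a minimal sketch using the Pillow library: it copies only the pixel data into a fresh image so that EXIF tags (location, timestamps, device details) do not travel with the upload. The helper name, file paths and tag values are illustrative, not part of any specific tool.

```python
# Sketch: strip hidden EXIF metadata from a photo before uploading.
# Assumes the Pillow imaging library; names here are illustrative.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy pixel data into a fresh image, dropping EXIF metadata."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

# Demo: build a small "photo" carrying EXIF tags, then strip them.
photo = Image.new("RGB", (64, 64), "white")
exif = Image.Exif()
exif[271] = "ExampleCam"        # tag 271: camera make
exif[306] = "2025:04:07 11:10"  # tag 306: capture timestamp
photo.save("original.jpg", exif=exif)

strip_metadata("original.jpg", "clean.jpg")
with Image.open("clean.jpg") as img:
    assert len(img.getexif()) == 0  # no metadata survives the copy
```

Re-saving through a clean image object is a blunt but effective approach: because only raw pixels are copied, anything stored in EXIF or other metadata segments is left behind.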
Mukherjee calls on governments to mandate simplified, upfront disclosures about how data is used.
Published – April 07, 2025 11:10 AM IST