In March, when the Ghibli filter trend was all the rage online, Suchita’s inbox filled up with cute animated versions of her group photos with friends.
Suchita, 16, enjoyed these playful photos until her ex sent one that did not feel right.
“It was a photo of us in our school uniforms,” said Suchita, who is identified here by a pseudonym for privacy reasons. “But the AI version was manipulated to show me in a sexualized and inappropriate way.”
When Suchita confronted her ex, he shrugged it off. “It’s just a cartoon,” he said, adding that she couldn’t even prove it was her. Alarmed, she blocked him. Weeks later, Suchita remains haunted by the incident.
In recent months, AI-fueled image trends have dominated social media like never before. In March, social media feeds were swept up in a storm of dreamy AI-generated images styled after the films of Studio Ghibli, the Japanese animation studio.
In April, the trend shifted to users turning themselves into plastic-perfect action figures, complete with personalized outfits and packaging inspired by brands like Barbie.
Then came pets turned into people: upload a photo of your dog, and AI returns a surprisingly realistic human version of the canine.
Beyond these playful features, new generative AI models like GPT-4o are gaining ground, especially among teenagers and digital-native children, by fixing earlier flaws such as distorted hands and garbled text.
As the use of AI grows, so do the forms of abuse this brave new technology has unleashed.
As of the first week of April, Nepal Police had recorded 525 cases of cybercrime involving children this fiscal year, including online sexual offenses such as receiving inappropriate sexual calls, messages, photos and links, as well as photo mutilation. AI-driven photo mutilation is the most common problem across all age groups, especially among girls and women.
Experts warn that the features driving generative AI’s popularity, such as instant transformations and viral sharing, also make it highly exploitable, as Suchita’s case shows.
With children and adolescents at the forefront of these trends, cases of photo mutilation and child sexual abuse material (CSAM) could rise. The growing trend of transforming real images into cartoon or anime styles could also make CSAM harder to detect and prevent.
The greatest concern with AI trends like “Ghiblify” is privacy, explains Anil Raghuvanshi, founder and president of ChildSafeNet, an NGO advocating for a safer internet for children.
“When we upload images to ChatGPT or other AI tools, the data is stored and often used as training material,” he said. “This means our photos can remain in AI systems indefinitely and be reused to generate future images, videos or animations.”
The larger the dataset, the more realistic the outputs. And the data collected can be misused by bad actors, says Raghuvanshi.
For example, a July 2024 report by the Internet Watch Foundation (IWF) shows a rise in AI-generated CSAM. More than 3,500 new AI-made abuse images appeared on dark web forums, many depicting severe abuse.
According to the report, perpetrators are even using fine-tuned AI models to recreate images of known victims and famous children.
While most CSAM is produced and stored abroad, its digital nature knows no borders.
A 2024 study by ChildSafeNet and UNICEF found that 68 percent of Nepali respondents across age groups were aware of generative AI, with 46 percent actively using it. Of those, more than half (52 percent) preferred ChatGPT.
The report also raised serious concerns about the risks generative AI poses to children, such as exposure to harmful content like CSAM, privacy violations and cyberbullying.
According to Raghuvanshi, while platforms and social media apps can detect nudity and child sexual abuse in real images, they struggle with AI-generated content like Ghibli-style or cartoon images. Such images can be used to bypass CSAM detection tools and distribute abusive material.
Police will now face challenges such as determining whether CSAM depicts a real child, whether AI was used to conceal a child’s identity, or whether the depiction is of a real act, all of which complicate detection.
“Even when photos are maliciously altered, they can be dismissed as mere cartoons, making it harder to prove that the image shows a real victim,” explains Raghuvanshi. “This dangerous normalization of synthetic sexual content could fuel a rise in sextortion and photo mutilation cases.”
Another risk is the potential for peer bullying through fake images, says researcher and technologist Shreyasha Paudel.
Paudel says that while creating fake images is nothing new, the difference with generative AI is that it can be done in one click, with no specialized skills. “The current environment of social media and AI companies encourages the spread of these images,” she adds.
Paudel’s concern is not unfounded.
In the fiscal year 2023-24, the Cyber Bureau recorded 635 cases of cyber violence involving children, a 260.8 percent increase over the 176 cases in 2022-23. This fiscal year, the bureau had recorded 525 cases as of early April.
Indeed, the bureau has repeatedly issued statements about the rise in character assassination and the creation and spread of obscene content, all linked to social media and AI use.
Experts say that while governments and NGOs often focus on detecting and preventing CSAM through monitoring, AI’s new capabilities show this approach may not be enough.
“Detecting and preventing CSAM in animated or cartoon form will be difficult because we do not fully understand how generative AI creates these images,” says Paudel. “So we cannot yet build a foolproof detector for AI-generated CSAM.”
She warns that in many countries, the threat of CSAM has been misused to justify increased surveillance and censorship, leading to arrests of activists, journalists and minorities, while CSAM continues to circulate illegally.
“Given the state of the technology, the focus should shift from censorship to accountability, empowerment and awareness of the benefits, drawbacks and ethics of AI use,” says Paudel.
Raghuvanshi seconds Paudel. “Nepal needs updated cybercrime laws to address AI-generated CSAM, including mechanisms to handle cases involving manipulated or cartoon-style images,” he said.
Meanwhile, neighboring China is taking proactive measures. Beijing has introduced AI education, including AI ethics, into primary school curricula, and from September, China plans to integrate AI applications across all classrooms. Other countries like Estonia, Canada and South Korea are also integrating AI education into their school systems.
Nepal, by contrast, lags behind, with only a draft AI policy in place and a 17-year-old Electronic Transactions Act that no longer addresses the complexities of today’s digital environment. School curricula lack adequate material on digital safety and internet use, let alone on AI, which is widely used among children and adolescents.
Paudel suggests that instead of treating the internet and AI as threats, parents, teachers and children should explore them together as learning tools. “Guiding children to seek adult support and creating a safe, trusting environment where they can communicate should be the priority of the government, civil society, business and education sectors,” says Paudel.