You may have seen the viral doll trend, where you can create a toy version of yourself using AI, complete with accessories that represent your interests, then share it on social media.
There was also the Studio Ghibli trend, where you could create personalised images allegedly in the style of the animation house, and before that, the AI yearbook trend. You can even share photos of children to see what they might look like when they're older.
Reports suggest that ChatGPT, owned by OpenAI, saw a record number of users this year thanks to the rollout of its image generator, which prompted the tech company's boss, Sam Altman, to ask people to "please chill" because its graphics processing units were struggling to keep up with demand.
With such tools gaining huge popularity, experts are warning against uploading photos of little ones.
In a new Reel, Dr Madhumitha Ezhil – who runs the Instagram account Parent Screenfree – opened up about how uploading children's photos to AI tools "feels harmless".
But she added that when we do this, we give an AI company "our child's face – to store, study and learn from".
"And now, they may be able to accurately predict what a child could look like in the future," she added, "and that's not just impressive, it's also very dangerous. Their faces can be used to train facial recognition systems, build eerily realistic deepfakes and even be sold to unknown third parties."
Dr Ezhil said in her video: "We are the first generation raising children in the AI era – and it's my personal opinion that it's better to err on the side of caution, because once we've uploaded their data, we may never get it back."
HuffPost UK contacted OpenAI about Dr Ezhil's concerns and it declined to comment.
Is this really what’s going on?
ChatGPT users have control over how their data is used, as Dr Ezhil mentioned in the caption of her video. There are self-service tools so that people can access, export or delete personal information, and you can opt out of having your content used to improve and train AI models.
HuffPost UK also understands that the AI platform does not actively seek out personal information to train its models, and that publicly available information on the internet is not used to build profiles of people or to sell their data. The same cannot necessarily be said of other AI models, however.
ChatGPT does not allow photorealistic edits to images of children, but people can upload children's images to the tool (I was able to upload a baby photo and ask the tool to generate an image of what they might look like at 25, which it did within minutes).
Dr Francis Rees, a lecturer in law who leads the Child Influencer Project, told HuffPost UK: "If you think about putting something on ChatGPT, you're giving it a lot of information – you're giving it facial recognition capability and the identity of the child, but also everything in the background such as school uniform crests, pets, their bedroom, house number, that kind of information, as well as any metadata from the GPS or the phone itself.
"Parents may not understand that this is what happens when they feed it into the machine, effectively."
She added that "even where children are old enough to consent, it doesn't really matter, because the child wouldn't be aware of this range of risks for themselves now and in the future".
"The AI is going to store this, train the machine and train further iterations to improve based on this data," said Dr Rees.
The case against "sharenting"
Sharenting is where parents share photos, videos and details of their children's lives online – often on social media, where they may have any number of followers they don't even know.
Images of children shared online can be used to create sexually explicit deepfakes – fake audio, images and videos that have been generated or manipulated using AI but look like authentic content.
According to Internet Matters, 13% of teenagers have had an experience with a nude deepfake – and those are just the ones we know about. Many parents simply won't know if their child's content has been taken and used in a harmful way.
Images – whether real or fake – can then be used to bully or blackmail victims. There is also the risk of identity fraud further down the line.
A year ago, Deutsche Telekom shared a powerful campaign featuring an AI deepfake of a girl called Ella, which showed the consequences of sharing photos of children on the internet.
In the clip, Ella (an AI simulation of a young girl) sits down and explains to her parents how her identity could one day be stolen or used to commit crimes for which she could go to prison. We also see how innocent photos of her on the beach as a child are shared in sinister ways on the dark web.
At the time, the campaign was welcomed by viewers, who called it a "wake-up call" and said "everyone needs to see this".
With the average five-year-old having 1,500 photos of themselves online, the company behind the ad said it wanted to "help parents protect their children's privacy and minimise digital risks".
So should we stop putting photos of children through AI and online altogether?
Ultimately, parents are the guardians of their children's privacy – so it's your choice. But it's wise to be aware of the risks.
"I think there is simply no informed consent there," said Dr Rees of sharing children's images with AI models. "Because children, even if they said they were okay with it, wouldn't have the capacity to understand the ramifications.
"So parents, as guardians of their privacy, need to be aware of that. Think: why are you posting? What are you doing? And what harm could it do?
"Who needs a photo of my child? Who needs to know my child's bedroom? Who needs to know my child's pet? Why would I share this on an open public platform or feed it into a machine where it could be shared … I think it's an important consideration for parents to take a beat and really ask themselves why they're doing this."
For my part, I'm taking that message on board.