We recently asked Meta whether it trains its AI on the photos and videos that users take with Ray-Ban Meta smart glasses. The company didn't have much to say.
Since then, Meta has offered TechCrunch a bit more color.
In short, any image you share with Meta AI can be used to train its AI.
“[I]n locations where multimodal AI is available (currently the U.S. and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy,” said Emil Vazquez, policy communications director at Meta, in an email to TechCrunch.
In an earlier emailed statement, a spokesperson said that photos and videos captured on Ray-Ban Meta are not used by Meta for training as long as the user doesn't submit them to AI. However, once you ask Meta AI to analyze them, those photos fall under a completely different set of policies.
In other words, the company is using its first consumer device to build a massive stockpile of data that could be used to create ever more powerful generations of AI models. The only way to “opt out” is simply not to use Meta's multimodal AI features in the first place.
The implications are concerning because Ray-Ban Meta users may not understand that they're giving Meta tons of images, perhaps showing the inside of their homes, their loved ones, or their personal files, to train its new AI models. Meta's spokespeople tell us this is made clear in the Ray-Ban Meta user interface, but the company's executives either didn't initially know or didn't want to share those details with TechCrunch. We already knew Meta trains its Llama AI models on everything Americans post publicly on Instagram and Facebook. But now Meta has expanded this definition of “publicly available data” to include anything people look at through its smart glasses and ask its AI chatbot to analyze.
This is especially relevant right now. On Wednesday, Meta started rolling out new AI features that make it easier for Ray-Ban Meta users to invoke Meta AI in a more natural way, meaning users will be more likely to send it new data that can also be used for training. On top of that, the company announced a new live video analysis feature for Ray-Ban Meta during its Connect 2024 conference last week, which essentially sends a continuous stream of images into Meta's multimodal AI models. In a promotional video, Meta said you could use the feature to look around your closet, analyze the whole thing with AI, and pick out an outfit.
What the company isn't promoting is that you're also sending those images to Meta for model training.
A Meta spokesperson pointed TechCrunch to its privacy policy, which clearly states: “your interactions with AI features can be used to train AI models.” That would seem to include images shared with Meta AI through the Ray-Ban smart glasses, but Meta still wouldn't clarify.
The spokesperson also pointed TechCrunch to Meta AI's terms of service, which state that by sharing images with Meta AI, “you agree that Meta will analyze those images, including facial features, using AI.”
Meta just paid $1.4 billion to the state of Texas to settle a case over the company's use of facial recognition software. The case centered on a Facebook feature rolled out in 2011 called “Tag Suggestions.” By 2021, Facebook had made the feature explicitly opt-in and deleted the biometric information it had collected from billions of people. Notably, several of Meta AI's image features are not being released in Texas.
Elsewhere in Meta's privacy policies, the company states that it also stores all transcriptions of your voice conversations with Ray-Ban Meta, by default, to train future AI models. As for the actual voice recordings, there is a way to opt out: when you first log in to the Ray-Ban Meta app, you can choose whether your voice recordings can be used to train Meta's AI models.
It's clear that Meta, Snap, and other tech companies are pushing smart glasses as a new computing form factor. All of these devices have cameras that people wear on their faces, and they're largely powered by AI. That rehashes a ton of the privacy concerns we first heard about in the Google Glass era. 404 Media reported that some students have already hacked Ray-Ban Meta glasses to reveal the name, address, and phone number of anyone they look at.