Ray-Ban Meta glasses already serve as a hands-free camera, but the technology will soon let wearers ask Meta AI about what it sees in real time. This week, Meta, the parent company of Facebook and Instagram, announced the v11 software update for Ray-Ban Meta glasses along with tests of a live AI feature. Meta's announcement comes only days after ChatGPT began a slow rollout of camera-assisted Advanced Voice Mode and Google gave developers access to Gemini 2.0 with Google Lens capabilities.
Early access beta testers of the Ray-Ban Meta glasses will now be able to try the live AI features first teased at Connect 2024. Live AI lets Meta AI see what the wearer sees, so the assistant can answer questions about objects or places in the cameras' field of view. Meta says the feature powers use cases like getting help and inspiration while cooking, gardening, or traveling in new places.
During a live AI session, users will also be able to ask questions without the "Hey Meta" prompt. And while interrupting a human can be rude, Meta says its AI can be interrupted for follow-up questions or even a complete change of subject.
The beta rollout of Meta's live AI arrives as several platforms race to build multimodal AI for real-time smart assistants. A multimodal AI can accept more than one type of input, such as combining voice with a camera feed to ask questions about objects.
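
For developers, this same idea already shows up in public APIs, where a single request can bundle text and an image. As a rough illustration only (the model name, image URL, and question here are placeholder assumptions, not details from Meta's, OpenAI's, or Google's announcements), a multimodal question sent through OpenAI's Python SDK might look like this:

```python
# Illustrative sketch: one multimodal request pairing text with an image.
# The model name ("gpt-4o") and image URL are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What plant is this, and how do I care for it?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/plant.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)  # the model's answer, grounded in the photo
```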

Last week, ChatGPT also began rolling out a feature that lets the app use a camera feed or a view of a smartphone's screen to ask and answer questions. First teased in May, Advanced Voice Mode with video lets subscribers point their camera at an object and ask questions about it. The update also gives the AI a view of the screen, so users can ask about whatever is happening on screen at the time.

Earlier this month, Google launched Gemini 2.0, which can use Google Lens within Gemini Live, and began testing a version of glasses based on Gemini 2.0. Google first demonstrated Project Astra at a live event in May, including several demos that used a smartphone camera to ask the AI questions about objects. In one demo, however, Gemini suggested opening the back of a film camera to fix a stuck rewind lever, advice that would ruin all the images on the film inside. The mistake drew criticism and skepticism about AI's ability to answer technical questions.
Multimodal AI could streamline interactions with smart assistants. Giving an AI access to a camera feed could skip a lot of typed input, or even let you ask questions about something you don't have the words for. But, as Gemini's faulty film camera advice illustrates, early AI can be prone to errors.
While Meta's live AI is in testing, current owners of the smart glasses can download v11 to get hands-free song identification with Shazam, a feature available in the United States and Canada. Meta is also testing live translation, starting with spoken English, in the United States and Canada.
For more information, browse the best camera glasses or read about the best AI generator software.