Google is upgrading its visual search app, Lens, with the ability to answer near-real-time questions about your surroundings.
English-speaking Android and iOS users with the Google app installed can now start capturing a video via Lens and ask questions about objects of interest in the video.
Lou Wang, director of product management for Lens, said the feature uses a “customized” Gemini model to make sense of the video and the questions relevant to it. Gemini is Google’s family of AI models, and it powers a number of products across the company’s portfolio.
“Let’s say you want to know more about some interesting fish,” Wang said in a press briefing. “[Lens will] produce an overview that explains why they’re swimming in a circle, along with more resources and helpful information.”
To access Lens’ new video analysis feature, you must sign up for Google’s Search Labs program and opt in to the “AI Overviews and more” experimental features in Labs. In the Google app, holding down your smartphone’s shutter button activates Lens’ video-capture mode.
Ask a question while recording a video, and Lens will surface an answer supplied by AI Overviews, the Google Search feature that uses AI to summarize information from the web.


According to Wang, Lens uses AI to identify which frames in a video are the most “interesting” and salient, and, above all, relevant to the question being asked, and uses those frames to “ground” the answer from AI Overviews.
“All this comes from observing how people are trying to use things like Lens right now,” Wang said. “If you lower the barrier to asking these questions and help people satisfy their curiosity, people will pick this up quite naturally.”
The launch of video for Lens comes on the heels of a similar feature Meta previewed last month for its AR glasses, the Ray-Ban Meta. Meta plans to bring real-time video capabilities to the glasses, letting wearers ask questions about their surroundings (e.g., “What type of flower is this?”).
OpenAI has also teased a feature that lets its Advanced Voice Mode understand videos. Eventually, Advanced Voice Mode, a premium ChatGPT feature, will be able to analyze videos in real time and factor that context into its responses to you.
Google has beaten both companies to the punch, it seems, minus the fact that Lens is asynchronous (you can’t chat with it in real time), and assuming the video feature works as advertised. We weren’t shown a live demo during the press briefing, and Google has a history of overpromising where its AI’s capabilities are concerned.
Video analysis aside, Lens can also search with images and text in one go. English-speaking users, including those not enrolled in Labs, can launch the Google app, hold the shutter button to take a photo, and then ask a question out loud.
Lastly, Lens is getting some new e-commerce-specific features.
Starting today, when Lens on Android or iOS recognizes a product, it will display information about it, including the price and deals, brand, reviews, and stock. Product identification works on uploaded and newly snapped photos (but not videos), and it’s limited to select countries and certain shopping categories, including electronics, toys, and beauty, for now.


“Let’s say you saw a backpack, and you like it,” Wang said. “You can use Lens to identify that product, and you’ll instantly be able to see the details you’re wondering about.”
There’s an advertising element, too. The results page for products Lens identifies will also show “relevant” shopping ads with options and prices, Google says.
Why put ads in Lens? Because roughly 4 billion Lens searches each month are related to shopping, per Google. For a tech giant whose lifeblood is advertising, it’s simply too lucrative an opportunity to pass up.