ChatGPT users have discovered that the popular AI chatbot can double as a reverse location-search tool. In other words, you can show ChatGPT an image, and it can tell you, fairly reliably, where it was taken. The trend is inspired by the online game GeoGuessr, in which players try to figure out a location from a simple web image.
We decided to put this new ChatGPT trend to the test, and the results were downright frightening. Mashable's tech journalists prompted ChatGPT to play a geo-guessing game and uploaded a series of photos. Even when ChatGPT identified the wrong location, it still got remarkably close (identifying a rooftop hotel in Buffalo instead of Rochester, for example). In other cases, it suggested specific addresses.
ChatGPT's new reasoning models are getting smarter
This week, OpenAI introduced its new ChatGPT reasoning models, o3 and o4-mini, with improved visual reasoning. OpenAI also recently made its image generator available to free users. That has fueled a number of viral ChatGPT trends: people have used it to transform their pets into humans or turn themselves into action figures, for example. The reverse-location trend, however, is a bit more complicated, and more worrying from a privacy standpoint.
The trend started when people online realized ChatGPT had become adept at guessing a location simply by analyzing a photo. Ethan Mollick, a professor who researches AI, posted an example on X in which ChatGPT correctly guessed where he was driving even though he had stripped the image of its location information. (Image files often contain metadata, including precise location data.)
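If you want to see what a photo might give away before posting it, one option is to inspect and strip that metadata yourself. Below is a minimal sketch using the Pillow library; the file names are placeholders, and this is an illustration rather than anything Mollick or OpenAI published.

```python
# A minimal sketch (not from the article) of checking and removing photo
# metadata with the Pillow library before posting an image online.
# The file names "photo.jpg" and "photo_clean.jpg" are placeholders.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("photo.jpg")

# Print whatever EXIF tags the file carries; a "GPSInfo" entry means the
# photo embeds the coordinates of where it was taken.
for tag_id, value in img.getexif().items():
    print(TAGS.get(tag_id, tag_id), value)

# Rebuild the image from its pixels only, which drops the EXIF block
# (including GPS data) from the saved copy.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("photo_clean.jpg")
```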
Mollick noted that this capability also shows off agentic AI, which lets AI models reason through answers in multiple steps and perform more complicated tasks such as web searches.
Putting ChatGPT's visual reasoning to the test
We tested ChatGPT on these new capabilities, and it did a decent, if imperfect, job. First, we uploaded a recent photo of a flower shop taken in Greenpoint, Brooklyn. ChatGPT correctly deduced that the photo was taken in Brooklyn, but it mistakenly identified the image as a specific flower shop about seven miles from the actual location.
We then uploaded a photo taken from a car during a recent trip to Japan, and ChatGPT's new o3 model was able to identify the exact location. "Final answer: 📍 Arashiyama, Kyoto, Japan, near the Togetsukyo Bridge, looking across the Katsura River."

The prompt …
Credit: Screenshot courtesy of OpenAI

… and the correct answer.
Credit: Screenshot courtesy of ChatGPT
When we ran the same prompt with an older reasoning model, the results were much more general: "Given the combination of mountainous terrain, the style of the guardrails, the road, and the overall setting, it looks a lot like Japan … The landscape is reminiscent of the areas around Kyoto or Nara, where the countryside meets historic and cultural sites."
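For readers who want to run a similar experiment programmatically rather than in the ChatGPT app, here is a rough sketch using the OpenAI Python SDK's chat completions endpoint with an image attached. The model name "o3", the file name "photo.jpg", and the prompt wording are our own assumptions, not the exact setup Mashable used.

```python
# A rough sketch of asking an OpenAI reasoning model to guess a photo's location.
# Assumes the OPENAI_API_KEY environment variable is set and that the account
# has access to the "o3" model; "photo.jpg" is a placeholder file name.
import base64
from openai import OpenAI

client = OpenAI()

# Encode the photo as a base64 data URL so it can be sent inline.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="o3",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Let's play GeoGuessr. Where was this photo taken? Be as specific as you can."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```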
We then took things a step further. We uploaded screenshots from the profile of a popular Instagram model, the type of person who would have real concerns about privacy and stalkers. With the latest reasoning models, ChatGPT correctly identified the general location, even suggesting specific high-rise apartment buildings and, in one case, a specific home address.
Now, to be fair, the address in question is a house popular with influencers and television productions, but the specificity was impressive. And a little frightening. It's yet another reason to be careful about what you post online: AI can now help people figure out where you're located.
OpenAI said ChatGPT's reverse-location capabilities could prove useful, while acknowledging the privacy concerns.
"OpenAI o3 and o4-mini bring visual reasoning to ChatGPT, making it more helpful in areas like accessibility, research, or identifying locations in emergency response," an OpenAI spokesperson wrote in an email to Mashable. "We've worked to train our models to refuse requests for private or sensitive information, and we actively monitor for and take action against abuse of our usage policies on privacy."