A new trend is gaining momentum online: users are testing ChatGPT's new features by asking it to work out locations from the photos they upload, TechCrunch reports.
This week, OpenAI introduced the o3 and o4-mini models, which can crop, rotate, and zoom in on images, even blurry or distorted ones, to analyze visual cues in detail. Combined with web search, these capabilities make OpenAI's new models a powerful tool for identifying places.
Users on the X platform have noticed that o3 is particularly good at guessing cities, landmarks, and even individual restaurants or bars. They upload photos of restaurant menus, street scenes, and building facades to ChatGPT and then ask the model to play GeoGuessr, a game in which players guess a location from Google Street View imagery.
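To make the workflow concrete, here is a minimal sketch of what such a request might look like through OpenAI's Python SDK. The model identifier, prompt wording, and image URL are assumptions for illustration only; they do not come from the report, and users in the trend are simply pasting photos into the ChatGPT interface rather than calling the API.

```python
# Sketch of the "play GeoGuessr with a photo" workflow described above.
# Assumptions: the openai Python SDK (v1+) is installed, OPENAI_API_KEY is set,
# and the "o3" model accepts image input via the Chat Completions image_url
# format. The model name, prompt, and URL are illustrative, not confirmed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3",  # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": (
                        "Let's play GeoGuessr: based on this photo, guess the "
                        "city and, if possible, the exact place it was taken."
                    ),
                },
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/street-photo.jpg"},
                },
            ],
        }
    ],
)

# The model's best guess at the location, as free-form text.
print(response.choices[0].message.content)
```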
However, privacy experts warn of potential risks: an attacker could take a screenshot of a person's Instagram story and use ChatGPT to try to dox them. Notably, when TechCrunch compared o3 with GPT‑4o, the previous model without these visual reasoning tools, GPT‑4o surprisingly guessed the correct location more often than not, although o3 did sometimes succeed where its predecessor failed.
In response to a TechCrunch inquiry, OpenAI stated:
"OpenAI o3 and o4-mini bring visual reasoning to ChatGPT, making it more helpful in areas like accessibility, research, or identifying locations in emergency response. We’ve worked to train our models to refuse requests for private or sensitive information, added safeguards intended to prohibit the model from identifying private individuals in images, and actively monitor for and take action against abuse of our usage policies on privacy."
Still, experts believe that increasingly "smart" AI models call for technical restrictions and clear rules of use to prevent abuse and protect people's privacy.