If you’ve ever used ChatGPT to look something up online, you probably assumed your conversation was private — even when the chatbot searched the web on your behalf. But a recent glitch shows that assumption doesn’t always hold.
Earlier this month, developers began noticing something strange in their Google Search Console dashboards. Instead of short keyword-based queries, they were seeing full, human-like sentences — the kind you’d expect to type into ChatGPT, not Google. These oddly specific entries raised an eyebrow, then alarm.
What followed was a deeper investigation by analytics researcher Jason Packer and consultant Slobodan Manić. They traced the activity back to ChatGPT’s web browsing mode and discovered that a subset of conversations had accidentally leaked into public search infrastructure, ending up visible to unrelated website owners.
What caused the leak
First reported by Ars Technica, the issue was tied to a hidden “hints=search” tag used in some ChatGPT sessions. That tag told the chatbot to perform a real-time web search — but in doing so, parts of the user’s prompt were sometimes included in the resulting URL.
Since Google’s systems automatically scan and index anything that looks like a search term, it picked up those URLs (and the private text inside them) and logged them in site owners’ analytics dashboards.
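To see why a prompt embedded in a URL is risky, here’s a minimal Python sketch of the mechanism. The exact URL format involved hasn’t been published, so the domain and the `q` parameter below are hypothetical (only the “hints=search” tag comes from the report); the point is simply that anything placed in a query string travels with the URL and can be read back by any server, analytics tool, or indexer that sees it.

```python
from urllib.parse import urlencode, parse_qs, urlparse

def build_search_url(prompt: str) -> str:
    """Illustrative only: embed a user prompt in a search URL's query string.
    The domain and parameter names are hypothetical stand-ins."""
    params = {"q": prompt, "hints": "search"}
    return "https://example.com/search?" + urlencode(params)

def extract_query(url: str) -> str:
    """Anyone who receives or indexes the URL can trivially recover the prompt."""
    return parse_qs(urlparse(url).query)["q"][0]

url = build_search_url("should I tell my boss about my diagnosis")
assert "hints=search" in url
assert extract_query(url) == "should I tell my boss about my diagnosis"
```

Once a URL like this is crawled, the full sentence inside it shows up as a “search query” in any site owner’s dashboard that the URL touches — which matches what developers were seeing in Search Console.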
According to the Ars Technica report, OpenAI acknowledged the glitch, saying it affected only “a small set of searches” and has since been fixed. However, the company didn’t say how long the issue persisted or how many users were impacted.
Why it matters
Although the leak didn’t expose passwords or personally identifiable information, it raises serious questions about how generative AI systems interact with public infrastructure like Google Search. ChatGPT’s browsing mode is designed to operate in real time, and in this case, it inadvertently used live search pathways that made private input visible.
This isn’t the first time ChatGPT data has ended up in places it wasn’t meant to. Earlier this year, users discovered that shared chat links were being indexed by Google. At the time, that was chalked up to users enabling a share setting. But this new glitch is more troubling since it didn’t involve any user action at all.
While OpenAI has fixed the routing issue, here are a few ways to protect your privacy when using any AI tool with web access:
- Avoid including sensitive data in prompts. Don’t share private info like addresses, names, or credentials — especially when browsing mode is enabled.
- Use private/incognito mode in your browser when testing third-party plugins or assistants that trigger external searches.
- Disable browsing features if you don’t need real-time search. In ChatGPT, you can select a non-browsing model or turn off web search in your settings.
- Don’t assume anonymity. AI prompts may pass through external systems and small bugs can lead to unexpected exposure.
- Clear history regularly. Deleting past prompts from your chat history may not undo past leaks, but it can limit future risk.
As AI tools continue to blend with web infrastructure, even small routing bugs can have unexpected consequences. Being mindful of what you type — and how the tool is configured — is your best first line of defense.
The takeaway
This glitch goes beyond raising privacy concerns; it may have also skewed how websites track their traffic. While we still don’t know for sure how AI browsers are shaping website traffic, there is evidence that AI is behind a weird pattern showing up in analytics, called the “crocodile mouth.”
This phenomenon is when a site sees a big spike in how often it appears in search results (impressions), yet almost no one actually clicks on it. These spikes might have been caused by AI tools like ChatGPT sending out long, bot-like searches that aren’t coming from real people.
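As a rough illustration, here’s how a site owner might flag that pattern in exported analytics data. The daily numbers and thresholds below are invented for the example; a real analysis would use actual impression and click counts from a tool like Search Console.

```python
# Hypothetical daily analytics rows: impressions vs. clicks.
daily = [
    {"day": "Mon", "impressions": 1200, "clicks": 60},
    {"day": "Tue", "impressions": 9800, "clicks": 55},  # spike, almost no clicks
    {"day": "Wed", "impressions": 1100, "clicks": 58},
]

def crocodile_mouth_days(rows, impression_spike=5000, max_ctr=0.01):
    """Flag days where impressions jump but click-through rate collapses --
    the gap between the two lines on a chart forms the 'crocodile mouth'."""
    flagged = []
    for r in rows:
        ctr = r["clicks"] / r["impressions"]
        if r["impressions"] > impression_spike and ctr < max_ctr:
            flagged.append(r["day"])
    return flagged

print(crocodile_mouth_days(daily))  # ['Tue']
```

Here Tuesday’s impressions are several times normal while its click-through rate is under 1%, which is exactly the shape bot-like, non-human queries would leave behind.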
Even though OpenAI says the problem is fixed, the incident shows just how easy it is for a small bug in an AI system to ripple out and cause confusion on a much larger scale. As AI becomes more connected to the internet through AI browsers, it’s important for users to be diligent about staying safe and be aware of possible risks.