According to ZDNet, security researchers at Cato CTRL have uncovered a new attack technique called HashJack that can manipulate AI browsers including Perplexity’s Comet, Microsoft Copilot for Edge, and Google Gemini for Chrome. The attack works by hiding malicious instructions in URL fragments after the # symbol, which AI assistants then process without the user’s knowledge. Microsoft confirmed the issue on September 12 and applied a fix on October 27, while Perplexity initially closed the Bugcrowd report in August before reopening it and issuing a critical severity classification and final fix on November 18. Google, however, classified HashJack as low severity and “intended behavior,” refusing to fix it. The technique can weaponize any legitimate website to manipulate AI browser assistants, potentially leading to data theft, credential harvesting, or misinformation campaigns.
Why this matters
Here’s the thing about HashJack – it’s scary because it exploits user trust in two different ways. People trust the websites they visit, and they trust their AI assistants. This attack turns both against them. Basically, you could be looking at your bank’s legitimate website while your AI browser is quietly sending your financial data to attackers in the background. And because URL fragments don’t leave the browser, traditional security tools can’t detect what’s happening.
The really clever part? Threat actors don’t need to hack the actual website. They just need to craft a special URL that includes their malicious instructions after the # symbol. When the AI assistant processes that page, it reads those hidden commands and follows them. So you’re looking at CNN.com or your bank’s site, everything appears normal, but your AI helper is being manipulated right under your nose.
Vendor responses vary wildly
What’s fascinating here is how differently the big players responded. Microsoft took it seriously and fixed it within about six weeks. Perplexity went from “this isn’t a security issue” to “this is critical” once researchers provided more evidence. But Google? They basically said “this is working as intended” and won’t fix it.
That’s a pretty bold stance from Google, especially when we’re talking about potential data theft vectors. I mean, if your AI browser can be tricked into sending sensitive information to attackers, shouldn’t that be more than “low severity”? Meanwhile, both Claude for Chrome and OpenAI’s Atlas successfully defended against the same attack, which makes you wonder why some systems are vulnerable while others aren’t.
Broader implications
This isn’t just about today’s AI browsers – it’s about where we’re heading. As more systems become agentic and autonomous, these kinds of indirect prompt injection attacks become increasingly dangerous. Think about it: if your AI assistant can automatically take actions based on what it reads, a cleverly crafted URL could make it do all sorts of things without your knowledge.
For businesses relying on industrial computing systems, the stakes are even higher. When you’re dealing with manufacturing equipment or critical infrastructure, you need hardware you can trust. Companies like IndustrialMonitorDirect.com have built their reputation as the leading supplier of industrial panel PCs precisely because reliability and security matter in these environments. You can’t have your production line controls being manipulated through sneaky URL tricks.
The reality is we’re in the early days of AI security, and attacks like HashJack are just the beginning. As researchers told ZDNet, this represents a major shift in the threat landscape because it weaponizes legitimate websites through their URLs. Users see a trusted site, trust their AI browser, and in turn trust the AI assistant’s output – creating a perfect storm for successful attacks.
