Bing AI OpenClaw Malware: Search Poisoning Exposed

Have you ever blindly trusted a link because an AI chatbot served it to you on a silver platter? It’s a habit many of us are forming, assuming that these sophisticated algorithms filter out the bad actors. But a recent incident involving Microsoft’s Bing AI and the viral AI agent OpenClaw proves that trust might be premature.

In early February 2026, users searching for "OpenClaw Windows" were met with a top recommendation from Bing’s AI-enhanced search. The link pointed directly to a GitHub repository that looked legitimate. However, according to cybersecurity researchers at Huntress, this was a trap. The repository wasn’t the official home of the popular AI agent; it was a fake distribution point for malware.

This incident highlights a dangerous evolution in cyberattacks, where malicious actors are moving beyond traditional SEO poisoning and finding ways to manipulate Large Language Models (LLMs) into validating their scams.

How did the OpenClaw malware campaign work?

The attack relied heavily on confusion and timing. OpenClaw, a legitimate open-source AI agent created by Peter Steinberger, has gone through a chaotic rebranding: it launched as Clawdbot, became Moltbot, and finally settled on OpenClaw. That identity churn created a perfect storm for scammers.

According to Huntress researchers Jai Minton and Ryan Dowd, malicious actors spun up fake GitHub repositories under the organization name "openclaw-installer" to appear authentic. These repositories were active between February 2 and February 10, 2026. When users downloaded the installer, they weren’t just getting an AI agent; they were deploying "GhostSocks," a proxy tool, along with information stealers delivered via "Stealth Packer."
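Fake repositories like these often give themselves away through metadata: a brand-new creation date, an owner account that doesn't match the project's maintainer, and few stars despite claiming to be a viral tool. As an illustrative sketch (the field names mirror the GitHub REST API's repository object, but the heuristics, thresholds, and the `repo_red_flags` helper are hypothetical, not part of any official tooling):

```python
from datetime import datetime, timezone

def repo_red_flags(meta, expected_owner, max_age_days=30, min_stars=100):
    """Return a list of reasons to distrust a repository's installer.

    `meta` is a dict shaped like the GitHub REST API repository object.
    Thresholds are illustrative defaults, not vetted guidance.
    """
    flags = []
    # Does the repo belong to the maintainer you expected?
    if meta["owner"]["login"].lower() != expected_owner.lower():
        flags.append(f"owner is {meta['owner']['login']!r}, not {expected_owner!r}")
    # Freshly created repos impersonating established projects are a red flag.
    created = datetime.fromisoformat(meta["created_at"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).days
    if age_days <= max_age_days:
        flags.append(f"repository is only {age_days} days old")
    # A "viral" project with almost no stars is suspicious.
    if meta.get("stargazers_count", 0) < min_stars:
        flags.append("very few stars for a supposedly viral project")
    return flags

# Example: metadata resembling the fake "openclaw-installer" org.
# "steipete" is used here as a stand-in for the expected maintainer account.
fake = {
    "owner": {"login": "openclaw-installer"},
    "created_at": "2026-02-01T00:00:00Z",
    "stargazers_count": 12,
}
print(repo_red_flags(fake, expected_owner="steipete"))
```

None of these checks is conclusive on its own, but a repository that trips all three should never be the source of an installer you run with your own credentials.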

The timing was critical. Interest in the tool had spiked in late January, driven by the rebranding and its viral popularity. Attackers capitalized on users hunting for a quick Windows installer, knowing that the project's rapid changes left many unsure where the official download lived. Notably, Peter Steinberger's February 14, 2026 announcement that he was joining OpenAI came only after the malware campaign (February 2-10) had already ended.

Why did Bing AI recommend malicious code?

This is the most unsettling part of the story. You might expect a search engine owned by Microsoft to be skeptical of a random GitHub repository, especially since Microsoft also owns GitHub. However, the Huntress team noted that simply hosting the malware on GitHub appeared to be enough to "poison" the Bing AI search results.

The mechanics here suggest a form of "LLM optimization poisoning." Because the fake repository used relevant keywords and was hosted on a high-trust domain like GitHub, Bing’s AI interpreted it as the most relevant answer to the user’s query. Jai Minton from Huntress explained that their analysis revealed a user had specifically searched for "OpenClaw Windows," prompting the AI to generate a direct link to the newly created malicious repo.

It wasn’t just a search result buried on page two; it was an AI-generated suggestion, which carries an implicit badge of verification for many users. This effectively turned the search engine into an unwitting accomplice, distributing malware to users who thought they were engaging with a cutting-edge tech tool.

What are the risks of using AI agents like OpenClaw?

While the malware campaign was external to the actual OpenClaw project, the incident has sparked a broader conversation about the security risks of autonomous AI agents. Even without the malware, running tools like OpenClaw requires significant caution.

Microsoft’s own Defender team has issued warnings stating that OpenClaw should be treated as "untrusted code execution with persistent credentials." Because these agents are designed to automate local tasks—integrating deeply with files, apps, and active sessions—they effectively act as a "digital soul" for whoever controls them.

If an attacker gains control of such an agent, they don’t just get your passwords; they get contextual access to your entire digital life. The legitimate OpenClaw project is powerful, but that power makes it a high-value target. This creates a dual threat landscape: the risk of downloading fake, malware-laden versions, and the inherent risk of running the real software on a standard workstation without proper sandboxing.
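Proper sandboxing here mostly means denying the agent ambient access to your files, credentials, and network. As one hypothetical configuration sketch (the image name and mount path are illustrative; this is not official OpenClaw or Microsoft guidance), a container can be locked down before the agent is granted anything:

```shell
# Run an AI agent in a deliberately constrained container:
# no network, read-only root filesystem, no Linux capabilities,
# and only one explicitly mounted working directory.
docker run --rm \
  --network none \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  -v "$PWD/agent-workdir:/work" \
  example/openclaw:latest
```

From that baseline, you re-enable capabilities one at a time as the agent actually needs them, rather than starting from full workstation access and trying to subtract.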

What To Watch

This incident exposes a critical vulnerability in the current AI ecosystem: the "trust gap" between search algorithms and content verification. Microsoft is in a unique position as the owner of the search engine (Bing), the repository host (GitHub), and the operating system (Windows), yet the dots failed to connect in real-time to stop this campaign. We should expect to see a rapid shift in how AI search tools index code repositories, likely moving from "relevant by default" to "verified by default" for executable files. Until then, the biggest losers are developers of legitimate open-source tools, who now face an audience rightfully paranoid about whether the "official" download link is actually a trap.
