Hackers Are Now Building Invisible Websites That Only Target AI Agents

Hackers are building invisible websites to target AI agents

A new cyberattack technique has surfaced that manipulates AI agents with poisoned websites hidden from human view. Security researcher Shaked Zychlinski has revealed that these stealthy traps can silently hijack AI assistants and turn them into tools for attackers.

How Hackers Are Fooling AI Agents

Traditional prompt-injection attacks often rely on slipping malicious instructions into web pages that look harmless to human readers. These can sometimes be detected by advanced monitoring tools. The new strategy works differently.

Instead of embedding harmful prompts in the same content humans see, attackers now create parallel versions of a website that only appear to AI agents. To the human eye, the page looks normal. To the AI agent, it becomes a weaponized environment filled with instructions designed to steal data, deploy malware, or misuse system access.

Because this malicious content never reaches real visitors or standard crawlers, it bypasses many existing defenses.

The Fingerprinting Trick Behind the Attack

The attack takes advantage of how easy it is to recognize AI browsing patterns. Unlike humans, AI agents tend to leave highly predictable traces. Their automation frameworks, network behaviors, and browser signatures give them away instantly.

Once the system identifies that the visitor is an AI agent, the server swaps the safe page with a poisoned one. Sometimes it looks identical, with just hidden commands. Other times, it is a completely different page that manipulates the AI into unlocking sensitive information or exposing authentication tokens from the user’s machine.

Proof That It Works

To show how effective this method is, Zychlinski built a test site containing both a benign and a malicious version. When he directed AI agents such as Anthropic Claude 4 Sonnet, OpenAI GPT-5 Fast, and Google Gemini 2.5 Pro to the page, every single one fell for the trap.

This confirmed that even the most advanced AI systems can be deceived when browsing the web without safeguards.

Why It Is So Dangerous

The danger lies not only in the attack’s stealth but also in its simplicity. A hacker does not need a complex infrastructure to set it up. By exploiting the way AI agents consume and act on online data, attackers can quietly manipulate their behavior in ways the end user never sees.

As Zychlinski warned, this is a wake-up call for AI security. If the web can show one reality to humans and another to machines, then any agent working online becomes a potential target.

Building Defenses for a Parallel Web

Defending AI agents against this class of attack will require more than just patching. Experts suggest several strategies:

  • Hide their fingerprints so AI agents cannot be easily recognized by servers.
  • Divide responsibilities by separating the planning system (which makes decisions) from a sandboxed executor (which fetches and sanitizes content before passing it on).
  • Develop smarter crawlers and honeypots to catch websites that serve cloaked malicious content.

These measures would make it harder for attackers to deliver poisoned prompts without being noticed.

The Future of AI Security

The rise of this “parallel poisoned web” highlights a growing challenge in the AI era. As agents become more powerful and autonomous, so do the methods used to exploit them.

The lesson is clear. AI assistants cannot be allowed to roam the internet unprotected. Without stronger defenses, the invisible web designed for machines could soon become the most dangerous place online.

Scroll to Top