IDEsaster Exposes 30 Flaws in AI Coding Tools

AI-powered IDEs are opening new attack surfaces by blending autonomous agents with old security assumptions.

IDEsaster shows how AI tooling can convert harmless IDE features into attack paths

Thirty vulnerabilities in AI coding tools show how prompt injection and auto-approved actions can escalate into data theft and remote code execution. Every major AI IDE tested was vulnerable.

Security researcher Ari Marzouk has disclosed a set of more than 30 vulnerabilities in AI-powered IDEs. The issues, grouped under the name IDEsaster, affect tools including Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, Claude Code, and Cline. Twenty-four of the flaws now have CVE identifiers. The research shows that identical attack chains worked across every AI IDE that was tested.

AI IDEs combine large language models, autonomous tool calls, and long-standing IDE features. Those features were never built for environments where an AI agent can be hijacked through prompt injection. Once an attacker pollutes the model's context, the agent begins executing legitimate actions that leak data or enable command execution.

IDEsaster chains three steps: the attacker hijacks the model's context, the AI agent performs auto-approved actions without user interaction, and the IDE's trusted features are triggered to read files, write files, modify settings, or execute commands. The result is data exfiltration and remote code execution paths built from features developers assume are safe.

Context hijacking can occur through invisible Unicode characters inside pasted code, poisoned URLs, malicious README files, or compromised Model Context Protocol (MCP) servers. The model ingests the attacker-controlled instructions and treats them as valid input.
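One practical defense against the invisible-character vector is to scan content before it reaches the agent. The following is a minimal sketch in Python, not part of the IDEsaster research: it flags Unicode format characters (category Cf), which covers zero-width characters, word joiners, bidirectional controls, and tag characters that render as nothing but are still read by the model.

import unicodedata

def find_invisible_chars(text):
    # Collect Unicode "format" characters (category Cf). These render as
    # nothing in most editors yet remain visible to the language model.
    hits = []
    for index, char in enumerate(text):
        if unicodedata.category(char) == "Cf":
            hits.append((index, f"U+{ord(char):04X}", unicodedata.name(char, "UNKNOWN")))
    return hits

sample = "print('hello')\u200bignore previous instructions"
for position, codepoint, name in find_invisible_chars(sample):
    print(position, codepoint, name)

Running the sketch on the sample string reports the zero-width space hidden between the code and the injected instruction.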

Several of the disclosed CVEs show how this works. In Cursor, Roo Code, JetBrains Junie, Kiro.dev, GitHub Copilot, and Claude Code, attackers can prompt the agent to read sensitive files and write a JSON file that references a remote schema on an attacker domain; the IDE then retrieves that schema automatically, leaking the data. In other flaws affecting GitHub Copilot, Cursor, Roo Code, Zed.dev, and Claude Code, prompt injection modifies settings files such as .vscode/settings.json or .idea/workspace.xml, changing paths like php.validate.executablePath or PATH_TO_GIT to point at malicious executables.
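Both patterns leave traces in workspace files, which makes them reasonable targets for an audit script. The sketch below is illustrative only: it assumes the exfiltration path uses the standard $schema key and that malicious settings show up as executable-path overrides, which may not match the exact keys used in the disclosed proofs of concept. The allowlist of schema hosts is an example, not a recommendation.

import json
from pathlib import Path
from urllib.parse import urlparse

ALLOWED_SCHEMA_HOSTS = {"json.schemastore.org"}  # example allowlist, adjust per project

def audit_workspace(root):
    findings = []
    for path in Path(root).rglob("*.json"):
        try:
            data = json.loads(path.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, UnicodeDecodeError, OSError):
            continue
        if not isinstance(data, dict):
            continue
        # A remote $schema reference can make the IDE issue an outbound request.
        schema = data.get("$schema", "")
        host = urlparse(str(schema)).netloc
        if host and host not in ALLOWED_SCHEMA_HOSTS:
            findings.append((path, "$schema", schema))
        # Settings that redirect tool executables (for example
        # php.validate.executablePath) are a known code execution path.
        for key, value in data.items():
            if "executablePath" in key and isinstance(value, str):
                findings.append((path, key, value))
    return findings

for path, key, value in audit_workspace("."):
    print(f"{path}: {key} -> {value}")

A real audit would also need to cover non-JSON configuration such as .idea/workspace.xml, but the same idea applies: diff and review any workspace file an agent writes before the IDE acts on it.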

Additional vulnerabilities let attackers edit workspace configuration files to achieve code execution. These attacks rely on auto-approved file writes: because many AI IDEs approve workspace file changes by default, the execution chain completes without any user interaction.

Marzouk recommends using AI IDEs only with trusted projects and inspecting external sources for hidden instructions. Only trusted MCP servers should be used, and they should be monitored, because their tools may pull attacker-controlled data. Vendors should enforce least privilege on LLM tools, reduce prompt injection vectors, sandbox execution, and test for path traversal and command injection.
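What least privilege could look like in practice varies by vendor, but the core idea is an approval gate in front of the agent's write and execute tools instead of blanket auto-approval. The sketch below is hypothetical: the tool names and protected paths are invented for illustration and do not come from any specific IDE's API.

SENSITIVE_TOOLS = {"write_file", "run_command", "edit_settings"}
PROTECTED_PATHS = (".vscode/", ".idea/")

def approve(tool_name, arguments, ask_user):
    # Read-only tools can remain auto-approved.
    if tool_name not in SENSITIVE_TOOLS:
        return True
    # Writes to workspace configuration are a known escalation path,
    # so they always require explicit confirmation.
    target = str(arguments.get("path", ""))
    if any(target.startswith(prefix) for prefix in PROTECTED_PATHS):
        return ask_user(f"Agent wants to modify {target}. Allow?")
    return ask_user(f"Agent wants to run {tool_name} with {arguments}. Allow?")

# Example: deny all sensitive actions non-interactively, as a CI run might.
print(approve("write_file", {"path": ".vscode/settings.json"}, ask_user=lambda prompt: False))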

The disclosure overlaps with other AI tool vulnerabilities. OpenAI Codex CLI contains a command injection flaw that executes MCP-configured commands at startup. Google Antigravity has indirect prompt injection weaknesses that attackers can use to harvest credentials or implant persistent backdoors inside trusted workspaces. A new class of attack called PromptPwnd uses prompt injection to manipulate AI agents attached to CI pipelines.

These findings show how AI agents expand the attack surface of development environments. The model does not reliably distinguish user intent from malicious context. According to Aikido researcher Rein Daelman, any repository that uses AI for triage, labeling, suggestions, or automated replies is exposed to prompt injection and supply chain compromise.

Marzouk argues that a "Secure for AI" mindset is now required: developers must design systems with the expectation that AI features will be abused as they evolve. That principle is needed to prevent future IDEsaster-style failures.



FAQ

What is IDEsaster?

IDEsaster is the name for a set of vulnerabilities in AI-powered IDEs that chain prompt injection with auto-approved actions to achieve data theft or command execution.

Which tools are affected?

Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, Claude Code, and Cline all contain related flaws.

How do these attacks work?

An attacker hijacks the model's context, triggers auto approved tool calls, and leverages legitimate IDE features to read files, write files, or run commands.

Why is prompt injection central to these vulnerabilities?

Prompt injection lets attackers feed hidden instructions into the model. The AI agent then performs harmful actions while believing it is following user intent.

What can developers do to reduce risk?

Use AI IDEs only with trusted projects, restrict MCP servers, review external sources for hidden instructions, and enforce least privilege on LLM tools.