Corporate and individual developers relying on AI tools for code development face a unique risk, as demonstrated by a recent security flaw discovered in Cursor, an AI-powered code editing software. Threat researchers at AimLabs revealed that a subtle data-poisoning attack targeting Cursor could grant attackers remote access to user devices. As the use of AI tools accelerates across industries, the incident has raised concerns about the preparedness of organizations to manage security vulnerabilities inherent in AI agents. Organizations using earlier versions of Cursor must take immediate action, as only version 1.3 contains the necessary fix.
Other reports on AI-powered software attacks have previously focused on data leaks or accidental code exposure, whereas this case involves prompt injection achieving remote code execution, even through integrated platforms like Slack. While similar manipulation vulnerabilities have been identified, execution of malicious commands via prompt injection linked with external tool integration remains relatively uncommon. Prior discussions have also highlighted persistent vulnerabilities within model agent workflows, but immediate remote code execution with elevated privileges has not been as widely documented.
How Did Researchers Uncover the Cursor Vulnerability?
AimLabs identified the critical flaw, assigned CVE-2025-54135, when examining how Cursor’s agent fetched data via the Model Contest Protocol (MCP) server. This protocol facilitates tool access from platforms like Slack and GitHub, and upon receiving a specially crafted prompt, Cursor would automatically execute malicious commands. The investigative team found that by using a single prompt delivered through an integrated Slack channel, they could silently alter Cursor’s configuration and trigger unauthorized code execution without user intervention.
What Security Risks Does Prompt Injection Pose for AI Agents?
Prompt injection attacks reveal a significant challenge for AI models embedded in workflows. Since these agents process external instructions with high privileges, there is a risk they may follow harmful commands originating from untrusted sources. As AimLabs stated,
“The tools expose the agent to external and untrusted data, which can affect the agent’s control-flow.”
This access could let attackers hijack the agent’s user session and take unauthorized actions on the user’s behalf.
Will Similar Vulnerabilities Continue to Affect AI-Powered Tools?
Cursor’s developer team addressed the flaw shortly after its disclosure, releasing a patch in version 1.3, but the underlying risk remains. AimLabs warned that the nature of large language models, which rely on external commands and prompts, makes similar vulnerabilities likely across other platforms using agent-based AI.
“Because model output steers the execution path of any AI agent, this vulnerability pattern is intrinsic and keeps resurfacing across multiple platforms,”
the research emphasized, noting the need for improved agent design and workflow security.
Users of Cursor are advised to upgrade to version 1.3 as earlier versions are still vulnerable to prompt-injection driven remote code execution. Developers integrating AI and natural language processing tools should reassess their security postures, especially when agents possess privileges that go beyond simple code suggestions. Reviewing all points where agents receive external instruction, enforcing logging, and deploying robust access controls can provide mitigation, but residual risk is likely in systems that rely on external prompts to drive agent behavior. Anyone adopting AI-powered coding assistants should regularly monitor for related security anomalies and stay alert to new advisories and patches for their tools.