Microsoft VS Code Vulnerability Allows AI Agent to Edit Sensitive Files Without User Consent via Prompt Injection
A remote code execution vulnerability has been discovered in VS Code versions 1.119.0 and earlier that allows a crafted prompt-injection attack against certain GPT-family models to bypass user confirmation, enabling unauthorized editing of sensitive files on affected systems.
The flaw lies in how VS Code's AI agent processes prompts when integrated with generative AI models. An attacker capable of embedding malicious instructions within prompts could manipulate the agent into modifying sensitive configuration files, credentials, or other protected resources without triggering the user consent mechanisms that normally govern file operations. Microsoft released version 1.119.1 as a corrective update, adding input validation for repository URLs before they are cloned. The company has published the patch commit and associated security guidance under CVE-2026-41109.
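The patch's exact validation logic has not been detailed publicly, but the general approach of vetting a repository URL before acting on it can be sketched as follows. This is a minimal illustration, not Microsoft's implementation; the `TRUSTED_HOSTS` set and the `is_safe_clone_url` helper are hypothetical names chosen for this example.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts considered safe to clone from;
# a real deployment would derive this from policy, not hardcode it.
TRUSTED_HOSTS = {"github.com", "gitlab.com", "dev.azure.com"}

def is_safe_clone_url(url: str) -> bool:
    """Reject repository URLs that are not plain HTTPS to a trusted host."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        # Blocks file://, ssh://, and git "ext::" transport tricks.
        return False
    host = (parsed.hostname or "").lower()
    return host in TRUSTED_HOSTS

# Examples:
is_safe_clone_url("https://github.com/org/repo.git")   # accepted
is_safe_clone_url("file:///etc/passwd")                # rejected: wrong scheme
is_safe_clone_url("https://evil.example/repo.git")     # rejected: untrusted host
```

The key design choice is allowlisting known-good hosts rather than blocklisting bad ones, since attacker-controlled input can trivially evade a blocklist.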
The vulnerability raises significant security concerns for developers working with proprietary code, API keys, or environment configurations. As an immediate workaround, users are advised to refrain from including untrusted or external data in GPT prompts within VS Code. Security teams should prioritize updating VS Code deployments and monitor development environments for unexpected file modifications. The issue underscores growing scrutiny of AI agent behaviors and the risks introduced when autonomous systems operate without adequate guardrails against prompt manipulation.
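One way to monitor a development environment for unexpected file modifications, as recommended above, is to baseline sensitive files and periodically compare digests. The sketch below uses only the standard library; the watched paths and function names are illustrative assumptions, not part of any official tooling.

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 digest for each watched file that exists."""
    return {p: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in paths if p.is_file()}

def changed_files(baseline, paths):
    """Return watched files whose content differs from the baseline,
    including files newly created or deleted since the snapshot."""
    current = snapshot(paths)
    return [p for p in paths if baseline.get(p) != current.get(p)]

# Hypothetical watch list; adjust to the secrets in your own workspace.
watched = [Path(".env"), Path("config/settings.json")]
baseline = snapshot(watched)
# ... later, e.g. on a timer or after an AI-agent session:
for path in changed_files(baseline, watched):
    print(f"warning: unexpected modification to {path}")
```

A scheduled comparison like this will not prevent an unauthorized edit, but it shortens the window before one is noticed and investigated.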