VS Code Copilot Chat Vulnerability: Prompt Injection Can Trigger Remote Code Execution via Unicode URL Spoofing
A critical remote code execution vulnerability has been disclosed in Microsoft's VS Code Copilot Chat, exposing users to potential compromise through a prompt injection attack. The flaw, present in versions 0.37.2 and earlier, allows a prompt-injected AI agent session to trick users into opening or fetching malicious URLs. The attack hinges on Unicode look-alike characters used to craft URLs that visually impersonate trusted domains, a technique known as homograph or IDN spoofing. Because the link appears legitimate, it bypasses user vigilance and creates a direct path for attackers to execute arbitrary code on the victim's machine.
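To illustrate the homograph technique in general terms (the domain names below are placeholders, not taken from the advisory), a minimal Python sketch shows that two hostnames can render almost identically while being entirely different strings:

```python
# A homograph spoof: the Cyrillic letter 'а' (U+0430) stands in for
# the Latin 'a' (U+0061). Both names render nearly identically in
# most fonts, but they are different strings pointing at different
# domains. "apple.com" is used purely as an illustrative target.
latin = "apple.com"
spoof = "\u0430pple.com"  # leading character is Cyrillic 'а'

print(latin == spoof)                           # False
print(hex(ord(latin[0])), hex(ord(spoof[0])))   # 0x61 vs 0x430
```

String equality and code-point inspection expose the substitution instantly; the human eye, looking at the rendered link text, generally cannot.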
The vulnerability resides in how Copilot Chat presents URLs to the user. Prior to the patch, the interface did not punycode-encode internationalized domain names (IDNs) before display, allowing attackers to exploit visual similarities between characters from different scripts. For instance, a Cyrillic 'а' could be used in place of a Latin 'a'. Users who approve a fetch request or click such a spoofed link from a compromised agent session are at immediate risk. Microsoft has assigned CVE-2026-21523 to this issue and released a fix in VS Code Copilot Chat version 0.37.3, which now displays IDNs in their punycode (ASCII-compatible) form, making spoofing attempts visually apparent.
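The mitigation described above, rendering IDNs in their punycode form, can be sketched with Python's standard-library IDNA codec (the hostname is a hypothetical example, and this is not Copilot Chat's actual implementation):

```python
# Punycode-encoding a hostname before display, as the patched
# extension does, turns a visually deceptive IDN into an obviously
# non-matching ASCII string. The spoofed name embeds a Cyrillic
# 'а' (U+0430) where a Latin 'a' would be.
spoofed_host = "ex\u0430mple.com"

# Python's built-in "idna" codec applies the IDNA ToASCII operation
# label by label, producing the xn-- ACE (ASCII-compatible encoding).
displayed = spoofed_host.encode("idna").decode("ascii")

print(displayed)  # xn--exmple-4nf.com
```

Shown as `xn--exmple-4nf.com` rather than a near-perfect imitation of `example.com`, the link no longer passes a casual visual check, which is precisely the property the 0.37.3 fix restores.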
This incident highlights the expanding attack surface introduced by AI-powered coding assistants and the distinct security challenges of prompt injection. Developers and organizations running older versions of the extension should update immediately. The only workaround for unpatched systems is a strict policy of refusing any URL fetch or open request originating from Copilot Chat agent sessions, which severely limits the tool's functionality. The disclosure invites broader scrutiny of how AI agents handle external resource requests and user interactions, signaling a need for more robust input validation and output sanitization frameworks within the rapidly evolving AI-integrated development ecosystem.