Google Antigravity AI Coding Tool Contained Critical Prompt Injection Flaw, Allowing Malicious Code Execution
A critical security vulnerability in Google's Antigravity AI coding tool could have allowed attackers to bypass safeguards and execute malicious code. Researchers identified a prompt injection bug that, if exploited, would have let attackers run arbitrary commands on systems using the tool. The flaw represented a significant weakness in the security perimeter of an AI-powered development assistant designed to generate and manage code.
The flaw resided specifically within Google's Antigravity tool, an AI system intended to assist developers. According to the report, the prompt injection technique enabled attackers to manipulate the AI's instructions, effectively tricking it into executing harmful commands it was designed to block. This bypass occurred despite built-in security measures meant to prevent precisely such scenarios, highlighting a gap between intended safeguards and practical exploitability in generative AI applications.
The discovery places immediate scrutiny on the security frameworks surrounding AI coding assistants, a rapidly growing sector. While Google has reportedly fixed the issue, the incident underscores the inherent risks of integrating powerful, autonomous AI into software development pipelines. It raises urgent questions for enterprise adopters about the robustness of internal security reviews for third-party AI tools and the potential for similar vulnerabilities in other AI-powered development platforms.