Google Flags First AI-Built Zero-Day in Planned Mass Hack Operation
Google's Threat Intelligence Group (GTIG) claims to have identified the first confirmed case of cybercriminals using artificial intelligence to both discover and weaponize a zero-day vulnerability in a planned mass-exploitation campaign. The company said it worked with the unnamed vendor to quietly patch the flaw—a two-factor authentication bypass in a popular open source web-based administration platform—before the operation could properly launch.
According to the report, shared with The Register ahead of publication, the attackers appear to have used an AI model both to identify the security gap and to assist in developing a functional exploit. GTIG believes this represents a significant shift in the threat landscape, demonstrating that AI can accelerate the traditionally labor-intensive process of finding and weaponizing previously unknown vulnerabilities. Google suggested the intervention may have disrupted the operation before it gained traction, though the full scope of the planned campaign remains unclear.
The development raises fresh concerns that AI is lowering the barrier for less skilled threat actors to launch sophisticated intrusions at scale. Security researchers have long warned that AI could democratize zero-day development, and this case—if confirmed—would represent a concrete demonstration of that risk materializing. Organizations relying on affected platforms are advised to verify that their patches are current and to monitor for indicators of compromise, though Google did not disclose which specific open source project was involved.