Google Confirms First AI-Developed Zero-Day Exploit Used to Bypass 2FA at Scale
Google has confirmed that an unidentified threat actor deployed, in the wild, a zero-day exploit likely developed with the help of an artificial intelligence system, the first documented case of AI being weaponized for vulnerability discovery and exploit generation. The disclosure marks a significant escalation in the practical application of AI by malicious actors, shifting it from theoretical concern to operational reality.
The exploit specifically targeted two-factor authentication (2FA) systems, enabling mass exploitation across affected infrastructure. Google Threat Intelligence Group attributed the activity to cybercrime-linked actors, though the company stopped short of identifying the specific group or naming the affected vendors. The zero-day was identified before it could be broadly weaponized, but security researchers warn the development signals a new phase in automated cyberattacks where AI accelerates the entire exploit development pipeline—from finding vulnerabilities to crafting working payloads.
The incident raises serious questions about the defensive gap between AI-assisted offense and traditional security tooling. Security firms have long theorized that state-sponsored and sophisticated criminal groups would eventually use large language models to reduce the expertise barrier and speed up zero-day research. This case suggests that threshold has been crossed. Organizations relying solely on signature-based defenses or delayed patch cycles face elevated risk as AI-generated exploits can compress development timelines from months to days or even hours.