Google Caught AI-Built Zero-Day Before Cybercrime Group Could Wield It in Mass Attacks
Google's Threat Intelligence Group (GTIG) uncovered an AI-generated zero-day exploit and neutralized it before a known cybercrime operation could weaponize the vulnerability in a large-scale campaign—marking what analysts describe as the first concrete evidence that artificial intelligence is actively producing working exploits against previously unknown flaws.
The flaw resided in a Python script tied to a popular open-source web-based administration tool, Google said in a report published Monday. The defect allowed attackers to circumvent two-factor authentication protections. Google notified the vendor, and the vulnerability was patched before the criminal group could begin mass exploitation, averting what could have been a widespread intrusion campaign. Google declined to name the affected software. While security researchers have long theorized that threat actors would eventually delegate exploit development to AI systems, this case represents the first time GTIG has documented compelling proof of that scenario unfolding in practice.
The discovery raises the stakes for defenders already stretched thin by an accelerating vulnerability landscape. GTIG chief analyst John Hultquist told CyberScoop the find likely represents only a fraction of actual AI-assisted exploit activity. "We finally uncovered some evidence this is happening," Hultquist said. "This is probably the tip of the iceberg and it's certainly not going to be the last." Security teams now face the prospect that AI-accelerated exploit development could outpace traditional disclosure and patching cycles, compressing the window between vulnerability discovery and active weaponization.