Google stops first known AI-developed zero-day exploit targeting 2FA bypass
Tags: AI · Security · Infrastructure

Google's Threat Intelligence Group (GTIG) reported on May 11 that it disrupted a zero-day exploit, developed with AI assistance, that was being prepared for mass exploitation against an open-source, web-based system administration tool. The exploit targeted a semantic logic flaw: the platform's developer had hardcoded a trust assumption into its two-factor authentication system, potentially allowing attackers to bypass 2FA at scale. Google researchers found evidence of AI involvement in the exploit's code, including a hallucinated CVSS score and structured, textbook-style formatting consistent with LLM training data. The Python-based exploit was being developed by prominent cybercrime threat actors. GTIG also reported that hackers are using persona-driven jailbreaking to get AI models to find vulnerabilities, and are feeding whole repositories of vulnerability data to AI models to refine exploit payloads. Google stated it does not believe its own Gemini model was used.
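The article does not describe the actual flaw, but a "hardcoded trust assumption" in a 2FA flow typically means the code short-circuits the second-factor check when some client-influenced condition looks trustworthy. A minimal, purely hypothetical sketch (the host allowlist, function names, and bypass condition are all invented for illustration, not taken from the real vulnerability):

```python
# Hypothetical illustration only -- NOT the actual flaw GTIG disrupted.
# The developer hardcodes a set of "trusted" hosts and skips the
# second factor for them; if the client can influence client_ip
# (e.g. via a spoofable X-Forwarded-For header), 2FA is bypassed.
import hmac

TRUSTED_HOSTS = {"127.0.0.1", "10.0.0.5"}  # hardcoded trust assumption

def verify_2fa(user_code: str, expected_code: str, client_ip: str) -> bool:
    if client_ip in TRUSTED_HOSTS:
        # Flawed logic: trust is decided by a value the attacker may control,
        # so the one-time code is never checked.
        return True
    # Constant-time comparison for the normal path
    return hmac.compare_digest(user_code, expected_code)

# An attacker who can forge the source address never needs a valid code:
print(verify_2fa("000000", "829471", "127.0.0.1"))  # bypassed
print(verify_2fa("000000", "829471", "203.0.113.9"))  # rejected
```

The point of calling this a *logic* flaw is that nothing here is memory-unsafe or syntactically wrong; spotting it requires understanding what the trust check is supposed to mean, which is why it is notable that an AI-assisted workflow targeted it.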
Technical significance
This is the first confirmed case of AI being used to develop a zero-day exploit for mass deployment, marking a qualitative shift in the threat landscape. The exploit targeted a logic flaw (the type of vulnerability that requires semantic understanding rather than pattern matching), suggesting AI is moving beyond simple vulnerability scanning into creative exploit development. The hallucinated CVSS score in the exploit code is a fingerprint that could become a detection signal, but also highlights how AI-generated exploits may contain subtle errors that defenders can exploit.
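To make the "hallucinated CVSS score as fingerprint" idea concrete: valid CVSS base scores run from 0.0 to 10.0 with one decimal place, so a score that breaks those constraints is a plausible tell that the text was invented rather than copied from a real advisory. A rough heuristic sketch (this is not Google's detection method, just an illustration of the kind of check a defender might write):

```python
# Illustrative heuristic, not GTIG's actual technique: scan source text
# for CVSS mentions whose numeric value is impossible -- out of the
# 0.0-10.0 range, or with more decimal precision than CVSS defines.
import re

FLOAT_RE = re.compile(r"\d+\.\d+")

def suspicious_cvss_lines(source: str) -> list[str]:
    findings = []
    for line in source.splitlines():
        if "CVSS" not in line.upper():
            continue
        for token in FLOAT_RE.findall(line):
            whole, frac = token.split(".")
            if float(token) > 10.0 or len(frac) > 1:
                findings.append(f"suspect CVSS value {token!r}: {line.strip()}")
    return findings

sample = "# Exploit for CVE-2025-XXXX, CVSS 9.87 (critical)\npayload = b'...'"
print(suspicious_cvss_lines(sample))  # flags the over-precise 9.87
```

A heuristic this simple would obviously miss plausible-looking fabrications (a hallucinated but well-formed 9.8, say), which mirrors the article's caveat: hallucination artifacts are a useful signal today, not a durable one.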