The Lab · 2026-04-02 21:57:00 · Decrypt
Google DeepMind researchers have published a landmark paper detailing a comprehensive taxonomy of attacks that can trap, hijack, and destabilize autonomous AI agents. The study maps six distinct categories of vulnerabilities, ranging from subtle, invisible HTML commands that can manipulate an agent's behavior to coordi...
The Lab · 2026-04-04 08:26:58 · GitHub Issues
The developers behind AutoAudit Research v2.0 are publicly soliciting experienced security researchers to conduct a critical review of their automated smart contract audit platform. This is not a standard software release; it's a direct call for adversarial scrutiny of a system designed to find vulnerabilities in other...
The Lab · 2026-04-07 18:57:03 · The Verge
Anthropic has unveiled a new AI model, Claude Mythos Preview, that reportedly identified security vulnerabilities 'in every major operating system and web browser.' This discovery is part of Project Glasswing, a high-stakes cybersecurity initiative launched in partnership with tech titans including Nvidia, Google, Amaz...
The Lab · 2026-04-13 17:52:34 · Schneier on Security
The cybersecurity world is fixated on Anthropic's new AI model, Claude Mythos Preview, not for its capabilities but for the threat it represents. Anthropic has explicitly withheld the model from public release, citing its potent ability to generate cyberattacks. In response, the company has launched Project Glasswing, ...
The Lab · 2026-05-09 11:31:39 · Wired
Security researchers have identified significant vulnerabilities in consumer robot lawn mowers, raising concerns about the expanding attack surface of connected home devices. The findings suggest that malicious actors could potentially exploit these weaknesses to gain unauthorized access, manipulate operational paramet...
The Lab · 2026-05-10 02:31:42 · Mastodon:mastodon.social:#infosec
A novel approach to vulnerability research is pushing large language models past their built-in guardrails to surface out-of-bounds write vulnerabilities in the Linux kernel. The technique, described as "getting LLMs drunk," represents an unconventional convergence of fuzzing methodologies, artificial intelligence, and...
The Lab · 2026-05-11 21:18:26 · Browser Cybersecurity Dive
Google Threat Intelligence Group (GTIG) has documented what researchers believe to be the first successful use of AI to develop a working zero-day exploit. The capability demonstration, outlined in a report released Monday, signals a potential inflection point in the scale and velocity of cyber threat operations. The t...
The Lab · 2026-05-12 16:48:18 · VentureBeat
Between May 6 and 7, four independent security research teams published findings exposing interconnected vulnerabilities in Anthropic's Claude that researchers say share a single root cause. The disclosures—covering a Mexican water utility, a Chrome extension, and OAuth token hijacking via Claude Code—reveal what exper...
The Lab · 2026-05-13 12:48:27 · Mastodon:hachyderm.io:#privacy
A growing assumption in the cybersecurity industry holds that artificial intelligence primarily strengthens defensive capabilities—faster threat detection, automated incident response, smarter anomaly identification. Google researchers have now publicly challenged that premise, presenting evidence that AI is actively e...