The Lab · 2026-03-25 13:27:22 · GitHub Issues
A critical security flaw in a codebase's AI summary feature allows malicious Large Language Model (LLM) outputs to execute arbitrary JavaScript in users' browsers. The vulnerability stems from the direct insertion of streaming LLM responses into the Document Object Model (DOM) using `innerHTML` in the `ai_summary.js` f...
The Lab · 2026-04-12 12:22:34 · GitHub Issues
A new open-source red teaming tool, dubbed the Garak probing engine, has been introduced on GitHub with the explicit purpose of systematically scanning Large Language Models (LLMs) for critical security vulnerabilities. The tool's release signals a growing, proactive effort within the security community to pressure-tes...
The Lab · 2026-04-18 03:22:34 · GitHub Issues
OpenClaw has implemented a mandatory, injection-resistant security preamble for all agent sessions, a foundational shift in defending against the top-ranked OWASP vulnerability for LLM applications. The change, introduced in PR #42211, prepends a static text instruction to all system prompts, directing the model to tre...
The Lab · 2026-05-05 12:31:40 · GitHub Issues
A newly documented vulnerability in the Model Context Protocol (MCP) tool execution pipeline allows untrusted tool results to enter LLM conversations without sanitization, injection warnings, or structural boundary markers. The issue, filed as a GitHub security concern, details how the `MCPManager.CallTool()` method jo...
The Lab · 2026-05-11 20:18:29 · GitHub Issues
An AI endpoint accessible at http://34.16.47.248:8882 was found to be leaking protected health information (PHI), including patient names, Social Security Numbers, diagnoses, insurance details, and lab results. The vulnerability was identified through automated red team testing, which successfully prompted the system t...