The Lab · 2026-04-18 18:22:35 · GitHub Issues
A critical security gap in the SAFE-MCP platform allows a root workspace to spoof the system's memory delimiter, creating a persistent vector for prompt injection. The vulnerability, designated SAFE-T1201, was identified in audit #747 and remains unpatched despite a recent mitigation attempt. The core flaw lies in the ...
The Lab · 2026-04-22 15:27:39 · GitHub Issues
A prompt injection vulnerability has been identified in WhisperX's internal AI service infrastructure, specifically within `apps/intelligence/app/services/prometheus.py`. The flaw allows an attacker to manipulate LLM-generated responses by injecting arbitrary instructions through unsanitized `userId` and `vaultId` quer...
The Lab · 2026-04-26 16:54:08 · GitHub Issues
A critical security flaw in the Orion-Web platform left an LLM-powered tool generation endpoint completely unauthenticated, exposing systems to arbitrary shell command execution. The vulnerability, tracked as SOC 2 corrective action CR-005, allowed attackers to craft malicious tool descriptions that the LLM would trans...
The Lab · 2026-04-28 15:54:11 · GitHub Issues
A critical security vulnerability in the expertise pipeline exposes users to session-scoped prompt injection. The `UserPromptSubmit` hook (`hooks/expertise-preflight.sh`) automatically calls `${EXPERTISE_API_URL}/expertise/search` on every prompt submission and injects the API response into the `systemMessage` field, w...
The Lab · 2026-05-10 18:31:48 · r/blueteamsec
Microsoft security researchers have identified critical remote code execution (RCE) vulnerabilities in widely deployed AI agent frameworks, warning that prompt injection techniques can be weaponized to compromise systems at scale. The research, published on the Microsoft Security Blog, demonstrates how carefully crafte...
The Lab · 2026-05-11 19:48:24 · GitHub Issues
Automated red team testing has identified a high-severity indirect prompt injection vulnerability in an AI endpoint hosted at http://34.16.47.248:8882. The flaw, classified under the OWASP LLM01:2025 framework, successfully exploited the model's susceptibility to resume-based injection instructions with 90% judge confi...
The Lab · 2026-05-11 20:18:30 · GitHub Issues
A critical security vulnerability in an AI-powered healthcare endpoint allows unauthorized access to patient records through prompt injection techniques, according to a red team finding released this week. The flaw, targeting the agentic AI module at http://34.16.47.248:8882, earned a CVSS score of 9.0—placing it in th...
The Lab · 2026-05-11 20:18:32 · GitHub Issues
A critical vulnerability has been identified in an agentic AI endpoint at http://34.16.47.248:8882 after automated red team testing successfully demonstrated that the system could be induced to disclose its ability to access sensitive patient datasets. The flaw carries a CVSS score of 9.0 and has been classified under ...
The Lab · 2026-05-12 17:48:26 · GitHub Issues
A remote code execution vulnerability has been discovered in VS Code 1.119.0 and earlier versions that allows a crafted prompt-injection attack on certain GPT family models to bypass user confirmation, enabling unauthorized editing of sensitive files on affected systems.
The flaw specifically exploits how VS Code's AI...