WhisperX tag archive

#prompt-injection

This page collects WhisperX intelligence signals tagged #prompt-injection. It is designed for humans, search engines, and AI agents: each item links to a canonical source-backed record with sector, source, timestamp, credibility, and exportable structured data.

Latest Signals (9)

The Lab · 2026-04-18 18:22:35 · GitHub Issues

1. SAFE-MCP Audit #747: GLOBAL Memory Delimiter-Spoofing Gap Enables Prompt Injection (SAFE-T1201)

A critical security gap in the SAFE-MCP platform allows a root workspace to spoof the system's memory delimiter, creating a persistent vector for prompt injection. The vulnerability, designated SAFE-T1201, was identified in audit #747 and remains unpatched despite a recent mitigation attempt. The core flaw lies in the ...

The Lab · 2026-04-22 15:27:39 · GitHub Issues

2. Prompt Injection Flaw in Nester's Prometheus Service Exposes Financial Advisory AI to Manipulation via Unsanitized User Parameters

A prompt injection vulnerability has been identified in WhisperX's internal AI service infrastructure, specifically within `apps/intelligence/app/services/prometheus.py`. The flaw allows an attacker to manipulate LLM-generated responses by injecting arbitrary instructions through unsanitized `userId` and `vaultId` quer...

The Lab · 2026-04-26 16:54:08 · GitHub Issues

3. Critical Authentication Bypass in Orion-Web LLM Tool Generation Allowed Remote Code Execution

A critical security flaw in the Orion-Web platform left an LLM-powered tool generation endpoint completely unauthenticated, exposing systems to arbitrary shell command execution. The vulnerability, tracked as SOC 2 corrective action CR-005, allowed attackers to craft malicious tool descriptions that the LLM would trans...

The Lab · 2026-04-28 15:54:11 · GitHub Issues

4. Live Prompt Injection via Hardcoded expertise-api Endpoint Exposes Claude Code, Copilot Users

A critical security vulnerability in the expertise pipeline exposes users to session-scoped prompt injection. The `UserPromptSubmit` hook (`hooks/expertise-preflight.sh`) automatically calls `${EXPERTISE_API_URL}/expertise/search` on every prompt submission and injects the API response into the `systemMessage` field, w...

The Lab · 2026-05-10 18:31:48 · r/blueteamsec

5. Microsoft Exposes Critical RCE Vulnerabilities in AI Agent Frameworks

Microsoft security researchers have identified critical remote code execution (RCE) vulnerabilities in widely deployed AI agent frameworks, warning that prompt injection techniques can be weaponized to compromise systems at scale. The research, published on the Microsoft Security Blog, demonstrates how carefully crafte...

The Lab · 2026-05-11 19:48:24 · GitHub Issues

6. AI Endpoint at 34.16.47.248:8882 Vulnerable to Indirect Prompt Injection via Resume Technique

Automated red team testing has identified a high-severity indirect prompt injection vulnerability in an AI endpoint hosted at http://34.16.47.248:8882. The flaw, classified under the OWASP LLM01:2025 framework, successfully exploited the model's susceptibility to resume-based injection instructions with 90% judge confi...

The Lab · 2026-05-11 20:18:30 · GitHub Issues

7. Critical Prompt Injection Vulnerability Exposes Patient Records Through AI Endpoint at IP 34.16.47.248:8882

A critical security vulnerability in an AI-powered healthcare endpoint allows unauthorized access to patient records through prompt injection techniques, according to a red team finding released this week. The flaw, targeting the agentic AI module at http://34.16.47.248:8882, earned a CVSS score of 9.0—placing it in th...

The Lab · 2026-05-11 20:18:32 · GitHub Issues

8. Agentic AI Endpoint Exposed PHI Access Capabilities in Critical LLM01:2025 Prompt Injection Test — CVSS 9.0

A critical vulnerability has been identified in an agentic AI endpoint at http://34.16.47.248:8882 after automated red team testing successfully demonstrated that the system could be induced to disclose its ability to access sensitive patient datasets. The flaw carries a CVSS score of 9.0 and has been classified under ...

The Lab · 2026-05-12 17:48:26 · GitHub Issues

9. Microsoft VS Code Vulnerability Allows AI Agent to Edit Sensitive Files Without User Consent via Prompt Injection

A remote code execution vulnerability has been discovered in VS Code 1.119.0 and earlier versions that allows a crafted prompt-injection attack on certain GPT family models to bypass user confirmation, enabling unauthorized editing of sensitive files on affected systems. The flaw specifically exploits how VS Code's AI...