WhisperX tag archive

#AI Security

This page collects WhisperX intelligence signals tagged #AI Security. It is designed for humans, search engines, and AI agents: each item links to a canonical source-backed record with sector, source, timestamp, credibility, and exportable structured data.

Latest Signals (20)

The Lab · 2026-03-25 06:33:28 · GitHub Issues

1. Netflix Builds Custom MCP Servers to Integrate Burp Suite, Internal Security Tools into AI Workflow

Netflix is developing custom Model Context Protocol (MCP) servers to directly integrate its internal security tooling and commercial platforms like Burp Suite Professional into an AI-driven workflow, codenamed 'Tetsuo'. This move signals a strategic push to automate and enhance security testing by connecting specialize...

The Lab · 2026-03-25 16:27:17 · GitHub Issues

2. AI Image Generation Service Exposed to High-Risk SSRF Attack via Unvalidated Model Output

A critical security flaw in an AI image generation service could allow attackers to hijack the backend system to probe internal networks and access private services. The vulnerability, a classic Server-Side Request Forgery (SSRF), stems from the service blindly fetching image URLs provided by the AI model without any v...

The Lab · 2026-03-25 21:57:02 · The Register

3. Context Hub Proof-of-Concept Exposes AI Supply Chain Risk: Poisoned Documentation, Not Malware

A new vulnerability in the AI development pipeline bypasses traditional malware entirely, relying instead on poisoned documentation to compromise coding agents. The attack vector, demonstrated in a proof-of-concept against the service Context Hub, reveals a critical weakness in how AI assistants consume and trust exter...

The Lab · 2026-03-26 15:27:19 · GitHub Issues

4. LangChain Core v1 Update Patches Critical SSRF Vulnerability in ChatOpenAI (CVE-2026-26013)

A major security update for LangChain Core patches a critical Server-Side Request Forgery (SSRF) vulnerability that could allow attackers to force AI applications to make unauthorized network requests. The flaw, tracked as CVE-2026-26013, resides in the `ChatOpenAI.get_num_tokens_from_messages()` method. This function,...

The Lab · 2026-03-26 20:27:00 · Decrypt

5. Ripple Deploys AI 'Red Team' on XRP Ledger, Uncovering Fresh Bugs as XRP Price Hits 2-Week Low

Ripple is deploying an AI-driven offensive security team against its own XRP Ledger, a move that has already exposed previously unknown vulnerabilities in the network's code. This aggressive, AI-assisted 'red teaming' initiative marks a significant escalation in Ripple's internal security posture, shifting from passive...

The Lab · 2026-03-27 06:27:02 · GitHub Issues

6. AI Security Flaw: Newline Characters Enable Prompt Injection in Image Generation API

A critical vulnerability in an AI image generation service allows attackers to bypass safety controls by injecting malicious instructions via simple newline characters. The flaw stems from the use of Python's `.format()` method to insert user-supplied prompts into a fixed template. When a user includes newline characte...

The Lab · 2026-03-27 07:26:51 · GitHub Issues

7. MCP Protocol Exposed: Fundamental Security Flaws Enable Widespread AI Agent Attacks

A critical security analysis reveals the Model Context Protocol (MCP), a foundational standard for connecting AI agents to external tools, contains deep-seated vulnerabilities that dramatically increase the risk of successful attacks. The research, detailed in the paper "Breaking the Protocol," identifies three core pr...

The Lab · 2026-03-27 14:27:29 · GitHub Issues

8. Microsoft hve-core Proposes 'VEX Generation Agent' for AI-Powered Vulnerability Triage

Microsoft's hve-core project is proposing a new AI-powered security agent designed to automate vulnerability triage for any codebase. The proposed 'VEX Generation Agent' would be a custom Copilot agent within the project's security collection, enabling users to scan for dependency vulnerabilities, perform AI-assisted e...

The Lab · 2026-03-27 22:27:17 · GitHub Issues

9. LangChain 0.2.5 Package Exposes 11 Critical Vulnerabilities, Including 9.3 CVSS Score Flaw

A critical security scan has flagged the widely-used LangChain 0.2.5 Python package as containing 11 distinct vulnerabilities, with the most severe scoring a 9.3 on the CVSS scale. This finding exposes a significant security risk for any application built on this foundational AI framework, which is designed for constru...

The Lab · 2026-03-28 00:27:02 · GitHub Issues

10. VS Code Copilot Chat Vulnerability: GPT Prompt Injection Bypasses Sensitive File Protections

A critical security flaw in Microsoft's VS Code Copilot Chat extension allowed attackers to bypass its core 'sensitive file' approval mechanism, potentially leading to remote code execution. The vulnerability, present in versions 0.37.2 and earlier, centers on the `apply_patch` function. An attacker could use a prompt-...

The Lab · 2026-03-28 00:27:10 · GitHub Issues

11. DemoCorp AI Project Exposed: Critical 7.5-Severity Vulnerabilities Found in Grunt Dependency

A critical security exposure has been identified within the DemoCorp AI-Based-Classification project on GitHub. The automated scan reveals six distinct vulnerabilities embedded in the project's dependency chain, with the highest severity rated at a critical 7.5 CVSS score. The flaw originates from the `grunt-1.6.1.tgz`...

The Lab · 2026-03-28 03:27:05 · GitHub Issues

12. LangChain 0.2.7 Exposes AI Apps to 11 Critical Vulnerabilities, Including 9.3 Severity Flaw

A foundational library for building AI applications is riddled with security holes. The Python package `langchain-0.2.7-py3-none-any.whl`, a core component for developers creating composable large language model (LLM) applications, has been flagged for 11 distinct vulnerabilities. The most severe carries a critical Com...

The Lab · 2026-03-28 11:27:00 · GitHub Issues

13. Wast Scanner's Active Vulnerability Tests Risk AI Agent Misuse, Prompting 'Safe Mode' Push

The `wast scan` command, a tool for web application security testing, currently runs active vulnerability probes by default—a design that poses a significant risk when used by AI agents. Without explicit user confirmation, the tool immediately sends potentially dangerous payloads, including XSS scripts and SQL injectio...

The Lab · 2026-03-29 19:26:57 · GitHub Issues

14. parisneo/lollms AI 프레임워크에 치명적 SSRF 취약점 발견, 내부 데이터 노출 위험

parisneo/lollms AI 프레임워크의 2.2.0 이전 버전에 서버 측 요청 위조(SSRF) 취약점(CVE-2026-0560)이 존재한다. 이 취약점은 네트워크를 통해 원격으로 악용 가능하며, 공격자 권한이 필요 없어 비교적 쉬운 공격이 가능하다. CVSS 7.5(높음)로 평가된 이 취약점은 성공적으로 악용될 경우 시스템의 높은 수준의 기밀 정보가 유출될 수 있는 위험을 내포하고 있다. 이 취약점은 CWE-918로 분류되며, 공격 벡터는 네트워크(Network), 공격 복잡성은 낮음(AC:L), 필요한 권한은 없음(PR:N)으로 설정되어 있다. 이는 인증되지...

The Lab · 2026-03-29 20:26:57 · GitHub Issues

15. MCP Probe Tool: Critical Prompt Injection Risk in Tool Descriptions Exposed

A critical security gap has been identified in the `mcp probe` tool's verification process, exposing AI agents to direct prompt injection attacks. Currently, when the probe successfully retrieves a `tools/list` response from an MCP server, it only flags authentication-bypass issues and discards the actual response payl...

The Lab · 2026-03-30 07:26:59 · GitHub Issues

16. OpenClaw Security Gap: No Warning for Sideloaded Skills Creates 'APK-Style' Vulnerability

The OpenClaw AI agent framework currently lacks any security warning when users load skills from unofficial sources, creating a direct path for attackers to compromise systems. This design flaw treats all skill loading paths with equal trust, enabling a 'sideloading' vulnerability analogous to installing unverified APK...

The Lab · 2026-03-30 15:27:29 · GitHub Issues

17. GitHub: Prompt Vulnerability Scanner Espone Nuovi Rischi di Manipolazione AI

Un nuovo strumento di sicurezza open-source, il Prompt Vulnerability Scanner, sta evidenziando vulnerabilità critiche nei sistemi di intelligenza artificiale generativa. Lo strumento estende le capacità di un rilevatore di injection di base introducendo simulazioni attive di attacchi, inclusi payload adversariali, inje...

The Lab · 2026-03-30 15:27:30 · GitHub Issues

18. Critical AI Prompt Injection Vulnerability Found in Go DataTable Plugin Code

A security review of the `ai_plugin.go` code has uncovered multiple critical vulnerabilities, with a prompt injection flaw posing the most immediate and severe risk. The plugin directly embeds user-controlled JSON data into AI prompts without any sanitization, creating a direct path for attackers to manipulate the AI's...

The Lab · 2026-03-30 19:56:50 · VentureBeat

19. CrowdStrike CTO at RSAC 2026: Securing AI Agent 'Intent' Is an Unsolvable Problem

At RSA Conference 2026, CrowdStrike CTO Elia Zaitsev delivered a stark warning to the cybersecurity industry: securing AI agents by analyzing their intent is a fool's errand. "You can deceive, manipulate, and lie. That’s an inherent property of language. It’s a feature, not a flaw," Zaitsev told VentureBeat. His argume...

The Lab · 2026-03-31 18:26:54 · VentureBeat

20. OpenClaw AI Assistant Hijacked: CEO's Instance Sold on BreachForums for $25k

A CEO's personal AI assistant, powered by OpenClaw, was not just compromised—it was put up for sale. The incident, detailed by Cato Networks' VP of Threat Intelligence Etay Maor, reveals a critical security failure where an AI agent's autonomy was exploited, granting a threat actor root access to the executive's entire...