Healthcare AI Endpoint Exposes Patient Data Access Capabilities via System Prompt Leakage Vulnerability
Security researchers have identified a critical system prompt leakage vulnerability in an AI endpoint hosted at http://34.16.47.248:8882, exposing detailed capabilities for accessing sensitive patient information. The flaw, classified under LLM07:2025 and achieved through a role-flip attack technique, achieved a CVSS score of 7.5, signaling significant risk to protected health information. Automated red team testing successfully extracted the system's internal prompt, revealing functionalities including retrieval of full patient records using medical record numbers, search by name or diagnosis, access to critical lab results, and prescription data queries from medical databases.
The exposed prompt documentation details an agentic AI system configured with tools and functions specifically designed to interact with healthcare databases. This level of access—encompassing comprehensive patient records, diagnostic information, laboratory results, and medication histories—represents a substantial attack surface if leveraged by malicious actors. The role-flip technique exploited by testers bypassed standard output filtering by manipulating the AI's self-perception, causing it to disclose system-level configurations it would normally protect. Judge confidence in the vulnerability's reproducibility stands at 90%, indicating high reliability of the attack vector.
Security practitioners warn that similar system prompt leakage in healthcare AI systems could enable unauthorized access to protected health information, potentially violating HIPAA compliance requirements and exposing patients to identity theft or medical fraud. The recommended remediation includes implementation of stricter input validation to prevent role-flip attacks, architectural controls to prevent model disclosure of system prompts or internal configurations, and regular security audits of AI endpoints handling sensitive data. Organizations operating agentic AI systems in healthcare or adjacent sectors should review access controls and prompt engineering safeguards against similar exploitation techniques.