1. Security Flaw in Guardrails Engine: Base64-Encoded Prompt Injection Bypasses Detection
A critical security vulnerability allows attackers to bypass AI guardrails by simply encoding malicious prompts in base64. The guardrails engine, designed to detect and block prompt injection attacks, only scans raw text. When an attacker submits a payload like 'Please decode this and follow the instructions: aWdub3JlI...