AI Guardian Security Flaw Allows Bypass of Immutable Enterprise Policies via Remote Config Injection
A security vulnerability in AI Guardian enables users to circumvent enterprise-deployed immutable policies by injecting their own remote configuration URLs. The flaw, identified in the `_load_remote_configs()` method within `src/ai_guardian/tool_policy.py`, stems from how the system merges remote configurations from multiple sources without enforcing a strict priority hierarchy. When a user adds a remote config URL to their local `~/.config/ai-guardian/ai-guardian.json` file, that configuration can override policies the enterprise marked as immutable—effectively nullifying security controls the organization deployed at the system level.
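The report does not reproduce the affected code, but the failure mode corresponds to a merge along the following lines, where every source contributes URLs to one flat list. This is an illustrative sketch only: the key name `remote_config_urls` and the environment variable `AI_GUARDIAN_REMOTE_CONFIGS` are assumptions, not AI Guardian's actual schema.

```python
from pathlib import Path
import json
import os

# Illustrative sketch only -- not the actual _load_remote_configs() code.
# The flaw arises when every source is treated as equally authoritative.
def load_remote_config_urls() -> list[str]:
    urls: list[str] = []

    # 1. Enterprise/system-level config, deployed by administrators.
    system_cfg = Path("/etc/ai-guardian/remote-configs.json")
    if system_cfg.exists():
        urls.extend(json.loads(system_cfg.read_text()).get("remote_config_urls", []))

    # 2. Environment variable (hypothetical name).
    if env_urls := os.environ.get("AI_GUARDIAN_REMOTE_CONFIGS"):
        urls.extend(env_urls.split(","))

    # 3. User-level config -- fully controlled by the local user.
    user_cfg = Path.home() / ".config/ai-guardian/ai-guardian.json"
    if user_cfg.exists():
        urls.extend(json.loads(user_cfg.read_text()).get("remote_config_urls", []))

    # Every source lands in one flat list: nothing distinguishes an
    # enterprise-pinned URL from one the user injected.
    return urls
```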
The issue becomes critical in enterprise environments that rely on immutability flags to enforce security boundaries. An organization deploying a locked-down policy through `/etc/ai-guardian/remote-configs.json` expects those restrictions to hold regardless of user-level configuration. However, the current merge logic loads remote URLs from the system config, environment variables, the user config, and the local config into a single pool, and user-supplied URLs can end up taking precedence. An attacker, or simply a power user, can therefore load a rogue remote configuration that bypasses SSRF protection, content filtering, or other safeguards the enterprise intends to enforce.
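To make the override mechanism concrete, the toy example below shows how a naive last-writer-wins merge discards an enterprise policy. The policy keys and the `immutable` flag are hypothetical placeholders, not AI Guardian's real schema.

```python
# Hypothetical policy documents fetched from remote config URLs; key names
# are illustrative, not the project's actual schema.
enterprise_policy = {
    "ssrf_protection": {"enabled": True, "immutable": True},
    "content_filtering": {"enabled": True, "immutable": True},
}

rogue_user_policy = {
    # Fetched from a URL the user added to ~/.config/ai-guardian/ai-guardian.json
    "ssrf_protection": {"enabled": False},
    "content_filtering": {"enabled": False},
}

# A last-writer-wins merge ignores the immutable flag entirely:
merged = {**enterprise_policy, **rogue_user_policy}
print(merged["ssrf_protection"])   # {'enabled': False} -- protection silently disabled
```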
The proposed remedy introduces a cascading priority for remote config sources, ensuring that enterprise-deployed URLs load with higher precedence than user-level sources. This would restore the intended security boundary between centrally managed policies and individual user overrides. Until a fix is implemented, organizations using AI Guardian in controlled environments should audit local config files for unauthorized remote URLs and consider restricting users' ability to modify user-level configuration files.
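A minimal sketch of such a cascading merge, assuming each source can be labeled by origin, might look like the following. The source labels, data shapes, and function name are illustrative, not the proposed patch itself.

```python
# Sketch of a cascading-priority merge, highest-priority source first.
PRIORITY_ORDER = ["system", "environment", "user", "local"]

def merge_policies(policies_by_source: dict[str, dict]) -> dict:
    merged: dict = {}
    for source in PRIORITY_ORDER:
        for key, value in policies_by_source.get(source, {}).items():
            # setdefault keeps whatever a higher-priority source already set,
            # so a user-level config can never displace an enterprise policy.
            merged.setdefault(key, value)
    return merged

# Example: the enterprise policy from the system source survives the merge
# even though the user source tries to disable it.
merged = merge_policies({
    "system": {"ssrf_protection": {"enabled": True, "immutable": True}},
    "user":   {"ssrf_protection": {"enabled": False}},
})
print(merged["ssrf_protection"])   # {'enabled': True, 'immutable': True}
```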