Security Flaw: AI Sandbox Runs LLM-Generated Code as Root User, Elevating Container Escape Risk
A critical security misconfiguration has been identified in an AI code execution sandbox. The system, designed to run arbitrary Python code generated by Google's Gemini LLM, executes all of that code with full root privileges inside its Docker containers. This fundamental violation of the principle of least privilege dramatically raises the blast radius of any bug, turning a potential container escape vulnerability into a direct path to host-level system compromise.
The sandbox controller explicitly drops all Linux capabilities (`cap_drop=["ALL"]`) in an attempt to harden the environment. However, the container's main process still runs as the `root` user. Because of this configuration flaw, located in `sandbox/controller/app.py`, every process spawned for generated code runs with the highest level of privilege available in the container: it can write to root-owned system paths and interact with system-level resources, undermining much of the security benefit that capability dropping was meant to provide.
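For illustration, here is a minimal sketch of the flawed launch pattern, assuming a docker-py based controller; the function name, image tag, and exact arguments are assumptions, not the literal contents of `sandbox/controller/app.py`:

```python
import docker

client = docker.from_env()

def run_generated_code(code: str) -> bytes:
    """Hypothetical reconstruction of the flawed launch
    (image tag and surrounding options are assumptions)."""
    return client.containers.run(
        "sandbox-runner:latest",   # assumed runner image
        ["python", "-c", code],
        cap_drop=["ALL"],          # hardening that the root user undercuts
        network_disabled=True,
        remove=True,
        # No `user=` argument: Docker falls back to the image's USER,
        # which is root when the Dockerfile never sets one.
    )

# LLM-generated code lands here as uid 0:
print(run_generated_code("import os; print(os.getuid())").decode())  # -> 0
```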
The risk is acute. Docker does not remap user namespaces by default, so uid 0 inside the container is uid 0 on the host; if an attacker exploits a vulnerability to break out of the containerized environment, they immediately hold root on the underlying host. The flaw also amplifies the danger of future configuration errors, such as accidentally mounting sensitive host directories, which a root process could read or modify freely. The suggested remediation is straightforward: create a dedicated non-privileged user (e.g., `sandbox`) in the runner's Dockerfile and enforce its use at runtime, a basic but currently missing security control.
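A sketch of the remediated launcher, under the same assumptions as above. It presumes the runner's Dockerfile first creates the account with something like `RUN useradd --uid 1000 --no-create-home --shell /usr/sbin/nologin sandbox` followed by `USER sandbox`; the uid and user name here are illustrative:

```python
import docker

client = docker.from_env()

def run_generated_code(code: str) -> bytes:
    """Same launcher with the fix applied (sketch; image tag,
    user name, and uid are assumptions)."""
    return client.containers.run(
        "sandbox-runner:latest",
        ["python", "-c", code],
        user="sandbox",            # enforce the unprivileged account at
                                   # runtime, independent of the image
        cap_drop=["ALL"],
        network_disabled=True,
        remove=True,
    )

print(run_generated_code("import os; print(os.getuid())").decode())  # -> 1000
```

Passing `user=` explicitly at container start means the control holds even if the image's `USER` directive is later removed or a different image is substituted.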