OpenAI CLI Security Flaw: Predictable Temp Files Allowed Local Attackers to Steal Model Data, Inject Scripts
A critical security vulnerability in OpenAI's command-line interface (CLI) tool, specifically within its onboarding module, exposed systems to local attacks. The flaw resided in six functions that created temporary files with predictable names derived from `Date.now()` and `Math.random().toString(36)`. Because both values are guessable, a local attacker could win a Time-of-Check to Time-of-Use (TOCTOU) race by pre-creating a symbolic link at the predicted file path in the shared `/tmp` directory before the CLI wrote to it.
The exploit had two primary, severe consequences. First, it enabled data exfiltration: an attacker could redirect the output of `curl` commands—which contained API responses with sensitive model data—to a location under their control. Second, and more dangerously, it allowed for script injection. By targeting the `writeSandboxConfigSyncFile` function, an attacker could inject a malicious script that would subsequently be piped into the `openshell sandbox connect` command, potentially leading to arbitrary code execution within the sandbox environment.
The vulnerable functions all resided in `bin/lib/onboard.js` and were integral to the tool's core operations: probing API endpoints for OpenAI-like, Anthropic, and Nvidia services (`probeOpenAiLikeEndpoint`, `probeAnthropicEndpoint`, `fetchNvidiaEndpointModels`), fetching model lists from those services (`fetchOpenAiLikeModels`, `fetchAnthropicModels`), and writing the sandbox configuration. The fix introduced a `secureTempFile(prefix, ext)` helper built on `fs.mkdtempSync()`, which atomically creates a private directory with an unpredictable random suffix, eliminating the predictability that made the attack possible.