Critical Security Gap: AI Agent Framework Lacks Responsible Disclosure Policy for Shell Hook Attack Surface
A security audit has flagged a critical gap in a widely used AI agent framework: the complete absence of a formal responsible disclosure policy. The framework's architecture, which executes custom shell hooks on every agent tool call and writes directly to the user's filesystem, presents a significant attack surface. Without a `SECURITY.md` file or a defined private reporting channel, security researchers have no secure, official path for reporting vulnerabilities they discover, leaving potential exploits unaddressed and users at risk.
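To make that surface concrete, the sketch below shows one way such a hook dispatch might work. This is an illustrative assumption, not the framework's actual code: the `HOOKS_DIR` variable, directory layout, and invocation logic are all hypothetical.

```sh
#!/usr/bin/env sh
# Hypothetical sketch of the hook dispatch described above. The framework's
# real invocation logic is not shown in the audit; HOOKS_DIR and the
# TOOL_NAME argument are illustrative assumptions.
HOOKS_DIR="${HOOKS_DIR:-$HOME/.agent/hooks}"
TOOL_NAME="$1"

# If pre-tool-use.sh exists, it runs before every tool call with the same
# privileges as the agent process: any writable hook file is effectively
# arbitrary code execution on each operation.
if [ -x "$HOOKS_DIR/pre-tool-use.sh" ]; then
    "$HOOKS_DIR/pre-tool-use.sh" "$TOOL_NAME"
fi

# ... the tool call itself would execute here ...

if [ -x "$HOOKS_DIR/post-tool-use.sh" ]; then
    "$HOOKS_DIR/post-tool-use.sh" "$TOOL_NAME"
fi
```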
The audit finding, categorized under the critical RHS Dimension 3, centers on the framework's execution of `pre-tool-use.sh` and `post-tool-use.sh` hooks. Because these scripts run on every agent operation, any flaw in how they are resolved or invoked becomes a broad vector for privilege escalation or arbitrary code execution. The acceptance criteria for remediation are explicit: the project must immediately create a `SECURITY.md` file following GitHub's advisory template, define supported versions, provide a private disclosure email or GitHub Security Advisory link, and enable private vulnerability reporting in the repository settings.
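A minimal remediation sketch along those lines, assuming a GitHub-hosted repository: `OWNER/REPO`, the version table, the contact address, and the response window below are placeholders, not values taken from the audit.

```sh
#!/usr/bin/env sh
# Sketch: create a SECURITY.md modeled on GitHub's advisory template.
# OWNER/REPO, the supported versions, and the contact address are placeholders.
cat > SECURITY.md <<'EOF'
# Security Policy

## Supported Versions

| Version | Supported          |
| ------- | ------------------ |
| 1.x     | :white_check_mark: |
| < 1.0   | :x:                |

## Reporting a Vulnerability

Please do not open public issues for security problems. Report privately via
GitHub Security Advisories (https://github.com/OWNER/REPO/security/advisories/new)
or email security@example.com. We aim to acknowledge reports within 72 hours.
EOF

# Enable private vulnerability reporting through the GitHub REST API
# (assumes the gh CLI is authenticated with admin access to the repository).
gh api --method PUT /repos/OWNER/REPO/private-vulnerability-reporting
```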
This oversight places all downstream users and integrators of the framework in a precarious position. Without a coordinated disclosure process, security flaws could be weaponized before maintainers are even aware of them, or researchers may disclose issues publicly without warning. The required fixes, updating `README.md` and `CONTRIBUTING.md` to reference the new policy, are procedural, but their absence signals a foundational gap in the project's security maturity and demands urgent attention from its maintainers.
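The cross-references themselves could be as small as the sketch below; the section wording is an assumption, and only the pointer to `SECURITY.md` is required by the finding.

```sh
#!/usr/bin/env sh
# Sketch: point README.md and CONTRIBUTING.md at the new policy.
# The exact wording is illustrative; the finding only requires the reference.
cat >> README.md <<'EOF'

## Security

See [SECURITY.md](SECURITY.md) for supported versions and how to report
vulnerabilities privately.
EOF

cat >> CONTRIBUTING.md <<'EOF'

Security issues must be reported privately; see [SECURITY.md](SECURITY.md).
EOF
```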