Agent Intelligence Gap: Auto-Recall Memory Missing in `adapt_plan`, Forcing Costly Rediscovery
The agent's planning loop has a critical blind spot: it never automatically recalls past operational knowledge, so every new scan starts from a blank slate. Tools for searching and storing memory exist, but the agent rarely invokes them on its own, since nothing in the prompt compels it to and weighing whether a lookup is worth the tokens is itself a cost the model seldom pays. The result is a wasteful cycle: a target scanned 20 times has its attack surface and vulnerabilities rediscovered from scratch on each run, squandering budget and missing patterns that could be learned across engagements.
The core issue resides in the `adapt_plan` function within the agent's modules. The `search_memory` and `store_memory` tools in `modules/agent/tools.py` are designed to provide cross-scan knowledge by letting the agent query past findings and techniques, but they remain passive. The system never proactively folds prior experience into the planning phase, violating a key principle of learning from past runs. As a consequence, intelligence such as 'this tech stack tends to have X vulnerability' or 'payload Y works against Z framework' is never surfaced automatically when new attack plans are formulated.
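To make the gap concrete, here is a hypothetical sketch of how the memory tools are likely exposed today: they sit in a tool list the model *may* call, but nothing in the planning path invokes them. Every name and signature below is an assumption for illustration, not the actual contents of `modules/agent/tools.py`.

```python
# Hypothetical sketch (assumed names and signatures) of the current passive setup.
from typing import Any


def search_memory(query: str, top_k: int = 5) -> list[dict[str, Any]]:
    """Assumed interface: return prior findings/techniques matching `query`."""
    raise NotImplementedError  # backed by the real memory store in practice


def store_memory(entry: dict[str, Any]) -> None:
    """Assumed interface: persist a finding or technique for future scans."""
    raise NotImplementedError


# The tools are merely listed for the model to call at its own discretion;
# adapt_plan itself never touches them, so recall almost never happens.
AVAILABLE_TOOLS = [search_memory, store_memory]
```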
The proposed solution is to implement an automatic recall mechanism inside `adapt_plan`. The function would query the memory store using the newly discovered attack surface as the search key and inject the top-K relevant prior memories as additional context for the agent. Shifting from optional, model-initiated recall to mandatory, system-triggered integration removes the inefficiency at its source: every plan is augmented by the agent's own historical intelligence, which should translate into faster runs, lower token spend, and better pattern recognition across engagements.
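A minimal sketch of the proposed auto-recall step follows. It assumes the `search_memory` interface from the previous sketch and an LLM client passed in with a `complete()` method; the prompt wording, `TOP_K` value, and helper names are illustrative, not a definitive implementation.

```python
# Minimal sketch of system-triggered recall inside adapt_plan.
# search_memory() and the llm client are assumed interfaces; the rest is illustrative.
TOP_K = 5  # number of prior memories to inject into the planning prompt


def build_recall_context(attack_surface: dict[str, str]) -> str:
    """Turn the newly discovered surface into a query and fetch prior knowledge."""
    # e.g. "server: nginx 1.18, cms: WordPress 6.2, endpoint: /wp-json"
    query = ", ".join(f"{key}: {value}" for key, value in attack_surface.items())
    memories = search_memory(query, top_k=TOP_K)
    if not memories:
        return "(no relevant prior memories)"
    return "\n".join(f"- {m['summary']}" for m in memories)


def adapt_plan(state: dict, attack_surface: dict[str, str], llm) -> str:
    """Revise the attack plan, always prefixed with recalled prior intelligence."""
    prompt = (
        "Relevant knowledge from previous engagements:\n"
        f"{build_recall_context(attack_surface)}\n\n"
        f"Current attack surface:\n{attack_surface}\n\n"
        f"Current plan/state:\n{state}\n\n"
        "Revise the attack plan, prioritising techniques that worked before."
    )
    return llm.complete(prompt)  # assumed LLM client; recall is no longer optional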