Pentagon's Anthropic Cutoff Exposes Hidden AI Dependency Chains in Enterprise
A federal directive ordering all U.S. government agencies to cease using Anthropic technology comes with a six-month phaseout window. That timeline assumes agencies already know where Anthropic’s models sit inside their workflows. Most don’t today. Most enterprises wouldn’t, either.

The gap between what enterprises think they’ve approved and what’s actually running in production is wider than most security leaders realize. AI vendor dependencies don’t stop at the contract you signed; they cascade through your vendors, your vendors’ vendors, and the SaaS platforms your teams adopted without a procurement review. Most enterprises have never mapped that chain.

A January 2026 Panorays survey of 200 U.S. CISOs put a number on the problem: Only 15% said they have full visibility into their software supply chains, up from just 3% a year ago. And 49% of workers had adopted AI tools without employer approval, according to a BlackFog survey of 2,000 workers at companies with more than 500 employees; 69% of C-suite members said they were fine with it.

That’s where undocumented AI vendor dependencies accumulate, invisible to the security team until a forced migration makes them everyone’s problem.

“If you asked a typical enterprise to produce a dependency graph that includes second- and third-order AI calls, they’d be building it from scratch under pressure,” said Merritt Baer, CSO at Enkrypt AI and former Deputy CISO at AWS.
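The second- and third-order dependency mapping Baer describes is, at bottom, a graph-traversal problem: once vendor relationships are inventoried, finding every system that transitively reaches a given AI provider is a breadth-first search. The sketch below illustrates the idea; the vendor names and dependency structure are hypothetical, and a real inventory would come from procurement records, SBOMs, and network telemetry rather than a hand-written dictionary.

```python
from collections import deque

# Hypothetical vendor dependency map for illustration only:
# each key depends on the vendors it lists.
VENDOR_DEPS = {
    "crm_platform": ["support_bot_saas", "analytics_saas"],
    "support_bot_saas": ["anthropic"],
    "analytics_saas": ["llm_gateway_saas"],
    "llm_gateway_saas": ["anthropic"],
    "hr_suite": ["payroll_saas"],
    "payroll_saas": [],
}

def reaches(graph, start, target):
    """Breadth-first search: does `start` depend on `target`,
    directly or through any chain of intermediaries?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep == target:
                return True
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return False

# Every system exposed to a forced "anthropic" migration,
# including second- and third-order dependents.
exposed = sorted(v for v in VENDOR_DEPS if reaches(VENDOR_DEPS, v, "anthropic"))
```

In this toy graph, `crm_platform` surfaces as exposed even though its contract never mentions the AI provider: the dependency arrives two hops away, via a support bot and an analytics tool, which is exactly the visibility gap the surveys above describe.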