Anonymous Intelligence Signal

Anthropic's Claude AI Deployed in U.S. Covert Campaign Against Iran, Internal Feud Revealed

The Network | Unverified | 2026-03-06 12:13:24 | Source: Unknown

Anthropic's AI assistant Claude has been identified as a central tool in a U.S. government-led information campaign targeting Iran. The operation, described as a psychological and influence campaign, reportedly uses Claude to generate and disseminate content aimed at shaping Iranian public opinion and undermining the regime's stability. The deployment of a commercial AI tool for state-level information warfare would mark a significant escalation in the weaponization of generative AI.

Sources indicate the campaign is being run amid a "bitter feud" within Anthropic's leadership, and between the company and U.S. national security agencies. The internal conflict reportedly centers on ethical concerns, the potential for mission creep, and the long-term geopolitical consequences of allowing a general-purpose AI to be used for covert offensive operations. Significant tension is said to exist between factions advocating strict safety guardrails and those prioritizing national security partnerships and contracts.

The use of Claude suggests a shift from traditional cyber tools toward AI-generated, personalized, and adaptive content for mass influence. The campaign likely involves generating persuasive text, fake social media personas, and tailored narratives in Farsi. This raises profound questions about corporate complicity, the erosion of AI safety principles for geopolitical gain, and the precedent it sets for other AI firms to be co-opted by intelligence services. The feud highlights the fragile balance among AI development, commercial interests, and state power in an era of algorithmic conflict.