OpenAI's health AI policy wishlist draws fire: 'Trying to have their cake and eat it too'
OpenAI faces mounting criticism over its healthcare AI strategy, with one prominent health policy expert accusing the company of pursuing contradictory goals. David Blumenthal, former national coordinator for health IT and current professor at Harvard University, said OpenAI is attempting to position itself as a responsible actor while keeping markets open for its commercial products. "They're trying to have their cake and eat it too," Blumenthal told STAT, describing the company's approach as fundamentally conflicted.
The controversy centers on a policy blueprint OpenAI released alongside its ChatGPT for Clinicians launch last month. The document outlines proposals the company claims would unlock AI's potential to transform the broader healthcare system. Health policy experts, however, contend the recommendations disproportionately benefit OpenAI itself. The company's recent healthcare push began with ChatGPT Health for consumers in January, followed by ChatGPT for Healthcare for hospitals, and now the clinician-focused product. This sequence of launches suggests a deliberate strategy: establish footholds across different segments of the healthcare market before regulatory frameworks solidify.
The implications extend beyond one company's public relations strategy. As AI developers increasingly weigh in on health policy, critics warn that industry-crafted proposals risk shaping regulation to serve commercial interests rather than patient safety or system-wide benefit. OpenAI's approach illustrates a broader tension in health AI governance: companies seeking to influence the rules while maintaining access to the markets they stand to dominate. The outcome of this policy debate could determine how aggressively regulators constrain AI deployment in clinical settings, and whether established health institutions retain meaningful input into that process.