

The most safety-conscious AI company in tech just got kicked out of federal healthcare systems. Not for a security flaw, but for refusing to let the Pentagon use its tools without guardrails. The fallout for drug development could set the FDA back 18 months.
Imagine getting fired for being too responsible. That's basically what just happened to Anthropic's Claude.
The U.S. Department of Health and Human Services has ordered employees to stop using Claude, Anthropic's AI model, across all federal healthcare systems. Access has been disabled. Staff are being told to switch to ChatGPT or Google's Gemini instead. The phase-out is part of a broader federal crackdown that began in late February 2026, when the Trump administration designated Anthropic a "supply chain risk."
The twist? Anthropic is widely considered the most safety-conscious AI company in the industry. The designation, historically reserved for foreign adversaries like Huawei, has never before been applied to an American company. So what went wrong?
The answer has nothing to do with safety flaws and everything to do with a contract dispute.
In July 2025, Anthropic landed a landmark $200 million Department of Defense contract. Claude was approved for use on classified government networks. It was a massive win, but it came with two conditions Anthropic insisted on: no mass surveillance of American citizens, and no fully autonomous weapons without human oversight.
Those were the red lines. The Pentagon eventually wanted them erased.
When the DoD demanded Anthropic agree to "all lawful use" of its technology (effectively removing the guardrails), CEO Dario Amodei refused. The response was swift and punishing. On February 27, Trump posted on Truth Social directing all federal agencies to stop using Anthropic's tools, with a six-month phase-out window. Days later, Defense Secretary Pete Hegseth slapped Anthropic with the supply chain risk label.
Think of it like a restaurant refusing to serve a dish without allergen warnings, and the health department shutting them down for it. Anthropic didn't fail a safety test. It passed one the government didn't want it to take.
For biotech and drug development, this isn't just a political story. It's a workflow catastrophe.



At the FDA, Claude had become a critical tool for scientific work. One FDA employee, speaking anonymously, said the ban "will basically wipe out 18 months of efforts."
The FDA's own AI platform, called Elsa, was originally built on Claude. It's now being forced to transition to Gemini as its primary model, though both remain temporarily available during the switch.
The problem? Not all AI models are interchangeable. That same FDA employee dismissed Gemini as fit only for "chatbot" tasks like mimicking social media posts, not for "in-depth analysis of complex scientific study reports." It's like replacing a surgical scalpel with a butter knife and calling it an upgrade.
NIH, CDC, and CMS all fall under HHS, which means they're subject to the same directive. The full extent of disruption across those agencies isn't yet clear, but every sub-agency that touched Claude is now scrambling for alternatives.
The supply chain risk designation isn't just a label; it carries real legal teeth. The DoD invoked 10 U.S.C. § 3252, a statute designed to exclude risky entities from defense acquisitions, along with the Federal Acquisition Supply Chain Security Act of 2018. Under these frameworks, contractors must certify they aren't using Anthropic products and conduct compliance inventories.
The designation forces every government contractor working with defense systems to audit their AI tools. If you're a biopharma company running federally funded research that relies on Claude, you now have a compliance headache that didn't exist two months ago.
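What does that audit actually involve? The statutes don't prescribe a method, but the first pass is usually mechanical: find every place the Anthropic SDK, API, or credentials appear in your code and configuration. Below is a minimal sketch in Python, assuming a crude keyword scan over source and dependency files is an acceptable first inventory; the file types and keywords are illustrative, not an official compliance checklist.

```python
# inventory_anthropic.py -- first-pass scan for Anthropic/Claude usage in a repo.
# Heuristic keyword matching only; illustrative, not an official compliance method.
import re
from pathlib import Path

# "anthropic" also catches api.anthropic.com and ANTHROPIC_API_KEY (case-insensitive).
PATTERN = re.compile(r"anthropic|claude", re.IGNORECASE)

# File types where AI dependencies typically hide (assumed scope).
EXTENSIONS = {".py", ".ts", ".js", ".toml", ".txt", ".yaml", ".yml", ".cfg"}

def scan(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, matching line) for every hit under root."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix not in EXTENSIONS and path.name != ".env":
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for file, lineno, line in scan("."):
        print(f"{file}:{lineno}: {line}")
```

A real inventory would also have to cover hosted platforms, vendor subcontracts, and anything embedded in procured software, which is why the certification requirement is a headache rather than an afternoon's scripting.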
Anthropic filed lawsuits on March 9 challenging the designation as "unprecedented and unlawful," calling it retaliation. A court injunction has temporarily blocked some of the broader fallout, but the HHS phase-out is already underway.
This story is bigger than one AI model. It's about what happens when the government punishes a tech company for negotiating contract terms.
Major trade associations including TechNet, the Software and Information Industry Association, the Information Technology Industry Council, and the Computer & Communications Industry Association have all filed legal briefs supporting Anthropic, citing a "ripple of uncertainty" across the tech sector. Even scientists from OpenAI, filing an amicus brief in their personal capacities, argued that the supply chain risk designation should never have been applied to Anthropic.
The implicit message to every AI company is uncomfortable: push back on government demands, and you might get blacklisted. Accept whatever terms are offered, or risk losing access to the entire federal market. For an industry that talks constantly about responsible AI development, this is a stress test with real consequences.
The pharmaceutical industry has been pouring money into AI tools for drug discovery, clinical trial design, and regulatory submissions. The FDA itself has documented a significant increase in drug applications that incorporate AI components. Federal programs like NIH's Bridge2AI initiative and the White House's "America's AI Action Plan" have been actively encouraging this adoption.
Now, the ground rules are shifting. Companies that built workflows around Claude (whether for internal research or federally funded projects) face a choice: migrate to approved alternatives that may not perform as well, or maintain separate AI stacks for government and private work. Neither option is cheap or simple.
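The "separate stacks" option usually gets built as a thin abstraction layer, so the same workflow can route to whichever model a given contract permits. Here's a minimal sketch, assuming the official `anthropic` and `google-generativeai` Python SDKs; the model IDs and the `provider_for` routing rule are placeholders, not recommendations.

```python
# Minimal provider abstraction: route each job to whichever model the contract allows.
# Assumes the official `anthropic` and `google-generativeai` SDKs are installed;
# model IDs and the routing rule below are illustrative placeholders.
import os
import anthropic
import google.generativeai as genai

class ClaudeProvider:
    """Anthropic backend, for work where Claude is still permitted."""
    def __init__(self, model: str = "claude-sonnet-4-20250514"):  # placeholder ID
        self.client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
        self.model = model

    def complete(self, prompt: str) -> str:
        response = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.content[0].text

class GeminiProvider:
    """Google backend, for work under the federal directive."""
    def __init__(self, model: str = "gemini-1.5-pro"):  # placeholder ID
        genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
        self.model = genai.GenerativeModel(model)

    def complete(self, prompt: str) -> str:
        return self.model.generate_content(prompt).text

def provider_for(contract_type: str):
    """Pick a backend per engagement: federal work gets the approved model."""
    return GeminiProvider() if contract_type == "federal" else ClaudeProvider()

# Same call site, different stack depending on who is paying:
# print(provider_for("federal").complete("Summarize this study report."))
```

The catch lives above this layer: swapping backends is the easy part, but prompts and validation tuned to one model's behavior rarely transfer cleanly to another's, which is exactly the "not interchangeable" problem the FDA employee was describing.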
The six-month phase-out clock is ticking. Contractors are inventorying their AI usage. And the broader question looms: if the most safety-focused AI company in the world can be labeled a national security threat, what does "approved" even mean anymore?
For biopharma teams that depend on federal grants and regulatory pathways, the answer matters more than most people realize.