The increasing use of AI-enabled decision support systems in medicine, including autonomous AI systems, is, in our view, a positive development when accompanied by well-designed, balanced regulations suited to medical applications. This also requires appropriate digital monitoring technologies that ensure at least system-level (or “helicopter-level”) human oversight21,22,23,24,25,26,27,28,29. Achieving this goal demands strong, well-functioning regulatory agencies capable of developing and testing regulatory frameworks.
Weeks after the introduction of the Healthy Technology Act of 2025, the FDA was thrown into uncertainty, with staff encouraged to resign before February 6, followed by what has been described as “haphazard, poorly thought-out” job cuts across the agency30. These cuts fell disproportionately on probationary employees31 and, critically, on the regulation of AI-enabled devices. Given that AI regulation is still an emerging field, one requiring new, highly skilled, and technologically adept experts, the layoffs heavily affected staff working on AI-related regulatory initiatives at the FDA’s Center for Devices and Radiological Health32. Many were dismissed abruptly and reportedly received standard termination letters that disparaged the quality of their work32.
The true impact of these disruptions on the FDA’s ability to regulate existing and future AI-enabled medical technologies remains to be seen. However, in our view, the consequences are likely profound, potentially signaling a broader policy shift that undermines independent oversight of AI technologies.
The treatment of FDA staff has been deeply troubling, marked by dehumanizing and demoralizing actions. However, this article focuses on a far greater concern: the risk of a human-induced public health catastrophe, one that could surpass the scale of the opioid crisis in the US. That risk becomes real if autonomous medical AI systems for diagnosis and treatment, including prescription, are deployed at scale without proper oversight, guardrails, or monitoring22,25 (Fig. 1).
The US could “move fast and break things”, but in this case those “things” are human lives. Alternatively, it could move fast with intelligent safeguards, ensuring that rapid advancements in medical AI remain safe and effective. Our previous interactions with the FDA’s CDRH have shown that its mission for AI-enabled medical devices was committed to efficient, even fast, but fundamentally safe progress. The question now is: What will be the focus of a newly restructured FDA? The concern is not merely a change in the political landscape but the nature of the changes, which have introduced a high degree of uncertainty. The manner in which they have been implemented has brought considerable disruption, and the large-scale staff cuts, even if many staff have since been reinstated33, appear to lack a clear overarching strategy or a concrete alternative framework for medical AI oversight. As such, uncertainty persists. Even if stability is restored in the future, the interim regulatory gap creates a period of heightened risk, particularly if groundbreaking legislation proceeds without the oversight necessary to ensure safety and accountability.
Congress should reject any legislation introducing AI-enabled autonomous prescribing unless it is accompanied by empowered, well-resourced regulatory oversight. The promise of AI in medicine is immense, but without strong safeguards, so are the risks.