The FDA’s newly formed Digital Health Advisory Committee (DHAC) only held its first meeting last week, but it has already put its thinking into writing.
And it turns out its goal is to make sure the agency closely monitors medical devices equipped with GenAI throughout the life of those products.
This is clear from reading the 30-page document committee members received before their inaugural meeting in November. Here are excerpts, organized in response to some questions AIin.Healthcare would have liked to ask if we had been there.
Why is a Total Product Lifecycle (TPLC) strategy essential for monitoring GenAI-equipped medical devices?
The FDA’s long-standing commitment to a TPLC approach has become increasingly relevant for medical devices incorporating technologies intended to iterate faster and more frequently than ever before over the life of a device.
“A TPLC approach will likely remain important for managing future safe and effective GenAI-enabled medical devices.”
How does the FDA’s TPLC approach relate to the AI lifecycle model?
In general, considering the FDA’s AI lifecycle for GenAI-enabled devices (and AI-enabled devices in general) can be an important way for manufacturers to approach the management of their devices throughout the TPLC.
“In addition, the AI lifecycle can be used as a useful model to identify areas where new techniques, approaches or standards may be needed to ensure adequate management of these new technologies across the TPLC.”
Let’s go back to basics for a minute. How does the FDA define “GenAI”?
GenAI refers to the class of AI models that mimic the structure and characteristics of input data to generate derived synthetic content, which can include images, videos, audio, text and other digital content.
“GenAI models can analyze input data and produce contextually appropriate results that may not have been explicitly visible in their training data.”
How is GenAI similar to (and different from) traditional AI/machine learning?
Like other AI/ML models, GenAI models are frequently developed on datasets so large that human developers typically cannot learn everything about the contents of the dataset during development.
“Unlike datasets used to develop other AI/ML models, datasets for GenAI model development may be intentionally broad and not initially tailored to a specific task.”
What makes GenAI particularly difficult to regulate?
At times, GenAI’s ability to tackle diverse, novel and complex tasks can contribute to uncertainty about the limits of a device’s outputs.
“When not adequately controlled, this uncertainty can result in difficulties in confirming the limitations of a device’s intended use, which can pose challenges to FDA regulation of GenAI-enabled devices.”
Why is it important that many GenAI models are foundation models?
Foundation models are trained on a wide range of data and can be applied broadly across many AI applications to undertake a myriad of tasks.
“If a manufacturer uses a foundation model or other GenAI tool as part of a product whose specific use meets the definition of a medical device, the product that leverages the foundation model may be subject to FDA regulatory oversight of devices.”
What is the best way to avoid rejection of a GenAI product by the FDA?
It may sometimes be helpful for manufacturers and developers to consider that a GenAI implementation of a product may not be beneficial to public health. This may be the case when the implementation could produce erroneous or false content.
“It is useful for manufacturers and developers to determine when GenAI may or may not be the best technology for a specific intended use.”
In the future, the FDA notes, the performance evaluation methodologies needed for proper monitoring “will be governed by the specific intended use and design of the GenAI-enabled device, some of which may require the formulation of new performance measures for certain intended uses.”
“As with all devices, the body of evidence, which may include pre- and post-market evidence, can support reasonable assurance of the safety and effectiveness of these devices across the TPLC.”
Read the full report.
Significant developments in recent days.
- A major American healthcare company has ambitious global plans. And AI is at the forefront. At an investor event last week, GE HealthCare announced plans to integrate AI into every medical device it makes over the next eight years. The $18 billion Chicago-headquartered company said the vision is part of its D3 strategy, which merges a digital framework with diverse products so that, together, the components focus smart medical devices on specific disease conditions. At the meeting, held at Nasdaq in New York, Chairman and CEO Peter Arduini reminded attendees that the company spun off from its historic parent company, General Electric, in early 2023. “We are confident in our progress since (the) split and our path to accelerate growth through an exciting innovation pipeline,” Arduini said. He highlighted the company’s stated goal of helping “create a world where healthcare has no limits.” Full coverage here.
- AI developers have a sophisticated new option for creating healthcare-specific applications. The offering comes courtesy of Google Research, which this week introduced a suite of open foundation models. Calling the suite Health AI Developer Foundations, or “HAI-DEF,” the company says its health AI team will initially focus on supporting imaging-based applications for radiology, dermatology and pathology. In providing such resources, two software engineers write in a blog post, “We aim to democratize the development of AI for healthcare, enabling developers to create innovative solutions that can improve patient care.”
- FDA Commissioner Robert Califf recently suggested the agency may need to double its staff. And that’s just to oversee AI. The eyebrow-raising opinion has at least one staunch supporter. “In my emergency department, we use AI to prioritize patients based on likelihood of admission,” writes Yale emergency physician and professor Cristiana Baloescu, MD, in MedPage Today. “Even though this eases patient flow, it can miss complex cases.” For now, she adds, medical staff are “maintaining significant oversight, meticulously double-checking AI-generated recommendations.” This approach may not be sustainable, she notes, given the proliferation of AI-equipped medical devices and their long service lives. How to finance a major expansion of the FDA workforce? Start with Congressional budget allocations, Dr. Baloescu suggests, and add a share of fees on AI-equipped devices as well as contributions from AI companies. Hear her out.
- Minerva was a tough act to follow, but this should do it. The Mount Sinai Health System in New York is opening a sparkling new research center focused on AI in healthcare. Housed in a 12-story, 65,000-square-foot facility near Central Park, the Hamilton and Amabel James Center for Artificial Intelligence and Human Health will house approximately 40 principal investigators, 250 graduate students, and a number of postdoctoral fellows, computer scientists and support staff. In announcing the opening, the institution suggests the center shares a lineage with “Minerva.” That’s the name Mount Sinai gave its first-generation supercomputer in 2013. Dennis Charney, MD, dean of the Icahn School of Medicine, says the new center “will produce transformative discoveries in human health through the integration of research and data, fostering collaboration between multiple programs under one roof.” Announcement.
- Medical scribes are really nothing new. It’s just that, before ambient GenAI, scribes were humans transcribing doctors’ recordings with their fingers on keyboards. That option can be easy to forget these days, given all the competing AI dictation products vying for attention. And yet, it turns out that some doctors still prefer the old way, at least sometimes. Vandana Ahluwalia, MD, a rheumatologist in Brampton, Ontario, is one of them. She tells the Canadian Broadcasting Corporation she appreciates how her talented employee “highlights key points from a patient’s previous visits, something current AI tools cannot do.” She also likes that her scribe handles other administrative tasks in the office. AI can’t match that. Yet.
- And speaking of our northern neighbors. The top five healthcare AI startups in Canada are Acto, Clarius, AbCellera, Benchsci and BlueDot. More about this here.
- How likely is it that AI won’t cause an earth-shattering calamity? Not very. That’s the belief of Siddhartha Mukherjee, MD, a cancer researcher at Columbia University and author of the Pulitzer Prize-winning book The Emperor of All Maladies: A Biography of Cancer. “I think it’s almost inevitable that, at least in my lifetime, there will be some version of an AI Fukushima,” he tells The Guardian. Fukushima, of course, was the catastrophic nuclear accident caused by the Japanese tsunami of 2011. And with that, we have been warned.
- Elon Musk will not be the Trump administration’s AI czar. But someone will. That person will be responsible for focusing public and private resources to keep America at the forefront of AI, Axios reports, citing sources within the Trump transition team. Mike Allen, co-founder of Axios, adds that the AI and crypto roles could be combined under a single “emerging technology” czar.