We have been writing about software as a medical device (SaMD) for years, following the Food and Drug Administration's (FDA's) efforts to keep pace with the rapid development of digital technology, such as the launch of the Digital Health Center of Excellence, the implementation of predetermined change control plans, and the issuance of various digital health guidances on device software functions, clinical decision support software, cybersecurity, and other subjects. In anticipation of the FDA's October 2021 workshop on artificial intelligence/machine learning (AI/ML)-enabled medical devices, we published a brief history of the agency's regulatory oversight of software through the traditional medical device framework established in the 1970s, in which we highlighted many of the challenges associated with that approach. Now, with the rise of artificial intelligence and machine learning and the proliferation of AI/ML-enabled software throughout the health care industry, the FDA faces enormous challenges in using an aging and ill-fitting regulatory framework to maintain safety and quality standards for such software devices. It is becoming increasingly clear that innovation in AI/ML and digital health technology is progressing rapidly, as FDA Commissioner Rob Califf has stressed in many recent public appearances, and that the traditional device framework is quickly becoming impracticable for such technologies.
What are the challenges with the current framework for medical devices?
In general, the FDA has applied the same regulatory standards, in particular device classification, authorization pathways, marketing submissions, and quality requirements, to both hardware and software devices. The agency has little choice in this area, because the Federal Food, Drug, and Cosmetic Act (FDCA) does not provide separate regulatory pathways or statutory requirements for SaMD. Consequently, the FDA must adapt the existing system as best it can within the statutory authority granted by the U.S. Congress in order to maintain a semblance of regulatory oversight over modern software devices.
The absence of a separate regulatory framework, risk classification system, or premarket review method for SaMD has created many challenges for the FDA, particularly with regard to SaMD based on AI/ML algorithms that can change in real time as they process data. Before 2020, the FDA fit AI/ML algorithms into the traditional device process by requiring manufacturers to lock their algorithms after training them on a curated data set and before the product was authorized and marketed. Locking the algorithms prevented their self-modification in the field, because existing regulations require the manufacturer to specifically implement any modifications and to assess whether those modifications require separate FDA clearance or approval. Realizing that requiring locked algorithms for SaMD was not a permanent solution, the FDA began authorizing AI/ML-enabled algorithms with predetermined change control plans (PCCPs) in 2021 and, in April 2023, published a draft guidance on developing such plans and including them with marketing submissions. However, a PCCP addresses only the specific post-market problem of algorithm self-modification; there is still no regulatory framework to minimize the risk of bias being incorporated into an algorithm at the design and training stages, to test AI/ML algorithm performance before marketing, or to monitor post-authorization performance and patient outcomes.
Why can’t the FDA use a different regulatory pathway for AI/ML-enabled SaMD?
In short, the FDCA establishes only three pathways for a medical device (other than Class I or other exempt devices) to obtain marketing authorization: premarket notification (section 510(k)), premarket approval (section 515), and De Novo classification (section 513(f)(2)). The FDCA does not, therefore, grant the FDA authority to develop and implement new authorization pathways for any type of medical product. The agency attempted to streamline its regulatory review of SaMD products through the Software Precertification Pilot Program, which relied on demonstrations of a robust quality culture and commitments to organizational excellence. That pilot ended in September 2022, and the agency acknowledged that legislation would be necessary to implement a new paradigm for regulatory oversight of medical device manufacturers and their products.
What could the FDA do without legislation authorizing a new regulatory pathway?
It is very unlikely that Congress will adopt legislation granting the FDA new authority to implement additional regulatory pathways or to substantially modify the requirements of the existing ones. Even without such statutory changes, however, the FDA has some flexibility to establish new requirements within the traditional device framework. For example, the agency has nearly total discretion over the types of information a sponsor must include in a regulatory submission for marketing authorization. In addition, the FDA has general authority to impose post-market requirements as conditions of marketing authorization, including requirements to conduct additional clinical trials or to establish patient registries to monitor device performance and outcomes. The FDA could use these tools to increase the pre- and post-market requirements on manufacturers of AI/ML-enabled SaMD, with the ultimate goal of helping to ensure adequate quality controls and patient safety.
For example, the FDA could require manufacturers to conduct in-depth premarket testing of AI/ML-enabled SaMD against models that use real-world data and demographic information. Although the safety and effectiveness of a Class III device must be demonstrated in a pivotal clinical trial before FDA approval, Class I and non-exempt Class II devices often require only non-clinical performance testing to obtain marketing authorization. Even when a clinical trial is required, trial populations often lack diversity and may not adequately test the performance of an AI/ML algorithm or identify latent inherent biases. To supplement current testing obligations, the FDA could require manufacturers to conduct in silico testing against real-world models using de-identified real-world data to validate the algorithm's performance and help identify performance problems or weaknesses. The agency could simply make such testing a requirement for any type of marketing authorization submission for an AI/ML-enabled SaMD product.
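To make the idea concrete, the kind of in silico check described above might, at its simplest, compare an algorithm's accuracy across demographic subgroups to surface latent bias. The following Python sketch is purely illustrative: the subgroup labels, records, and disparity threshold are hypothetical, and a real validation would run against large sets of de-identified real-world data rather than a handful of tuples.

```python
# Illustrative sketch only: a minimal subgroup performance check of the kind
# an in silico test plan might automate. All data and thresholds are
# hypothetical; a real validation would use de-identified real-world records.
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute per-subgroup accuracy from (subgroup, prediction, truth) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, prediction, truth in records:
        total[subgroup] += 1
        if prediction == truth:
            correct[subgroup] += 1
    return {g: correct[g] / total[g] for g in total}

def flag_disparities(accuracies, max_gap=0.10):
    """Flag subgroups trailing the best-performing subgroup by more than
    max_gap, a possible sign of latent bias in training data."""
    best = max(accuracies.values())
    return sorted(g for g, acc in accuracies.items() if best - acc > max_gap)

# Hypothetical results from running a locked algorithm on held-out data.
records = [
    ("age_18_40", 1, 1), ("age_18_40", 0, 0), ("age_18_40", 1, 1),
    ("age_65_plus", 1, 0), ("age_65_plus", 0, 0), ("age_65_plus", 1, 0),
]
accuracies = subgroup_accuracy(records)
print(flag_disparities(accuracies))  # prints ['age_65_plus']
```

In this toy run, the older cohort's accuracy (1 of 3) trails the younger cohort's (3 of 3) by far more than the 10-point gap, so it is flagged for investigation before any marketing submission.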
Another incremental regulatory control the FDA could impose is one or more human factors studies to determine how human operators use an AI/ML-enabled SaMD under the pressures of real clinical conditions. The FDA has often expressed concern that health care professionals may simply trust the clinical output of a SaMD algorithm that operates as a non-transparent black box, without seeking corroboration from other sources of information or from alternative diagnostic or treatment methods, even when the SaMD output may be inaccurate. Requiring device manufacturers to conduct human factors testing with actual health care professionals in a simulated clinical environment would help identify these problems and allow manufacturers to address them through special controls, such as clinician training or specified disclosure requirements relating to the SaMD's output.
As a final example, the FDA could implement post-market performance monitoring and data reporting requirements for AI/ML-enabled SaMD, which would help ensure that both the manufacturer and the agency are aware of, and can quickly address, emerging problems with the product. The complex and self-modifying nature of AI/ML-enabled SaMD, even those operating under a PCCP, means that constant monitoring of field performance is necessary to identify risks and ensure safety, particularly for diagnostic and therapeutic SaMD that involve higher risks to users or patients. The FDA is also working to develop the National Evaluation System for health Technology (NEST), which will eventually enable monitoring of device performance and patient outcomes for both traditional devices and SaMD, but implementation of such a system is probably still years away.
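In practice, the constant field monitoring described above often takes the form of an automated check that a rolling performance metric has not drifted below an acceptable floor. The sketch below is a hypothetical illustration, not an FDA-specified method: the window size, accuracy floor, and alerting logic are all assumed design choices a manufacturer might make.

```python
# Illustrative sketch only: a rolling post-market check a manufacturer might
# automate to flag performance drift. The window size and alert floor are
# hypothetical choices, not FDA-specified values.
from collections import deque

class PerformanceMonitor:
    """Track a rolling window of outcomes and flag accuracy drops."""
    def __init__(self, window=100, floor=0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.floor = floor

    def record(self, prediction, truth):
        self.outcomes.append(1 if prediction == truth else 0)

    def alert(self):
        """Return True once the rolling accuracy falls below the floor."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough field data yet to judge
        return sum(self.outcomes) / len(self.outcomes) < self.floor

monitor = PerformanceMonitor(window=10, floor=0.8)
for prediction, truth in [(1, 1)] * 7 + [(1, 0)] * 3:  # 7 of 10 correct
    monitor.record(prediction, truth)
print(monitor.alert())  # prints True: rolling accuracy 0.7 is below 0.8
```

A real system would of course tie such an alert to a documented corrective-action process and, where required, to reporting obligations, which is precisely the kind of post-market infrastructure the FDA could mandate as a condition of authorization.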
Conclusion
The push to expand the use of AI/ML software capable of self-modification during the delivery of health care has already begun, and it is increasingly clear that the current device regulatory framework is not well suited to ensuring that the risks involved in developing, validating, and using such software are effectively identified and mitigated. This is not the FDA's fault, since the FDCA limits what the agency can do to evaluate and authorize SaMD products. In the past, the FDA has developed creative solutions within its existing statutory device authorities to enable review of the risks associated with standalone software devices (for example, evolving cybersecurity and PCCP requirements). We expect the FDA can adapt further to this rapidly developing field by implementing additional testing, submission, and post-market requirements for AI/ML-enabled SaMD, like those we suggest above. However, we hope that Congress will eventually act to give the FDA additional authority to classify, authorize, and regulate AI/ML-enabled devices in a way that fits the technology, enables and incentivizes innovation, and improves patient safety.