The Food and Drug Administration (FDA) has issued guidance providing recommendations on the information that should be included in a predetermined change control plan (PCCP) as part of a marketing submission for an AI-enabled device software function (AI-DSF).
“This final guidance is part of the FDA’s broader commitment to developing and applying innovative approaches to the regulation of device software functions, and it contains recommendations to manufacturers to support iterative improvement through modifications to an AI-enabled device software function while continuing to provide a reasonable assurance of the device’s safety and effectiveness,” the agency said in a post on LinkedIn.
“This guidance is intended to provide recommendations on the types of modifications that, at this time, FDA believes may generally be appropriate for inclusion in a PCCP for an AI-DSF. This guidance is not intended to provide a comprehensive list of the modifications FDA considers appropriate for inclusion in a PCCP for an AI-DSF,” the agency wrote.
The agency suggests manufacturers take advantage of the Q-Submission Program to obtain FDA feedback on a proposed PCCP for a specific device and submission type before submitting a marketing submission.
Although manufacturers are encouraged to discuss their plans through a pre-submission, the FDA does not approve a PCCP during the pre-submission process.
The agency states that when using an authorized PCCP to implement changes to the device, the manufacturer must update the device labeling as specified in the authorized PCCP.
“For AI-DSFs with an authorized PCCP, the labeling must explain that the device incorporates machine learning and has an authorized PCCP, so that users are aware that the device may require them to perform software updates and that these updates may change the performance, inputs, or use of the device,” the agency wrote.
According to the FDA, the section of a PCCP dedicated to describing modifications must identify the specific, planned changes to the AI-DSF that the manufacturer intends to implement. The description of modifications should include the specifications for the characteristics and performance of the device that, following the verification and validation activities described in the modification protocol, may be implemented without a new marketing submission.
“To achieve these goals, FDA recommends that the description of modifications include a list of the individual proposed device modifications covered in the PCCP, as well as the specific rationale for the change to each aspect of the AI-DSF that is intended to be modified. In some cases, a description of modifications may comprise multiple modifications,” the agency wrote.
The agency notes that guidance documents generally do not establish legally enforceable responsibilities; rather, they describe the agency’s current thinking on a topic and should be viewed as recommendations unless specific regulatory or statutory requirements are cited.
“The recommendations in this guidance apply to AI-enabled devices, including device constituent parts of combination products, reviewed through the 510(k), De Novo, and PMA pathways,” the agency wrote.
“As technology continues to advance all facets of healthcare, medical software incorporating AI, including the subset of AI known as machine learning (ML), has become an integral part of many medical devices.”
According to the FDA, the guidance aims to provide a forward-thinking approach that encourages the development of safe and effective AI-enabled devices.
THE LARGER TREND
In October, the FDA released its views on the regulation of AI in healthcare and biomedicine, saying oversight must be coordinated among all regulated industries, international organizations, and the U.S. government.
The FDA has stated that it regulates industries that distribute their products globally, and therefore U.S. regulatory standards must be consistent with international standards.
In August, the EU AI Act came into force, setting out regulations for the development, placing on the market, putting into service, and use of artificial intelligence in the European Union.
According to the law, high-risk AI use cases include:
- use of the technology in medical devices.
- use for biometric identification.
- use in determining access to services, such as healthcare.
- any form of automated processing of personal data, and emotion recognition for medical or safety purposes.
The Council said the law aims to “promote the adoption of trustworthy, human-centered artificial intelligence while ensuring a high level of protection of health, safety (and) fundamental rights…including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union and support innovation.”