It's not easy to get patients, providers, payers, industry and regulators to agree on any one aspect of healthcare delivery. But the FDA's Center for Devices and Radiological Health recently succeeded in getting representatives of all five groups to settle on a working definition of transparency.
Admittedly, for the exercise CDRH limited the term to a discrete context: as it applies to AI incorporated into medical devices. But that doesn't diminish the strength of the consensus definition.
Transparency of AI in these contexts, the groups agree, refers to the degree to which appropriate information about a device (including its intended use, development, performance and, when available, logic) is clearly communicated to stakeholders.
The meeting's proceedings are synopsized in an article published Jan. 26 in npj Digital Medicine. In the report, CDRH digital health advisor Aubrey Shick and FDA colleagues summarize the input of the main participant groups.
Patients. Eager to consume as much as they can digest about AI's role in their care, patients worry that their doctors or nurses may lack IT mastery or algorithmic expertise, Shick and co-authors report. More:
Other significant transparency considerations for patients include data security and ownership, the device's cost relative to the current standard of care, insurance coverage of the device, and the need for high-speed internet access or other technical infrastructure.
Providers. Clinicians want to be able to trust AI-equipped devices "at face value." By that, they mean these devices should be readily usable without requiring "deep dives" to determine whether the AI will work as advertised for their particular patient populations, Shick and colleagues explain. More:
Healthcare providers see an opportunity for greater transparency in the delivery of this information, not only in the data made available and the type of media through which it's communicated but also in who shares the information: device manufacturers, government agencies, professional societies and so on.
Payers. A medical device's algorithmic prowess may be exemplary in testing and validation settings. But what should the ramifications be for payment when the AI's performance varies in clinical use? The authors expound:
Because AI/ML devices evolve, payers are concerned about coverage of "unlocked" or learning algorithms. This stakeholder segment emphasizes the importance of using diverse datasets and of being able to monitor devices' real-world performance, the goal being to ensure the devices work as expected and improve outcomes for patients.
Vendors. Industry members want a risk-based approach to transparency. They would like to maintain the current regulatory framework for AI/ML devices while mitigating the "potential proprietary risk that can occur when sharing information in an effort to be transparent," Shick et al. write. More:
Vendors believe their existing relationships with stakeholders are sufficient for communicating information about AI/ML devices, noting that these communications are augmented by device manuals and user feedback processes.
In addition, industry members suggest, the FDA is a trusted source of information for patients on manufacturers' AI/ML devices. Vendors recommend that manufacturers work closely with the FDA to increase transparent communications about these devices.
Shick and co-authors acknowledge that much of the device information available on the CDRH website is developed by, or intended for, manufacturers.
"Using a complementary approach targeted at non-manufacturers to share information (e.g., graphics, plain-language summaries) could make the information more accessible to certain (other) stakeholders," they write.
Full paper here.