Examples of ANI do exist today, but there are major issues that A.I. developers and the medical community have to tackle before A.I. can become mainstream in medicine.
Explainability
Medical professionals tend to make decisions based on data obtained with technologies they understand, or at least understand well enough to trust. In the case of A.I., that might not be possible: the output of a deep neural network is determined by millions of learned parameters (the connection weights within the network), which makes its decision process unintuitive to follow. Even if we visualize the sensitivity of different parts of a network and browse thousands of such noisy images, we still will not find easy-to-grasp learned rules; reasoning is not a by-product of the algorithm. Thus, explainable A.I. will be crucial for providing enough insight into A.I.-based algorithms to gain trust in them.
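To make the problem concrete, the sketch below computes a gradient-based saliency map, one of the most common explainability techniques. It shows which input pixels most influenced a prediction, but, as noted above, it does not reveal human-readable rules. The model and image here are toy stand-ins invented for illustration, not taken from any real system.

```python
import torch
import torch.nn as nn

# A toy classifier standing in for a trained medical imaging model
# (hypothetical; a real system would use a trained CNN).
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
model.eval()

# A single 64x64 grayscale "scan" (random noise here, for illustration).
image = torch.rand(1, 1, 64, 64, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
logits = model(image)
predicted = logits.argmax(dim=1).item()
logits[0, predicted].backward()

# The per-pixel gradient magnitude is a crude saliency map: it marks
# which pixels mattered most, not *why* the model decided as it did.
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64])
```

Even with such a map in hand, a clinician is left with a heat-map of pixel importances rather than a clinical rationale, which is exactly why explainability remains an open problem.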
Augmented intelligence
This is a term often promoted by organizations such as the American Medical Association. It focuses on A.I.’s assistive role in healthcare, emphasizing that A.I. is designed to enhance human intelligence rather than replace it. It also refers to the value A.I. can provide by combining the unique capabilities of human experts with those of A.I. to deliver better care for patients. A related term is “human-centered A.I.”, which explores the need for A.I.-based systems that learn from and collaborate with humans in a deep and meaningful way.
Quality and quantity of data
A.I. feeds on data: the more data it gets access to, and the better that data’s quality, the more it can excel at its tasks. Advanced algorithms need annotated data to learn the tasks they were designed for. Medical professionals often act as data annotators, a time-consuming and monotonous job, and some medical algorithms can only improve through large amounts of annotated data. The dedicated contribution of data annotators is therefore of crucial importance for implementing A.I. in the healthcare setting; in this sense, data annotators are the unsung heroes of the medical A.I. revolution [27].
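As a minimal illustration (the records, labels, and names below are invented, not from any real dataset), supervised training consumes exactly the kind of (input, label) pairs that annotators produce:

```python
# Hypothetical annotation records, as a clinician-annotator might
# produce them: one diagnostic label per chest X-ray image.
annotations = [
    {"image_id": "xr_0001", "label": "pneumonia", "annotator": "dr_a"},
    {"image_id": "xr_0002", "label": "normal", "annotator": "dr_b"},
    {"image_id": "xr_0003", "label": "pneumonia", "annotator": "dr_a"},
]

# A supervised learner trains on (image, label) pairs; without the
# labels above, there is no signal to learn the task from.
labels_by_image = {a["image_id"]: a["label"] for a in annotations}
print(labels_by_image["xr_0001"])  # pneumonia
```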
Privacy issues
Medical A.I. needs access to medical records, data from health sensors and apps, medical algorithms, and whatever other sources of information it can learn from. The data can come from healthcare institutions or from individuals. Even if institutions anonymize the data, it has been shown in many cases that individual profiles can be traced back to specific people.
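The classic demonstration of this is a linkage attack: joining an “anonymized” release with a public register on quasi-identifiers such as ZIP code, birth date, and sex. The sketch below uses invented records to show the principle.

```python
# "Anonymized" hospital records: names removed, quasi-identifiers kept.
# All records here are invented for illustration.
anonymized = [
    {"zip": "02138", "birth": "1945-07-01", "sex": "F",
     "diagnosis": "hypertension"},
]

# A public register (e.g., a voter roll) containing the same
# quasi-identifiers alongside names.
public_register = [
    {"name": "Jane Doe", "zip": "02138", "birth": "1945-07-01", "sex": "F"},
]

# Joining the two sources on (zip, birth date, sex) re-identifies
# the patient behind the "anonymous" record.
for record in anonymized:
    for person in public_register:
        if all(record[k] == person[k] for k in ("zip", "birth", "sex")):
            print(person["name"], "->", record["diagnosis"])
```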
Legal issues and liability
What if a deep learning algorithm misses a diagnosis, the doctor accepts the judgment, and the patient suffers the consequences? What if an autonomous surgical robot injures a patient during a procedure? There is an ongoing debate about who will be held liable in the future when robots and A.I., acting autonomously, harm patients. The current consensus is that the professional is open to liability if he or she used the tool outside the scope of its regulatory approval, misused it, applied it despite significant professional doubts about the validity of the evidence surrounding it, or used it knowing that the toolmaker had obfuscated negative facts. In all other cases, liability falls back on the creators and the companies behind the tools.
Trust
We will need a lot of time to trust an autonomous car: to see how it reacts in situations we are familiar with, or whether it makes decisions similar to ours in an emergency. It will take even more time for patients, and for medical professionals too, to trust A.I. with making medical diagnoses, supporting medical decisions, or designing new drugs. This should be taken into consideration when we decide to adopt the technology in the healthcare setting.
Biased A.I.
A study concluded that commercial facial-recognition systems were 11–19% more accurate on lighter-skinned individuals and produced especially inaccurate results when identifying women of color. In another example, A.I. was deployed in the United States’ criminal justice system to predict recidivism; the algorithm assigned disproportionately high probabilities of future crime to Black people, no matter how minor their initial offenses were. Nor is it only racial prejudice: A.I. algorithms often discriminate against women, minorities, other cultures, or ideologies as well. For example, Amazon’s HR department had to stop using an A.I.-based machine learning tool the company had developed to sort out the best job applicants, as it turned out that the algorithm favored men. Since such algorithms learn from the data they are fed, A.I. programmers must be aware of the issue of bias and actively fight against it by tailoring the algorithms and their training data accordingly [28].
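One concrete way to fight such bias is to audit a model’s performance per subgroup before deployment. The sketch below (with invented predictions and group labels) computes subgroup accuracies; a large gap between groups is exactly the kind of red flag the studies above reported.

```python
# Invented model outputs, ground truth, and demographic group labels.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
labels      = [1, 0, 0, 1, 1, 1, 0, 1]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

def subgroup_accuracy(preds, truth, groups, group):
    """Accuracy of the model restricted to one demographic subgroup."""
    pairs = [(p, y) for p, y, g in zip(preds, truth, groups) if g == group]
    return sum(p == y for p, y in pairs) / len(pairs)

for g in ("a", "b"):
    print(g, subgroup_accuracy(predictions, labels, groups, g))
# Prints 0.75 for group "a" and 0.5 for group "b": a disparity that
# should trigger investigation before the model reaches patients.
```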
Patient design
When designing algorithms for medical purposes, patients should be involved at the highest level of decision-making to make sure their needs are met and their issues and recommendations are built into the technology. An example of its importance: a start-up in Canada developed an algorithm that could detect signs of Alzheimer’s disease in patients’ phone calls, but the algorithm produced different results for patients with a French accent. By involving patients from the early stages of development, such issues could be avoided.
While there are promising ongoing efforts to solve each of these issues, it remains an open question whether the algorithms that become a common part of medical practice will be able to address them all [29].