AI and Human Accountability in Healthcare

AI is increasingly being used in healthcare to analyze data, reduce medical errors, and even provide virtual assistance to scientists and healthcare professionals. In many tasks its accuracy surpasses that of human practitioners, and its output can therefore go unquestioned. But what happens when the technology malfunctions? Who is to blame, and how can these errors be prevented?

“It’s not clear who would be responsible because the details of why an error or accident happens matters. That event could happen anywhere along the value chain,” said Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University.

Blame for an AI error can often be apportioned according to the stage at which the malfunction occurs. If the system fails at a technical level, for instance, the cause may lie with the software or hardware developers, while misuse in a hospital environment would place responsibility on whoever oversees training procedures.

One of the largest sources of potential error, however, is the possibility of a learning system being trained on biased data, which skews the results of its analysis. In that case, the line of responsibility becomes blurred.

For instance, if the data used to train a system over-represents white patients, it may misdiagnose patients of color. A doctor may take the system's prediction at face value, even though its accuracy is skewed by the unrepresentative training data. And because the system cannot explain how it reaches its conclusions, it is often hard to determine where along the pipeline something went wrong when an error does occur.
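To make this failure mode concrete, here is a minimal, hypothetical sketch in Python, using scikit-learn on synthetic data. The groups, features, and distributions are all invented for illustration; the point is only that a model trained on data dominated by one group can score noticeably worse on an under-represented group whose data looks slightly different:

```python
# Hypothetical sketch: a model trained mostly on one group ("A") performs
# worse on an under-represented group ("B") whose data differs slightly.
# All data below is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic "measurements" per patient; the relationship between
    # measurements and diagnosis differs between groups via `shift`.
    X = rng.normal(size=(n, 2)) + shift
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)
         > shift.sum()).astype(int)
    return X, y

shift_a = np.array([0.0, 0.0])
shift_b = np.array([1.5, -1.0])

# Training set: 95% group A, 5% group B.
Xa, ya = make_group(1900, shift_a)
Xb, yb = make_group(100, shift_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Fresh, equal-sized test samples reveal the accuracy gap between groups.
for name, shift in [("group A", shift_a), ("group B", shift_b)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name} accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```

The disparity is visible here only because each group is tested separately; in a deployed clinical system, that kind of subgroup audit has to be done deliberately.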

Consistency is another potential source of problems. An AI system may perform to acceptable standards in a lab setting and then fall short in real-life application. Such was the case with Google's AI system last year, which predicted diabetic retinopathy with 90% accuracy in a lab setting, only to cause a string of delays and errors when deployed in a hospital environment.
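The underlying failure mode is known as distribution shift: a model can look excellent when evaluated on data that resembles its training set and degrade sharply on messier real-world inputs. A minimal, hypothetical sketch on synthetic data, where injected noise merely stands in for real-world problems like poor lighting or blurry scans:

```python
# Hypothetical sketch of distribution shift: the same model looks strong on
# clean "lab" data and degrades on noisier "clinic" data. All data is
# synthetic; the added noise stands in for blurry scans or poor lighting.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
weights = np.array([1.0, 0.8, 0.6, 0.4, 0.2])

def label(X):
    # A simple fixed rule standing in for the true diagnosis.
    return (X @ weights > 0).astype(int)

# Train on clean data, as in a controlled lab study.
X_train = rng.normal(size=(2000, 5))
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, label(X_train))

# "Lab" test set: fresh samples from the same clean distribution.
X_lab = rng.normal(size=(1000, 5))
y_true = label(X_lab)

# "Clinic" test set: the same patients, but measurements corrupted by
# heavy noise the model never saw during training.
X_clinic = X_lab + rng.normal(scale=1.5, size=X_lab.shape)

print("lab accuracy:   ", accuracy_score(y_true, model.predict(X_lab)))
print("clinic accuracy:", accuracy_score(y_true, model.predict(X_clinic)))
```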

No matter the cause, because AI is always paired with human interaction, there is bound to be room for error. But that same pairing could be the symbiosis needed to ensure AI's success in the future. As humans learn to better translate the human experience into data collection and technological application, the margin of error in medical AI should only continue to shrink.