Mayo Clinic’s Platform_Validate Looks to Eliminate AI Model Bias in Healthcare

Amid concerns that some AI models carry biases that could seriously damage their credibility and compromise their use, Mayo Clinic has come forward with a new solution that helps healthtech developers spot these problems as they build their algorithms. Platform_Validate puts models through the wringer from a third-party perspective, confirming their efficacy for the AI's intended clinical purpose.

An analysis published earlier this year in the journal Health Affairs delved into the bias present in certain machine learning tools, finding that many of the predictive solutions health insurers use to assess medication adherence have a bias that leads to lower prescription rates for racial and ethnic minorities. Such biases, whether or not they pertain to race, can result in incorrect diagnoses and other serious risks for patients.
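Health Affairs does not publish the analysis code, but the kind of disparity it describes is straightforward to check for. Below is a minimal sketch, assuming a fitted adherence model has already scored a patient table; the column names, threshold, and toy data are illustrative assumptions, not taken from the study.

```python
# Minimal sketch (not the Health Affairs methodology): quantifying how a
# medication-adherence model's scores differ across racial/ethnic groups.
# Column names, the 0.5 threshold, and the example data are hypothetical.
import pandas as pd

def adherence_score_gap(df: pd.DataFrame,
                        score_col: str = "adherence_score",
                        group_col: str = "race_ethnicity",
                        threshold: float = 0.5) -> pd.DataFrame:
    """Per-group mean score and 'flagged adherent' rate, plus each group's
    shortfall relative to the highest-rated group."""
    summary = df.groupby(group_col).agg(
        mean_score=(score_col, "mean"),
        flagged_rate=(score_col, lambda s: (s >= threshold).mean()),
        n=(score_col, "size"),
    )
    # Disparity: gap between each group's flagged rate and the best group's.
    summary["rate_gap_vs_max"] = summary["flagged_rate"].max() - summary["flagged_rate"]
    return summary

# Toy usage with fabricated scores:
df = pd.DataFrame({
    "race_ethnicity": ["A", "A", "B", "B", "B", "C"],
    "adherence_score": [0.81, 0.76, 0.42, 0.55, 0.48, 0.63],
})
print(adherence_score_gap(df))
```

If a downstream rule prescribes only for patients flagged as likely adherent, a nonzero rate gap of this kind translates directly into the lower prescription rates the study reports.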

Platform_Validate’s bias evaluation examines models across demographic categories such as age and ethnicity, presenting the results in a user-friendly format akin to the nutrition and ingredient labels on food products. The platform can also predict how a given algorithm might perform in varying scenarios, examining its response across groups defined by gender, race, and socioeconomic status. It completes these assessments by drawing on a library of real-world records from more than 10 million patients spanning a range of geographies and therapeutic areas.
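Mayo Clinic has not released Platform_Validate's internals, but the "nutrition label" idea maps naturally onto a stratified evaluation report: one row of metrics per demographic subgroup. The sketch below illustrates that shape using scikit-learn's roc_auc_score; the column names, strata, and the choice of AUC as the headline metric are assumptions for illustration, not the platform's actual method.

```python
# Hedged sketch of a "nutrition label"-style stratified evaluation, in the
# spirit of what Platform_Validate describes. Not Mayo Clinic code.
import pandas as pd
from sklearn.metrics import roc_auc_score

def model_label(df: pd.DataFrame, y_true: str, y_score: str,
                strata: list[str]) -> pd.DataFrame:
    """One row per (attribute, subgroup): sample size and AUC on that subgroup."""
    rows = []
    for attr in strata:
        for value, grp in df.groupby(attr):
            # AUC is undefined when a subgroup contains only one outcome class.
            auc = (roc_auc_score(grp[y_true], grp[y_score])
                   if grp[y_true].nunique() > 1 else float("nan"))
            rows.append({"attribute": attr, "subgroup": value,
                         "n": len(grp), "auc": auc})
    return pd.DataFrame(rows)

# Toy usage with fabricated labels and predictions:
df = pd.DataFrame({
    "age_band":  ["18-39", "18-39", "40-64", "40-64", "65+", "65+"],
    "ethnicity": ["X", "Y", "X", "Y", "X", "Y"],
    "outcome":   [1, 0, 1, 0, 0, 1],
    "score":     [0.9, 0.2, 0.7, 0.4, 0.3, 0.6],
})
print(model_label(df, "outcome", "score", ["age_band", "ethnicity"]))
```

In practice such a report would be computed against held-out real-world records, much like the 10-million-patient library the platform describes, so that a developer can see at a glance where a model's performance sags for a particular subgroup.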