A recent Cornell University-led study suggested that if AI tools can counsel a physician like a colleague – pointing out relevant research that supports the decision – then doctors can better weigh the merits of the recommendation.
AI researchers have typically helped doctors evaluate suggestions from AI-powered tools by explaining how the algorithm works or what data was used to train it. Doctors, however, were more eager to know whether the tool had been validated in clinical trials, a question those explanations did not answer.
Qian Yang, assistant professor of information science, said: “If we can build systems that help validate AI suggestions based on clinical trial results and journal articles, which are trustworthy information for doctors, then we can help them understand whether the AI is likely to be right or wrong for each specific case.”
The team built a prototype of their clinical decision tool that presents biomedical evidence alongside the AI’s recommendation. The system mimics the way doctors communicate suggestions to one another, supplying relevant evidence from the clinical literature to back the AI’s suggestion.
The interface for the decision support tool lists patient information, medical history and lab test results, as well as the AI’s diagnosis and treatment suggestion, followed by relevant biomedical research and a short summary of each study. Doctors appreciated the clinical evidence, finding it easy to follow and preferring it to explanations of the AI’s inner workings.
Source: Cornell University
Image Credit: iStock