Artificial intelligence is playing an ever larger role in modern clinical care. For example, there are already image processing systems that can automatically spot polyps during colonoscopies and help analyze head CT scans for hemorrhage and mass effect. A universal issue with such systems is that they do not take into account how experienced, busy, or in need of actual help a given clinician is. A team of MIT researchers has now developed a machine learning system that decides, case by case, whether to hand a clinically relevant decision to an expert or make it itself, while accounting for how busy or experienced the clinician using it is.
Specifically, the new system reviews chest X-rays for conditions such as atelectasis (lung collapse) and cardiomegaly (an enlarged heart), and decides whether to provide its own diagnosis based on who is looking at the results. The self-adjusting system was given a set of virtual experts to work with and, when reviewing potential cases of cardiomegaly, it showed an 8 percent improvement over relying only on the experts or only on the system’s own recommendations.
“In medical environments where doctors don’t have many extra cycles, it’s not the best use of their time to have them look at every single data point from a given patient’s file,” said Hussein Mozannar, lead author of a study that was recently presented at the International Conference on Machine Learning. “In that sort of scenario, it’s important for the system to be especially sensitive to their time and only ask for their help when absolutely necessary.”
The system has a classifier component that “can predict a certain subset of tasks,” according to an MIT press release, and a rejector component that decides whether to use the classifier’s prediction or defer the decision to the human.
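The classifier-plus-rejector design can be sketched in a few lines of code. This is a toy illustration of the general idea, not the authors’ actual method: the classifier, the confidence measure, and the cost threshold below are all invented for demonstration.

```python
# Toy sketch of a classifier + rejector pipeline, in the spirit of the
# system described above. All names, formulas, and thresholds here are
# illustrative assumptions, not the researchers' implementation.

def classifier(x):
    """Toy classifier: takes a single feature score x in [0, 1] and
    returns (label, confidence). Confidence grows with the distance
    from the 0.5 decision boundary."""
    label = "cardiomegaly" if x >= 0.5 else "normal"
    confidence = abs(x - 0.5) * 2  # 0 at the boundary, 1 at the extremes
    return label, confidence

def rejector(confidence, expert_accuracy, expert_cost):
    """Defer to the human only when the expected gain from asking the
    expert (their accuracy minus the model's confidence) outweighs the
    cost of their time. Returns True to defer."""
    return (expert_accuracy - confidence) > expert_cost

def decide(x, expert_accuracy=0.95, expert_cost=0.2):
    """Run the classifier, then let the rejector choose between the
    model's prediction and deferral to the expert."""
    label, confidence = classifier(x)
    if rejector(confidence, expert_accuracy, expert_cost):
        return "defer_to_expert"
    return label
```

The `expert_cost` parameter is where the clinician’s workload would enter: raising it for a busier or more time-pressed expert makes the system defer less often, which mirrors the behavior the article describes.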
“Our algorithms allow you to optimize for whatever choice you want, whether that’s the specific prediction accuracy or the cost of the expert’s time and effort,” added David Sontag, another of the paper’s lead authors. “Moreover, by interpreting the learned rejector, the system provides insights into how experts make decisions, and in which settings AI may be more appropriate, or vice-versa. There are many obstacles that understandably prohibit full automation in clinical settings, including issues of trust and accountability. We hope that our method will inspire machine learning practitioners to get more creative in integrating real-time human expertise into their algorithms.”
Preprint of the study on arXiv: Consistent Estimators for Learning to Defer to an Expert