Scientists at Stanford University have developed a deep convolutional neural network that can diagnose skin cancer by examining images of skin lesions. Skin cancer is the most common human cancer, and one in five Americans will be diagnosed with it at some point in their lives. At present, skin cancer is primarily diagnosed through an initial visual assessment by a dermatologist, with additional biopsies and histopathological assessments if a cancerous lesion is suspected.
During development of their technology, the researchers trained their artificial intelligence system using a dataset of almost 130,000 images of different skin cancers. Impressively, the artificial intelligence system performed as well as a panel of 21 experienced dermatologists in diagnosing some common types of skin cancer, along with melanoma, a particularly deadly form of skin cancer. In the future, this system could be adapted for use on mobile devices, potentially permitting point-of-care assessment of skin lesions. The technology could also support dermatologists in making clinical assessments.
Conn Hastings, Medgadget: Can you explain how the deep convolutional neural network works, in simple terms?
Andre Esteva: Neural networks are computer algorithms inspired, on a high level, by a rudimentary understanding of how neurons in the human brain work. They are composed of layers of simple computational units, which we call neurons, that perform very basic mathematical functions. When a neural network has multiple layers, we call it ‘deep’, and if the data processing from layer to layer involves performing convolutions on the data, we call it a convolutional neural network.
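To make the convolution idea concrete, here is a minimal, purely illustrative sketch in plain Python (not the authors' actual system, which is a large pre-trained network): a small kernel slides over an image, computing a weighted sum at each position, and a nonlinearity is applied to the result. The toy image and kernel values are made up for demonstration.

```python
def conv2d(image, kernel):
    """Slide a kernel over a 2D image, computing a weighted sum at each
    position -- the basic operation of one convolutional layer."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(x):
    """A simple nonlinearity applied after each convolution."""
    return [[max(v, 0.0) for v in row] for row in x]

# A toy 5x5 "image" with a vertical edge, and a 3x3 edge-detecting kernel.
image = [
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

# Stacking many such conv + nonlinearity layers is what makes a network
# "deep"; real systems also learn the kernel values during training.
feature_map = relu(conv2d(image, kernel))
print(feature_map)  # strongest response (3.0) where the kernel sits on the edge
```

The kernel responds most strongly where the image changes from dark to bright, which is how early convolutional layers come to detect edges and textures; deeper layers combine these into progressively more abstract features.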
Medgadget: Is misdiagnosis of skin cancer a significant issue at present? What clinical impact do you envisage the technology having? Could it eventually outperform/replace a dermatologist in this context?
Andre Esteva: Our work is in no way intended to replace dermatologists. Rather, it is intended to enhance them by potentially extending their reach outside of the clinic. Aided by such an algorithm, one dermatologist could provide care for many more patients. The misdiagnosis of any cancer is always an issue. Today in America, over 5 million skin cancers are diagnosed annually.
Medgadget: Congratulations on the performance of the technology against a panel of dermatologists. Were they surprised that the system could match them in terms of accurate diagnoses?
Andre Esteva: Convolutional neural networks have incrementally improved to match human performance across a number of tasks, including the ImageNet challenge (a major computer vision competition), playing Atari games, playing Go, and even playing poker. However, all of these tasks either already have massive amounts of data available, or offer the ability to generate massive amounts of data – for instance, the computer can play Go against itself arbitrarily many times. Even though we have (to the best of our knowledge) the largest skin dataset available, it is still orders of magnitude smaller than those used for these other tasks. To perform at the level of a board-certified dermatologist was indeed surprising.
Medgadget: Are there any challenges in adapting the system for use with mobile devices? Do you envisage that it could be incorporated in an app that works in conjunction with a conventional smartphone camera?
Andre Esteva: There are always new challenges when adapting a technology to a new system (such as embedding this algorithm in a mobile device). Given the ubiquity of mobile devices, they could certainly be used to run such algorithms. However, we must stress that these algorithms are not yet ready for mobile deployment at scale. To achieve that would require a rigorous clinical trial in which we could verify that the performance seen in the paper, where the algorithm is tested on pre-captured digital images, is also seen in a functioning app that takes images of actual patients.
Medgadget: Can the technology be applied to other clinical situations that require a visual assessment?
Andre Esteva: There are many medical imaging modalities that could benefit from deep neural networks – our paper highlights several, including radiology and pathology. This would not require an expansion of our current system, but rather the training of a new one.
Medgadget: Could the system potentially be used to proactively monitor a suspicious lesion and assist in indicating when it becomes cancerous, rather than just detecting pre-existing cancer?
Andre Esteva: That is one potential application, and we welcome the medical community’s feedback on how best to deploy our algorithms.