Stanford’s new AI can recognise the warning signs of skin cancer as effectively as human dermatologists
Much is said about the future of AI, but quite often the here and now is a touch underwhelming – or it’s so well hidden you don’t see its influence. Here’s an example where it could potentially save millions of lives, thanks to clever use of deep learning from Stanford University.
In the UK, at least 100,000 cases of skin cancer are diagnosed each year, and around 2,500 people die from the disease. Skin cancer is actually very treatable, but only if it's caught early and the signs are spotted by the eye of a trained dermatologist with a dermatoscope (a handheld microscope that offers low-level magnification). The five-year survival rate, if it's spotted at the earliest stage, is around 97% – but that drops to 14% if it's detected late. Recognising that people often struggle to see a doctor for various reasons (from getting time off work to dismissing complaints as nothing), the team set about teaching an AI to recognise the early signs. And they managed it with an astonishing 91% accuracy.
“My main eureka moment was when I realised just how ubiquitous smartphones will be,” said Andre Esteva, co-lead author of the paper that was published yesterday in Nature. “Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera. What if we could use it to visually screen for skin cancer? Or other ailments?”
As unlikely as it may sound, the starting point for the research was an algorithm developed by Google and trained to spot the difference between cats and dogs. The team collected tens of thousands of images of various skin diseases from the internet. This, as you can imagine, was not an easy task. "There's no huge dataset of skin cancer that we can just train our algorithms on, so we had to make our own," explained Brett Kuprel, another co-lead author of the paper. "We gathered images from the internet and worked with the medical school to create a nice taxonomy out of data that was very messy – the labels alone were in several languages, including German, Arabic and Latin."
Working with dermatologists at Stanford Medicine, the team sorted through images of varying quality, eventually arriving at 129,450 images of skin lesions spanning 2,032 diseases. They fed these to the algorithm so that it could learn the difference between, say, a benign seborrheic keratosis and a malignant carcinoma.
After the AI was trained, it was time to test its performance against trained dermatologists. The testing phase used high-quality, biopsy-confirmed images from the University of Edinburgh and the International Skin Imaging Collaboration project. Twenty-one dermatologists were shown 370 images and asked whether they would refer the patient for a biopsy or reassure them. In every test, the AI matched or exceeded the dermatologists' performance. And because it's an algorithm, its sensitivity can be tweaked to make its diagnoses more cautious.
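That last point – tuning the sensitivity – comes down to choosing a decision threshold. A classifier like this outputs a probability rather than a verdict, so lowering the threshold for referral catches more true cancers at the cost of more unnecessary biopsies. A minimal sketch, with hypothetical numbers and function names:

```python
# Hypothetical illustration of threshold tuning: the classifier produces a
# malignancy probability, and the referral threshold sets how cautious it is.

def triage(prob_malignant, threshold):
    """Return the recommendation for a lesion given a referral threshold."""
    return "refer for biopsy" if prob_malignant >= threshold else "reassure"

# The same lesion score yields different decisions at different thresholds.
score = 0.3
print(triage(score, threshold=0.5))  # standard operating point: reassure
print(triage(score, threshold=0.2))  # cautious operating point: refer for biopsy
```

In a screening setting, erring on the side of referral (a lower threshold) is usually preferred, since a missed melanoma is far costlier than an unneeded biopsy.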
Subject to further testing, this could make a huge difference – especially with smartphone ownership spreading fast around the world. Right now it only works on desktop computers, but the team believe that the technology would be relatively simple to port over to mobile.
You can read the full paper in Nature.
Image: Kellinahandbasket, used under Creative Commons