This AI can spot Alzheimer’s disease six years before doctors
An AI trained by California-based researchers is now capable of detecting the early signs of Alzheimer’s disease. The algorithm can pick up on these signs an average of six years before human physicians are able to issue a diagnosis.
The research team demonstrated that the neural network, once trained, could scan images of patients’ brains and detect the presence of Alzheimer’s an average of 75.8 months before an actual diagnosis.
The 20-strong team based their research on a modern diagnostic method, dubbed F-FDG PET (fluorine-18 (18F) fluorodeoxyglucose positron emission tomography), in which a radioactive glucose tracer is injected into the bloodstream and a scanner images its uptake in the brain. Specialists then examine and interpret these images by eye, looking for signs of Alzheimer’s, of its precursor mild cognitive impairment (MCI), or of other related conditions across the spectrum.
Despite seeming time-consuming, this method has led to quicker and earlier diagnoses, and to more effective treatment.
Because the method relies on pattern recognition, the researchers saw an opportunity to improve its performance substantially by deploying a deep learning algorithm. They published their findings in Radiology.
“There is wide recognition that deep learning may assist in addressing the increasing complexity and volume of imaging data, as well as the varying expertise of trained imaging physicians,” the team wrote. “The application of machine learning technology to complex patterns of findings, such as those found at functional PET imaging of the brain, is only beginning to be explored.
“We hypothesized that the deep learning algorithm could detect features or patterns that are not evident on standard clinical review of images and thereby improve the final diagnostic classification of individuals.”
They set out to evaluate whether a deep learning algorithm could be trained to predict the final clinical diagnosis in patients who had undergone F-FDG PET, and how its success compared with current clinical standards.
From their study of 2,109 images from 1,002 patients who had already been diagnosed, they found the algorithm could detect Alzheimer’s in images taken, on average, more than six years before diagnosis. It outperformed clinicians both at recognising patients who would go on to develop Alzheimer’s and at recognising patients who would develop neither Alzheimer’s nor its precursor MCI.
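The paper itself does not include code, but the core task it describes — sorting brain scans into Alzheimer’s, MCI or neither — is a standard multi-class classification problem. The sketch below is an illustration only: a minimal softmax classifier trained on synthetic feature vectors standing in for scan data. The dataset, features and training loop are all invented for the example and bear no relation to the researchers’ actual network.

```python
# Minimal sketch of three-class classification (Alzheimer's / MCI / neither)
# on SYNTHETIC data -- not the study's model or dataset.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_features, n_samples = 3, 64, 300

# Synthetic "scan" feature vectors: each class clusters around its own mean.
means = rng.normal(0.0, 1.0, (n_classes, n_features))
y = rng.integers(0, n_classes, n_samples)
X = means[y] + rng.normal(0.0, 0.5, (n_samples, n_features))

# Single linear layer trained with softmax cross-entropy gradient descent.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[y]

for _ in range(200):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = (probs - onehot) / n_samples           # dLoss/dLogits
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

pred = (X @ W + b).argmax(axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In the study, the inputs were full 3D PET volumes fed to a convolutional network rather than hand-made feature vectors, but the output side — a probability over diagnostic classes — follows the same pattern.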
These findings are the latest in a series of studies and trials which show the potential power for AI to transform preventative healthcare and diagnosis.
In September, the Francis Crick Institute revealed an AI that learnt to model and predict heart disease mortality rates in patients with greater accuracy than trained doctors or models created by experts.
Google’s DeepMind AI project, meanwhile, reached an important milestone in the summer when its AI system examined 3D images of the eye, diagnosed sight-threatening conditions and offered treatment advice, all within seconds.
The algorithm, tested in conjunction with London-based Moorfields Eye Hospital, was able to recommend the best path of treatment for more than 50 eye diseases with 94% accuracy.
Despite noting a handful of limiting factors, including a small sample size, the California-based researchers concluded that they had developed a deep learning algorithm that can predict Alzheimer’s “with high accuracy and robustness”.
They added that, with access to a much larger volume of data and further calibration, the algorithm could be integrated directly into clinicians’ workflows and serve as an essential support tool.