Google DeepMind searches for signs of retinopathy

Google is using its DeepMind computing system to search for early signs of diabetic retinopathy and neovascular age-related macular degeneration (nAMD), the company announced this month.

The company will analyse a database of a million fundus and optical coherence tomography (OCT) images supplied by the UK’s Moorfields Eye Hospital National Health Service (NHS) Foundation Trust to search for patterns that correlate with the diseases.

“Our research with DeepMind has the potential to revolutionise the way professionals carry out eye tests and could lead to earlier detection and treatment of common eye diseases such as age-related macular degeneration,” said Sir Peng Tee Khaw, of the trust, in the July 5 announcement.

Jeffrey De Fauw of Google and researchers from Moorfields and other London institutions have published details of the protocol in F1000Research.

Eye scans are complex and take trained professionals a long time to analyse, according to the announcement on DeepMind’s website. “As a result, there are often significant delays in how quickly patients can be seen to discuss their diagnosis and treatment.” Conventional computer analysis has not been able to solve this problem.

The researchers hope they can speed up the task by harnessing recent advances in machine learning, in which algorithms learn to accomplish tasks from examples rather than explicit instruction.

Such algorithms have already provided insight into genetic interactions in autism and supported the monitoring of physiological observations in intensive care, the researchers say.

The researchers point out that AMD is the leading cause of blindness in Europe and North America and is responsible for more than half of the partially sighted or legally blind certifications in the United Kingdom. Early detection and treatment can make a significant difference in the course of the disease.

Similarly, diabetic retinopathy is the leading cause of blindness in working-age people throughout the developed world, they write. According to one estimate, 50% of people with proliferative diabetic retinopathy will become legally blind within 5 years if they do not receive timely treatment, and 98% of severe visual loss from the disease can be prevented if it is caught early.


DeepMind began as an independent project started by Demis Hassabis, Shane Legg and Mustafa Suleyman in London in 2010 to use neural networks, machine learning and other techniques in pursuit of artificial intelligence.

Google acquired the fledgling enterprise in 2014 for a reported $500 million. Soon afterward, the company made worldwide headlines when its AlphaGo program defeated champion player Lee Sedol at the game of Go.

A similar project, in which Google DeepMind announced it would analyse data on patients from the Royal Free Hospital in London to help detect kidney disease, stirred controversy. Critics worried that Google would acquire sensitive information about those patients, such as HIV status.

Data analysis

This time around, Google describes in detail how Moorfields staff stripped the data of information that could be used to identify patients.

This study will combine traditional statistical methodology and machine learning algorithms to achieve “automatic grading and quantitative analysis,” the researchers report.
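
To make the idea concrete, here is a minimal sketch of what “automatic grading and quantitative analysis” could look like. The 5-point grade scale, the threshold-based lesion measure, and all names below are illustrative assumptions rather than details of the Moorfields protocol, and the placeholder grading rule stands in for the trained neural network a real system would use.

```python
import numpy as np

# Hypothetical 5-point severity scale (an assumption, not the study's label set).
GRADES = ["none", "mild", "moderate", "severe", "proliferative"]


def quantify_lesion_area(scan: np.ndarray, threshold: float = 0.7) -> float:
    """Toy quantitative analysis: the fraction of pixels brighter than a
    threshold, standing in for an automated lesion-area measurement."""
    return float((scan > threshold).mean())


def grade_scan(scan: np.ndarray) -> str:
    """Placeholder automatic grading: map the quantitative measure onto the
    severity scale. A real system would use a trained neural network here."""
    area = quantify_lesion_area(scan)
    index = min(int(area * len(GRADES)), len(GRADES) - 1)
    return GRADES[index]


if __name__ == "__main__":
    # Synthetic 512x512 greyscale image in place of a real fundus photograph.
    rng = np.random.default_rng(0)
    scan = rng.random((512, 512))
    print(grade_scan(scan), quantify_lesion_area(scan))
```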

The study will include all patients who attended Moorfields Eye Hospital NHS Foundation Trust sites between January 1, 2007, and February 29, 2016, and who had digital retinal imaging (including fundus digital photographs and OCT) as part of their routine clinical care. Excluded are patients whose records exist only in hard copy and patients who have requested that their data not be shared.

Deep neural networks of the type used by DeepMind may have millions or billions of parameters, “so large amounts of data are needed to automatically infer those parameters during learning,” De Fauw and his colleagues wrote.
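
To see why so much data is needed, it helps to count the parameters of even a modest convolutional network. The architecture below is an arbitrary toy model sketched in PyTorch, not the network described in the protocol.

```python
import torch.nn as nn

# A small, arbitrary convolutional classifier for 224x224 greyscale scans;
# it is illustrative only and not the model described in the protocol.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 224 -> 112
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 112 -> 56
    nn.Flatten(),
    nn.Linear(64 * 56 * 56, 256), nn.ReLU(),
    nn.Linear(256, 5),                            # 5 hypothetical severity grades
)

# Every parameter below must be inferred from labelled examples during training.
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} trainable parameters")          # roughly 51 million for this toy
```

Even this deliberately small network has tens of millions of weights, which is why a labelled archive on the scale of Moorfields’ million images is attractive for this kind of work.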

In addition to the images, DeepMind will analyse demographic information, primary and secondary diagnostic labels describing the pathology in the image and its associated severity (such as grade of retinopathy), treatment information, and the model of the imaging device used.
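
The announcement does not say how these fields are structured, but a per-image record combining them might look like the hypothetical schema below; every field name here is an assumption made for illustration.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ScanRecord:
    """Hypothetical per-image record combining the data types named above.
    Field names are illustrative, not the study's actual schema."""
    image_path: str                    # fundus photograph or OCT scan
    age_band: str                      # demographic information (de-identified)
    sex: str
    primary_diagnosis: str             # e.g. "diabetic retinopathy"
    secondary_diagnosis: Optional[str]
    severity_grade: Optional[str]      # e.g. grade of retinopathy
    treatment: Optional[str]           # treatment information
    device_model: str                  # model of the imaging device


record = ScanRecord(
    image_path="oct/000001.dcm",
    age_band="60-69",
    sex="F",
    primary_diagnosis="neovascular AMD",
    secondary_diagnosis=None,
    severity_grade=None,
    treatment="anti-VEGF injections",
    device_model="OCT scanner, model unspecified",
)
print(record.primary_diagnosis, record.device_model)
```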

DeepMind will also analyse a second dataset in which pseudonyms are attached to the patients so that disease progression can be tracked over time.
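
With pseudonyms in place, scans from the same (unidentified) patient can be linked across visits. A minimal sketch of that longitudinal grouping, using invented identifiers, dates and grades, might look like this:

```python
from collections import defaultdict

# Invented example rows: (pseudonym, scan date, severity grade 0-4).
scans = [
    ("P001", "2008-03-01", 1),
    ("P001", "2010-07-15", 2),
    ("P002", "2009-01-20", 0),
    ("P001", "2013-11-02", 3),
]

# Group scans by pseudonym and order them by date so that progression can be
# followed over time without knowing who the patient is.
by_patient = defaultdict(list)
for pseudonym, date, grade in scans:
    by_patient[pseudonym].append((date, grade))

for pseudonym, visits in sorted(by_patient.items()):
    visits.sort()  # ISO dates sort chronologically as strings
    first_grade, last_grade = visits[0][1], visits[-1][1]
    print(pseudonym, "grade change over follow-up:", last_grade - first_grade)
```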

Trained graders at Moorfields will attach additional manual labels annotating pathological and anatomical features to a selection of the images. The researchers will compare the machine’s performance in identifying abnormalities with that of the human graders.
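
One common way to make such a comparison is to measure agreement between the model’s output and the human graders’ labels, for example with Cohen’s kappa alongside a confusion matrix. The sketch below uses scikit-learn and invented grades purely to illustrate the idea; the protocol does not specify which evaluation metrics will be used.

```python
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Invented grades on a 0-4 severity scale for ten images: one set from a
# human grader, one from the model.
human_grades = [0, 1, 2, 2, 3, 4, 0, 1, 2, 3]
model_grades = [0, 1, 2, 3, 3, 4, 0, 2, 2, 3]

# Cohen's kappa measures chance-corrected agreement between two graders.
kappa = cohen_kappa_score(human_grades, model_grades)
print(f"agreement (kappa): {kappa:.2f}")
print(confusion_matrix(human_grades, model_grades))
```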

No patients will be approached as part of the study.

The research project agreement is for 5 years.

In a separate eye-related project, Google is also developing contact lenses that monitor blood sugar levels in people with diabetes.
