A machine learning model that is better than the median board-certified ophthalmologist in assessing signs of diabetic retinopathy

The Google Brain team (http://g.co/brain) has been focusing some of our efforts on how machine learning can transform healthcare. We're very excited about the opportunities to provide better, more accessible care, to save lives, and to make people healthier. Some of my colleagues have been working on automated systems for assessing retinal images for signs of diabetic retinopathy (DR), a degenerative eye disease that can cause blindness if not caught early. This work is a great example of where machine learning can really help healthcare providers. Their paper was published today in the Journal of the American Medical Association, titled "Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs".

The automated system assesses retinal images for signs of diabetic retinopathy with higher accuracy than the median of eight board-certified ophthalmologists. This is an example of the transformative potential of machine learning for healthcare: in many parts of the world, there simply aren't enough ophthalmologists to screen everyone for DR. The cameras used to take the retinal images are not that expensive, so the real bottleneck is the time of skilled ophthalmologists to interpret them.

Research blog post about this work:

The full JAMA article:

A more general overview of our healthcare work: http://g.co/brain/healthcare

Edit: revised language to clarify that the model is better than the median ophthalmologist rather than more accurate than the consensus (which is the ground truth).