Code-free deep learning: The next phase of AI-enabled healthcare?


Code-free deep learning models are expected to find applications across a range of areas to improve patient care, increase accessibility to healthcare, accelerate clinical research and enhance medical education.


Over the past decade, it has become increasingly clear that artificial intelligence (AI) is poised to transform the clinical landscape. This is particularly true for deep learning, a subtype of AI loosely inspired by the neural networks of the human brain. Ground-breaking performances in image classification, natural language processing and object detection have positioned deep learning as one of the primary trends of our century.

AI-enabled healthcare

The sheer volume of clinical data makes deep learning particularly suited for the healthcare industry. This has been demonstrated by the number of clinical AI studies published in the literature, with applications in ophthalmology, dermatology, radiology, histopathology and others.1–4 However, to date, the volume of clinical AI publications has not been reflected in real-world usage of these systems.

Let us consider the resources needed to develop a bespoke deep learning model for a clinical pathway. Firstly, these systems must be designed by highly skilled technical experts.

This level of specialised experience in deep learning model design and development is limited worldwide and therefore in huge demand. Large technology firms have the resources to offer significant financial remuneration, making it exceedingly difficult for universities or hospitals to compete.

Secondly, substantial computing resources are required, which are both difficult to obtain and expensive to run. Yet the universities and hospitals lacking these resources are often precisely where the domain experts and clinical data needed to develop the models are found.

Even in a hypothetical situation in which the research setting is provided with unlimited financial resources, developing a robust, technically effective and clinically useful deep learning model for healthcare still requires extremely close collaboration between domain experts and data scientists. This is inefficient and often unfeasible.

Enter code-free deep learning

We believe that part of the solution lies in the emerging technique known as code-free deep learning (CFDL). First spotlighted in 2017 by Google CEO Sundar Pichai,5 with Google Cloud AutoML Vision, it has since become available on a number of other commercial platforms including Amazon Rekognition Custom Labels (Amazon); Apple Create ML (Apple); Baidu EasyDL (Baidu); Clarifai Train (Clarifai); Huawei ModelArts ExeML (Huawei); MedicMind Deep Learning Training Platform (MedicMind); and Microsoft Azure Custom Vision (Microsoft).

Featuring intuitive user interfaces and simple navigation tools, these platforms have opened the door to deep learning for non-experts. Furthermore, most of these services are cloud-based, removing the need for local high-performance computing resources.

One area that has not been fully addressed by CFDL is the curation of well-labelled datasets. This is a crucial part of the process and has a major impact on the resulting quality and performance of the deep learning model. Data can be collected from hospital databases; however, this is time-consuming and requires the creation of a strict labelling protocol in advance, as well as navigation of ethical and data governance barriers.

There are several public datasets available for use, which serve as a valuable tool for CFDL users in specific cases. However, these datasets should be used with caution as they are not always representative of the population the model is intended for. Some of the CFDL platforms have begun to address this pain point with the development of labelling services, such as Amazon Automate Data Labeling, Clarifai Scribe Label and Google Cloud AutoML Vision Human Labeling.

Model set-up and training

Figure 1: The development of an image classifier model on a CFDL platform. (Figure courtesy of Dr Ciara O’Byrne and Prof. Pearse A. Keane)

The creation of an image classifier model follows the same basic principles for each CFDL platform and is outlined in Figure 1. Following upload of the dataset, either directly from the computer or via a cloud bucket, the user can review the images and label statistics; at this point, there is the option to amend any labels if necessary. The model is then ready to be trained.
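As an illustration of the upload step, several of these platforms accept a simple CSV manifest that maps each image to its label and, optionally, its dataset split; Google Cloud AutoML Vision, for example, uses a format along these lines (the bucket path, filenames and labels below are hypothetical).

```csv
TRAIN,gs://my-bucket/fundus_001.png,healthy
TRAIN,gs://my-bucket/fundus_002.png,diseased
VALIDATION,gs://my-bucket/fundus_103.png,healthy
TEST,gs://my-bucket/fundus_201.png,diseased
```

Where the split column is omitted, the platform typically partitions the data into training, validation and test sets automatically.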

Once the model has completed the training process, detailed statistics relating to its performance can be examined. The metrics vary between platforms but typically include precision, recall, receiver operating characteristic curves and confusion matrices.
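To make these metrics concrete, the short sketch below computes a binary confusion matrix, precision and recall in plain Python; it is not tied to any particular CFDL platform, and the example labels are invented for illustration.

```python
def confusion_matrix(y_true, y_pred):
    """Return (tp, fp, fn, tn) counts for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def precision_recall(y_true, y_pred):
    """Precision: of images flagged positive, how many truly were.
    Recall: of truly positive images, how many the model found."""
    tp, fp, fn, _ = confusion_matrix(y_true, y_pred)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical held-out set: ground truth vs. model predictions
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
print(confusion_matrix(y_true, y_pred))   # (2, 1, 1, 2)
print(precision_recall(y_true, y_pred))   # both 2/3 here
```

A receiver operating characteristic curve extends this idea, tracing the trade-off between true- and false-positive rates as the model's decision threshold is varied.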

The final step in the process is to test and deploy the model. External validation is a critical part of the process if the model is to be considered for real-world implementation. At present, this is not facilitated by all CFDL platforms.

Overall, the entire process is intuitive, with the majority of platforms also offering informative documentation and videos. What is particularly exciting is the realm of opportunities CFDL offers to domain experts residing in hospital or university settings. These domain experts are best suited to design deep learning models for their area of specialisation as they can uniquely and effectively tailor the model to the specific needs of the patient.


An important consideration when using these models in healthcare is whether they achieve similar levels of accuracy to their counterpart bespoke models. According to the clinical literature, this is mostly the case, and we have reviewed this in detail.6

Faes et al. demonstrated in 2019 that two clinicians with no coding expertise could create deep learning systems using Google Cloud AutoML Vision with comparable performance to bespoke models in all but one example.7 Kim et al. demonstrated that Google Cloud AutoML Vision could classify pachychoroid disease using ultra-wide indocyanine green angiography images.8

A particularly exciting progression has been in the prediction of sex from retinal fundus photographs. In 2018, a study by Poplin et al. reported that a bespoke deep learning system could identify sex from retinal fundus photographs, despite the fact that this had never before been clinically documented.9

In 2021, a similar model was created using CFDL by clinicians without coding expertise and demonstrated comparable results to the bespoke model designed and developed by Poplin et al.10 The significance of this is twofold. It demonstrates:

  1. CFDL can achieve results comparable to state-of-the-art bespoke deep learning models; and
  2. CFDL has the potential to discover intricate patterns in clinical data that are unknown to humans and have never previously been documented.

Although it must be acknowledged that the recognition of sex from retinal fundus photographs is not clinically useful, it is an important demonstration of the role that deep learning, and CFDL in particular, may play in the future of clinical research.

Financial and real-world considerations

The aforementioned barriers hospitals and universities face when it comes to bespoke deep learning may also further widen the disparity gap between well- versus under-resourced communities and healthcare systems. It is possible that with advancing technologies, governments in wealthier countries may invest greater financial resources in the development and promotion of clinical AI models within their respective healthcare systems.

Figure 2: CFDL may democratise healthcare in under-resourced settings. (Images courtesy of Dr Ciara O’Byrne and Prof. Pearse A. Keane)


CFDL may feature in these systems as a tool to develop proof-of-concept models prior to investment in an advanced bespoke deep learning model. However, for certain developing nations already struggling with strained resources for basic healthcare provision, this might not be an option. CFDL therefore has a huge role to play in democratising and increasing access to AI-enabled healthcare, and healthcare more broadly, in these areas (see Figure 2).

CFDL models can be downloaded and run on local edge devices. These are lower-power devices that do not require a continuous Internet connection to run, and thus do not depend on advanced Internet infrastructure. In combination with telemedicine and wearable sensors, this could allow fewer doctors in these communities to reach a greater number of patients. Continuing advances in mobile technology, such as 5G, will also help support this.

The flip side is that under-resourced or disadvantaged communities may be at greater risk from suboptimal ethical regulation or data governance. Robust regulations must be established globally before these systems are widely implemented.

Education is another central factor in the future of AI-enabled healthcare, and CFDL will serve this in a number of ways. Students and clinicians who are enabled to create their own models will develop a deeper understanding of the principles, processes, applications and limitations of deep learning in healthcare.

CFDL may also serve as a tool to enhance traditional medical education, for example in disease pattern recognition, the monitoring of performance and progression, and the improvement of surgical skills.

The future of AI-enabled healthcare will centre on CFDL and the multitude of models developed by domain experts across all fields. These ‘citizen data scientists’ are likely to lead the coming AI revolution, applying CFDL models across a range of areas and helping to democratise both AI and healthcare.



Dr Ciara O’Byrne, MB, BCh, BAO
Dr O’Byrne is a clinical research fellow at Moorfields Eye Hospital, London, where her focus is on applications of automated deep learning in healthcare with a view to increasing the democratisation of artificial medical intelligence.
Prof. Pearse A. Keane, MD, MSc, FRCOphth, MRCSI
Prof. Keane is a consultant ophthalmologist at Moorfields Eye Hospital, London, and an associate professor at UCL Institute of Ophthalmology. He has acted as a consultant for DeepMind, Roche, Novartis, Apellis and BitFount and is an equity owner in Big Picture Medical. He has received speaker fees from Heidelberg Engineering, Topcon, Allergan and Bayer.


1. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24:1342-1350.
2. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115-118.
3. Litjens G, Sánchez CI, Timofeeva N, et al. Deep learning as a tool for increased accuracy and efficiency of histopathological diagnosis. Sci Rep. 2016;6:26286.
4. Rajpurkar P, Irvin J, Zhu K, et al. CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv [cs.CV]. 14 November 2017.
5. Pichai S. Making AI work for everyone. Google Blog. 17 May 2017.
6. O’Byrne C, Abbas A, Korot E, Keane PA. Automated deep learning in ophthalmology: AI that can build AI. Curr Opin Ophthalmol. 6 July 2021. DOI:10.1097/ICU.0000000000000779.
7. Faes L, Wagner SK, Fu DJ, et al. Automated deep learning design for medical image classification by health-care professionals with no coding experience: a feasibility study. Lancet Digit Health. 2019;1:e232-e242.
8. Kim IK, Lee K, Park JH, et al. Classification of pachychoroid disease on ultrawide-field indocyanine green angiography using auto-machine learning platform. Br J Ophthalmol. 3 July 2020. DOI:10.1136/bjophthalmol-2020-316108.
9. Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018;2:158-164.
10. Korot E, Pontikos N, Liu X, et al. Predicting sex from retinal fundus photographs using automated deep learning. Sci Rep. 2021;11:10286.
