Community monitoring of nAMD as effective as hospital monitoring


Community monitoring of neovascular age-related macular degeneration (nAMD) can be just as effective as hospital-based monitoring, researchers say.

By Laird Harrison


“Routine monitoring of nAMD can effectively take place in the community, which not only frees up hospital appointments, but is often more convenient for patients,” said Usha Chakravarthy of Queen’s University Belfast, UK, in a press release.

Professor Chakravarthy and colleagues published their findings in BMJ Open.

After treatment, most patients in the UK are asked to visit hospital outpatient departments monthly for monitoring.

But this may unnecessarily burden hospital eye services, the researchers write: it takes up clinic space, consumes valuable resources, is expensive, and is burdensome for patients and those who care for them. Some hospitals are struggling to provide clinic appointments at the recommended intervals.

With funding from the National Institute for Health Research (NIHR), Professor Chakravarthy and colleagues recruited ophthalmologists with experience in treating nAMD and optometrists who were not involved in nAMD care.

The researchers speculated that a randomised controlled trial of optometrist-led screening for reactivated nAMD might not work, because patients might not trust optometrists to do this work; such trials are also expensive and take a long time.

To see how well community optometrists could review quiescent nAMD, the researchers provided a training webinar to 72 ophthalmologists and 83 optometrists.

The trial was unusual in that it was based online, with eye doctors and optometrists making decisions based on vignettes rather than real patients.

The researchers created 288 vignettes representing patients with quiescent nAMD being monitored for reactivation. Each vignette consisted of sets of colour fundus and OCT images from the study eye at two time points: ‘baseline’ from when the lesions were quiescent and ‘index’ from another clinical visit. The vignettes included clinical information such as visual acuity.
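
For a concrete picture of the study data, here is a minimal sketch of how one such vignette might be represented in code. The class and field names are illustrative assumptions, not the authors' actual data schema; only the elements themselves (paired baseline and index images, clinical information, three lesion-status categories) come from the article.

```python
from dataclasses import dataclass

# Hypothetical representation of one study vignette; field names are
# illustrative assumptions, not the authors' actual schema.
@dataclass
class Vignette:
    vignette_id: int
    baseline_fundus: str     # colour fundus image from the quiescent visit
    baseline_oct: str        # OCT image from the quiescent visit
    index_fundus: str        # colour fundus image from the index visit
    index_oct: str           # OCT image from the index visit
    visual_acuity: str       # clinical information supplied with the vignette
    reference_standard: str  # 'reactivated', 'quiescent' or 'suspicious'
```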

Three retina specialists evaluated the vignettes and established a reference standard for each index image. In the 28% of cases where they did not agree, they met to reach a consensus.

After taking part in the webinars, participants were asked to assess 24 training vignettes each. Those who could not correctly classify the lesion status in at least 18 of these vignettes were given a different set of 24; those who failed to reach the same mark on the second set were excluded from the trial.
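
This two-attempt rule amounts to a simple threshold check. The sketch below is a hypothetical illustration; only the 18-of-24 pass mark and the single retry come from the study.

```python
from typing import Optional

def training_outcome(first_correct: int,
                     second_correct: Optional[int] = None,
                     pass_mark: int = 18) -> str:
    """Two-attempt training rule: pass with >= 18 of 24 vignettes correct,
    otherwise retry on a fresh set of 24; failing both means exclusion."""
    if first_correct >= pass_mark:
        return "passed (first attempt)"
    if second_correct is not None and second_correct >= pass_mark:
        return "passed (second attempt)"
    return "excluded from the trial"

print(training_outcome(16, 20))  # -> "passed (second attempt)"
```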

Study outcomes

Of the 56 ophthalmologists who completed the training, 48 passed the training test on the first attempt and two on the second attempt, but two subsequently withdrew, leaving 48.

Of the 61 optometrists who completed the training, 38 passed the training test on the first attempt and 11 passed on the second attempt, but one withdrew, also leaving 48.

Those participants who passed the test in the training phase were asked to assess 42 different lesions each. Pairs of participants from each profession were assigned the same vignettes for each lesion.

Ophthalmologists correctly classified a mean of 37 of these 42 lesions each (88.1%), while optometrists correctly classified a mean of 36 of the 42 (85.7%).

The total number of assessments was 2016 for each profession. Of these, optometrists correctly classified 1702 lesions (84.4%) and ophthalmologists 1722 (85.4%). The difference was not statistically significant (OR 0.91, 95% CI 0.66 to 1.25; p=0.543).
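
As a back-of-envelope check, a crude odds ratio can be computed directly from these counts. Note that the published figure (0.91) comes from a regression model that accounts for the structure of the data, so the unadjusted value below differs slightly.

```python
# Crude odds ratio for a correct classification, optometrists vs ophthalmologists.
# 48 participants x 42 vignettes = 2016 assessments per profession.
optom_correct, optom_total = 1702, 2016
ophthal_correct, ophthal_total = 1722, 2016

optom_odds = optom_correct / (optom_total - optom_correct)          # 1702 / 314
ophthal_odds = ophthal_correct / (ophthal_total - ophthal_correct)  # 1722 / 294

print(round(optom_odds / ophthal_odds, 2))  # 0.93, close to the reported 0.91
```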

Optometrists made sight-threatening errors 5.7% of the time, while ophthalmologists made such errors 6.2% of the time, a difference that was also not statistically significant (OR 0.93, 95% CI 0.55 to 1.57; p=0.789).

Based on these findings, the researchers concluded that the decisions made by ophthalmologists and optometrists were consistent and that, after training, community-based optometrists were as good as hospital-based ophthalmologists at assessing the vignettes.

The optometrists were less confident in their judgment than the ophthalmologists. They rated their confidence as 5 on a 5-point scale 28.5% of the time, compared with 58.3% of the time for the ophthalmologists.

Perhaps for this reason, the optometrists were more likely than the ophthalmologists to correctly classify a vignette as reactivated. Optometrists correctly classified reactivated lesions 80.0% of the time versus 74.0% for ophthalmologists. On the other hand, optometrists were less likely to correctly classify a vignette as quiescent or suspicious (88.7%) than ophthalmologists (96.5%).

In terms of odds, optometrists were about 50% more likely than ophthalmologists to correctly classify a lesion that was reactivated, a difference that was statistically significant (OR 1.52, 95% CI 1.08 to 2.15; p=0.018). But their odds of correctly classifying a lesion that was quiescent or suspicious were about 70% lower (OR 0.27, 95% CI 0.17 to 0.44; p<0.001).
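
The same kind of check relates the percentages above to these odds ratios (again, the published ORs are model-based, so the crude values differ slightly):

```python
def odds(p: float) -> float:
    """Convert a proportion correct into odds."""
    return p / (1 - p)

# Reactivated lesions: optometrists 80.0% correct vs ophthalmologists 74.0%.
print(round(odds(0.800) / odds(0.740), 2))  # 1.41, vs the model-based 1.52

# Quiescent or suspicious lesions: 88.7% vs 96.5% correct.
print(round(odds(0.887) / odds(0.965), 2))  # 0.28, vs the model-based 0.27
```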

Nevertheless, overall the optometrists classified vignettes as accurately as the eye doctors.

The researchers acknowledged that making decisions based on vignettes is different from making them in the presence of patients, and this could be a limitation of their study. But they noted that clinicians in many hospitals make clinical decisions when patients are not present.

They said they will present information on cost effectiveness in a future publication.
