A recently developed tool can help ophthalmologists assess the quality of findings from real-world studies in retinal diseases, enabling them to decide which results are most robust and applicable to their practice.
By Prof. Robert P. Finger, Prof. Vincent Daien, Mr James S. Talks, Prof. Taiji Sakamoto, Prof. Bora M. Eldem, Dr Monica Lövestam-Adrian and Prof. Jean-François Korobelnik
It is well accepted that randomised controlled trials (RCTs) are the gold standard for establishing the efficacy of a clinical intervention. However, because such studies typically have strict inclusion criteria and stringent requirements for patient follow-up, they often achieve greater patient adherence than is seen in the clinic and can exclude some of the types of patients routinely seen in day-to-day practice.1,2
In addition, RCTs can be extremely expensive to conduct, and study enrolment can be relatively slow compared with non-randomised studies.3 This is where data collected from routine clinical practice come in. Such data enable the generation of real-world evidence (RWE), which can complement information from RCTs by providing insights into the safety and effectiveness of an intervention in broader patient populations under routine care conditions.1
This evidence can be derived from real-world data from various sources, including electronic health or medical records (also referred to as EHRs or EMRs), patient registries, medical claims databases, retrospective or prospective observational studies and clinical audits (Table 1).1,4-11 RWE is increasingly being used to support drug approvals12 and in health technology assessments to support reimbursement.13
In that respect, RWE is already guiding clinical practice. With a better understanding of how to assess the quality of real-world studies, clinicians can use this evidence appropriately to inform their own clinical decision-making at both the practice and patient level.
In ophthalmology, there is a growing global evidence base describing the use of intravitreal anti-vascular endothelial growth factor (anti-VEGF) agents in routine clinical practice.5,6,9,10 Such studies have shown that, compared with RCTs, routine practice involves both poorer adherence to, and undertreatment with, anti-VEGF therapy, including non-persistence, whereby patients do not continue therapy over the medium to long term.2
This remains a significant barrier to optimising real-world outcomes for patients with chronic, progressive retinal conditions such as neovascular age-related macular degeneration (nAMD). Certainly, more RWE on effective strategies that can be employed at a clinic or practice level to improve adherence and persistence with anti-VEGF agents would be welcome.
In addition, there are inherent limitations to data collection in real-world clinical practice; as a result, RWE may vary in quality.14 This is partly because of heterogeneity in data gained from different sources and variability in both the application of RWE methodologies and the interpretation of the resulting evidence. Furthermore, depending on the RWE source, data may be inconsistently collected, misclassified or missed, a problem known as information bias.
A recent systematic review analysed 64 real-world ophthalmology data sources from 16 countries for completeness of data relating to different outcomes; only ten scored highly.15 Most of these sources provided information on baseline status, clinical outcomes and treatment, but few collected data on economic and patient-reported burden.
Another issue within ophthalmology is how treatment regimens, particularly for anti-VEGF agents, are described. Intravitreal aflibercept, brolucizumab, pegaptanib and ranibizumab are indicated for retinal diseases, but in some European countries bevacizumab is used and reimbursed off-label, despite jurisdictional ambiguity and, often, a legal risk for prescribing physicians.16
Sometimes there is a lack of clarity as to whether patients were treated with fixed dosing, pro re nata therapy or treat-and-extend dosing, and as to the degree of adherence to those schedules.15 This makes understanding treatment effectiveness and comparing studies particularly challenging.
Although information bias may be one of the easiest types to identify and quantify, RWE is also prone to other biases. Therapies may be prescribed differently depending on patient and disease characteristics, leading to selection and channelling biases (for example, older patients or those with more advanced disease tending to receive one therapy over another).
In addition, patients or caregivers may be more likely to report only the most recent or impactful events, leading to recall bias; and, in some cases, events are more likely to be captured in one treatment group than another, resulting in detection bias.17 It is important that such limitations are recognised and that RWE is interpreted in this context.
Throughout medical school and early in their medical careers, clinicians are taught how to assess the robustness of clinical studies, learning about randomisation, methods of blinding and different types of controls.18 It is similarly important to understand the quality of RWE.
RWE is becoming an increasingly important component of the overall evidence-based treatment of retinal diseases. However, in the field of ophthalmology there is a lack of clear guidance on how to assess the rigour of real-world studies and the conclusions and recommendations they generate.
As shown in Figure 1, an RWE steering committee, a coalition of leading retinal specialists and methodological experts, recently developed a user-friendly framework for assessing the quality of available RWE for retinal diseases, including nAMD, diabetic macular oedema and retinal vascular occlusion.14 The goal was to assist ophthalmologists in independently drawing relevant and reliable conclusions from RWE and understanding its applicability to their practice.
The Good Research for Comparative Effectiveness (GRACE) checklist was selected as the basis of the RWE quality assessment tool. This 11-item screening tool evaluates methodology and reporting to identify high-quality observational comparative effectiveness research.19 The checklist has been extensively validated, has demonstrated strong sensitivity and specificity, and can be applied successfully by users from a wide variety of training backgrounds.
Although the checklist was developed specifically for comparative effectiveness research, many of its items were also considered applicable to non-comparative studies. It was adapted for the retinal diseases field by omitting items not considered relevant to a practical, clinically focused ophthalmology audience and by adapting or combining some items for easier application.
This adaptation of the GRACE checklist to be more specific and relevant to ophthalmologists resulted in the retinal diseases RWE quality assessment tool. Table 2 summarises the retinal disease-specific aspects of the checklist, which address treatment details, how outcomes were assessed and quantified, descriptors of the study population and possible sources of bias. More details and examples of what to consider can be found in the full publication.14
The tool is intended to assess the quality of the RWE generated by an individual study in a population of patients with retinal disease. Caution should be exercised when comparing RWE across different retinal disease studies because of the heterogeneity mentioned previously in the methodologies employed, as well as heterogeneity in patient populations, disease characteristics and treatment regimens.
RWE is growing in importance as a source of information that, when evaluated appropriately, can help to support clinical decision-making. Regulators also increasingly accept that RWE may be needed when appraising post-marketing value. The retinal diseases RWE quality assessment tool can help clinicians understand which findings from real-world studies are most robust and applicable to their practice.