Real-world data can be used to compare efficacy of MS treatments

But researchers must account for 'confounding bias' in registry data


by Marisa Wexler, MS


Real-world data collected from a registry can be used to reliably compare the effectiveness of different multiple sclerosis (MS) treatments, so long as appropriate methodologies are employed to account for the messiness — what researchers call confounding bias — of real-world data, according to a new study.

While registries or observational studies can never substitute for randomized clinical trials — rigorously designed scientific studies testing a therapy’s effectiveness — the researchers found that using statistical tools could allow for a less-costly comparison of MS treatment options.

“Although confounding bias resulting from the lack of randomisation cannot be eliminated, our findings suggest that rigorous target trial emulation could provide informative results for comparative effectiveness research,” the team wrote.

Their study, “Emulating randomised clinical trials in relapsing-remitting multiple sclerosis with non-randomised real-world evidence: an application using data from the MSBase Registry,” was published in the Journal of Neurology, Neurosurgery and Psychiatry.


Comparing MS treatments outside of clinical trials

Disease-modifying treatments, or DMTs, are anti-inflammatory medications that have been proven to slow the progression of MS. There are now more than a dozen DMTs widely approved to treat relapsing types of MS.

The gold standard to compare the effectiveness of different DMTs is a randomized clinical trial, in which researchers carefully enroll patients matching a specific set of criteria, and randomly assign them to receive one of two treatments, or an active treatment and a placebo.

But actually running head-to-head trials is expensive and logistically challenging, so these sorts of studies are rarely done after a treatment’s approval, except when testing a new medicine against an older standard-of-care therapy.

Some studies have tried to use real-world data to compare the effectiveness of different therapies, which can offer certain advantages. Namely, real-world data have the benefit of being more reflective of most patients’ experiences with using a treatment in their day-to-day lives. But such data, which can be affected by a wide range of biasing factors, are inherently much more complex to use than results from well-controlled clinical trials.

To deal with this complexity, researchers have used statistical techniques such as propensity score matching to make data comparisons.

Put simply, propensity score matching aims to identify sets of patients who received different treatments, but otherwise are very similar in terms of characteristics such as age, sex, and co-occurring health problems. By carefully matching similar patients, researchers can approximate the design of a controlled clinical trial.

At least, that’s the theory. There’s been debate, however, as to how well this statistical strategy can actually replicate what would be found in a clinical trial.
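The matching idea described above can be sketched in code. The snippet below is an illustrative toy example with synthetic data, not the study’s actual methodology: it fits a logistic regression to estimate each patient’s propensity score (the probability of receiving one treatment given their characteristics), then pairs each treated patient with the untreated patient whose score is closest.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated registry of 400 patients: age, sex, comorbidity count.
n = 400
X = np.column_stack([
    rng.normal(45, 12, n),      # age in years
    rng.integers(0, 2, n),      # sex (0/1)
    rng.poisson(1.5, n),        # number of co-occurring conditions
]).astype(float)

# Treatment choice depends on the covariates (confounding by indication):
# in this toy setup, older patients are more likely to get the newer drug.
logits = 0.05 * (X[:, 0] - 45) + 0.4 * X[:, 1] - 0.3 * X[:, 2]
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

# Step 1: estimate each patient's propensity score, P(treated | covariates).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match every treated patient to the untreated patient whose
# propensity score is closest (1:1 nearest-neighbour matching, with
# replacement, kept deliberately simple here).
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matched = c_idx[np.abs(ps[t_idx][:, None] - ps[c_idx][None, :]).argmin(axis=1)]

# After matching, covariate means in the two groups should typically be
# closer, approximating the balance that randomization would produce.
print("mean age, treated:         ", round(X[t_idx, 0].mean(), 1))
print("mean age, matched controls:", round(X[matched, 0].mean(), 1))
```

Once the matched groups are formed, outcomes such as relapse rates can be compared between them much as they would be between the arms of a trial.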

To test it out, an international team of scientists conducted a series of analyses to determine whether propensity score matching can generate results comparable to those of clinical trials for MS treatments.

“The main aim of our study was to determine whether emulating an existing [randomized clinical trial] in a registry dataset yields findings consistent with the original trial,” the team wrote.


Results similar for clinical trial and real-world study

The scientists took advantage of data from a Phase 3 clinical trial called TRANSFORMS (NCT00340834), which tested the then-experimental medication Gilenya (fingolimod) against the older, already-approved DMT Avonex (interferon beta-1a).

The trial, which enrolled more than 1,000 people with relapsing-remitting MS (RRMS), ultimately found that the risk of relapse was reduced by about 51% with Gilenya compared with Avonex.

Now, using a global registry called MSBase, the team identified more than 4,000 people with RRMS who were treated with either Gilenya or Avonex between 2011 and 2021. All of the identified patients met criteria for inclusion in the original TRANSFORMS study.

The team then applied propensity-score matching, leaving them with 856 patients on Gilenya matched with an equal number of similar patients on Avonex. Statistical tests comparing outcomes between these two groups showed that the risk of relapse was about 45% lower with Gilenya than Avonex — very close to what was found in the original clinical trial.
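As a point of reference, "percent lower risk" figures like these are conventionally derived from a hazard ratio (HR) as 1 − HR. The snippet below back-calculates illustrative HRs from the percentages quoted in the text; the study's actual reported HRs may differ slightly.

```python
def risk_reduction(hazard_ratio: float) -> float:
    """Convert a hazard ratio into a relative risk reduction (1 - HR)."""
    return 1.0 - hazard_ratio

# HRs back-calculated from the ~51% (trial) and ~45% (emulation)
# relapse-risk reductions quoted in the article -- illustrative only.
print(f"TRANSFORMS trial: {risk_reduction(0.49):.0%} lower relapse risk")
print(f"MSBase emulation: {risk_reduction(0.55):.0%} lower relapse risk")
```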


In both the original trial and this analysis, rates of confirmed disability worsening were low overall with both therapies, and the risk of disability worsening did not differ significantly between Gilenya and Avonex. Safety data for both therapies were likewise similar in the clinical trial and the real-world analysis.

“The similarities between the [randomized clinical trial] results and our emulation findings suggest that target trial emulation using existing observational data could serve as a powerful tool for comparative effectiveness research in MS,” the team concluded.

The scientists stressed, however, that these types of statistical comparisons can never fully eliminate the risk of biases, and noted that researchers need to be very careful about using appropriate statistical tools to get reliable results.