• By Laura Whybra
  • Posted on Friday 15th May, 2015

Systematic reviews: are they really weighing all the evidence?

Where randomized controlled trials are viewed as a “gold standard” for research, some have called systematic reviews a “platinum standard.” Reviews aim to be the go-to source for policymakers interested in a particular topic. So it’s crucial that they give more credit to robust study designs, and less to weaker ones.

A recent appraisal of systematic reviews suggests that some do – and some don’t. In a review of about 60 systematic reviews from top health journals and the well-known Cochrane Collaboration, almost all were found to include some assessment of individual studies’ risk of bias. But more than a third did not actually use this information when drawing their conclusions.

What is “risk of bias”?

In the popular sense, the word “bias” is often used as a synonym for prejudice. But the scientific meaning is different. Research results are “biased” if the estimates are higher or lower than they should be – for any number of reasons. Even good, careful studies can be at risk of bias in this sense.

For instance, in randomized controlled trials, some participants drop out. Perhaps they don’t find the program helpful. Those who drop out may be hard to track down for post-program measurements. As a result, studies with high attrition rates may overestimate the program’s average effect. Observational studies may also inadvertently lead to biased findings if they cannot take all the relevant background factors into account. Studies with a greater risk of bias often overestimate treatment effects, although they may also underestimate them. Although researchers try to ensure that their findings are as accurate as possible, inevitably some results are more robust than others.

Accounting for risk of bias

Guidelines for conducting systematic reviews recommend that reviewers assess the amount of potential bias, known as the “risk of bias,” within each study included in the review.
Understanding which studies have the most robust designs should ensure that reviewers base their conclusions on the best-quality evidence available. So how does this work in practice?

Researchers based in Glasgow and London examined how 59 systematic reviews – published in a selection of high-quality journals and in the prestigious Cochrane Database of Systematic Reviews – assessed the risk of bias of the studies they included. While the vast majority (90%) of these systematic reviews included some assessment of risk of bias, more than a third (34%) did not actually use this information when drawing their conclusions.

By compiling the findings of a review without considering how a study’s potential bias may affect the accuracy of its findings, researchers and policymakers may inadvertently draw conclusions that are not based on the highest-quality evidence.

Identifying risk of bias

Only half of the systematic reviews assessed the risk of bias for each of a study’s findings independently. Most studies measure multiple outcomes, some self-reported (like pain) and some externally measured (like blood tests). A study that does not blind participants to their treatment or control condition will probably bias self-reported measures more than external ones. But in about half the reviews assessed, if one outcome was at risk of bias, all of the findings in that study were considered subject to the same level of bias.

A variety of methods were used to assess risk of bias, with no one method predominant. In around a fifth of systematic reviews, it was unclear what method had been used.

Simply elevating RCTs over other study designs is not the answer, however.
Some interventions cannot be assessed with randomized designs, and much useful information comes from quasi-experimental and observational studies.

The authors offer useful advice for systematic reviewers: “An overarching principle that may be helpful to remember… is to consider what the best available evidence recommends, which may not necessarily reflect the overall evidence base.”

Reference

Katikireddi, S.V., Egan, M. & Petticrew, M. (2014). How do systematic reviews incorporate risk of bias assessments into the synthesis of evidence? A methodological study. Journal of Epidemiology and Community Health. doi:10.1136/jech-2014-204711
