By Dartington SRU
Posted on Monday 26th March, 2012

Installing a “flight recorder” into effectiveness trials

In 1953, when one of the world's first commercial jet airliners, the famous de Havilland DH-106 Comet, mysteriously crashed in mid-flight, it looked as though the age of commercial air travel would stall before it had even taken off. Engineers pondered the cause of the crash, but there were few clues and no witnesses or survivors. If only there had been a way to find out what happened inside the plane just before the crash, they might have uncovered vital evidence. The tragedy of the Comet spurred the invention of the “black box” flight data and cockpit voice recorders, which are now mandatory on all commercial aircraft.

It is widely accepted that randomized controlled trials (RCTs) are the most rigorous way to evaluate the impact of an intervention. Yet evaluation findings are often perplexing: what works in one context does not work in another. And RCTs that collect purely quantitative data lack the “flight recordings” of what happened during the trial that could help explain variations in outcomes. Qualitative data, such as interviews collected during or shortly after a trial, help put the results into context and interpret them, say a Norway-based team of researchers. But good qualitative work alongside RCTs remains rare: a case study of a Cochrane Review on the use of lay health workers found that less than a fifth of the effectiveness trials had qualitative data of this kind available.

The aim of the study was to find out why the Cochrane Review, which covered 82 RCTs on the use of lay health workers, was unable to conclude whether the approach was effective. For example, while lay health workers were found to increase breastfeeding and childhood immunization, they had little or no effect on the number of people completing preventive treatment for tuberculosis, and the results of the RCTs were too mixed to draw firm conclusions about many other health outcomes.

For each of the 82 RCTs, the researchers, led by Claire Glenton of the independent research organization SINTEF, contacted the trial authors, checked the trial papers for references to qualitative research, and searched the literature for any qualitative research conducted alongside the trials that could shed light on what happened during the interventions to affect the outcomes. The team discovered that for 63% of the trials no qualitative research had ever been conducted, and for a further 20%, the qualitative data that had been collected were either unavailable or had been gathered before the intervention, so they could not be used to explain the trial outcomes. For only 14 trials (17%) had qualitative research been done during or very shortly after the trial. These 14 studies were examined for insight into the varying results of the RCTs.

The 14 qualitative studies offered some clues about what had affected program outcomes. For example, both the lay health workers and the participants identified “shared experiences” as the strength of the approach. Conversely, participants in one study said that differing values and differing experiences of illness made the intervention less useful. In another study, a lack of regular supervision and support for the lay health workers from the community and from health professionals was identified as a hindrance.
While these qualitative findings give a rough glimpse of the failings and accomplishments of the interventions, in general the data were too sparse, and the methods and results of the qualitative research too poorly described, to support any confident conclusions. Unfortunately, the mystery of the mixed results of the 82 trials remains largely unsolved.

The authors conclude that quantitative and qualitative methods should be used in parallel in effectiveness trials, not only to find out what works and what doesn’t, but to understand why. They admit that this will be difficult because of “the attitudes of funding bodies and the attitudes and skills of the research community” and because “when mixed methods are used, lack of time or experience as well as journal formats may prevent findings from qualitative studies and trials or reviews of effectiveness from being integrated or presented together.”

However, if the age of evidence-based interventions is not to stall before it can properly take off, it may help to install a “black box” flight recorder in every trial. Perhaps qualitative research is best seen not as an optional add-on to effectiveness trials, but as a resource that can be used to improve understanding of interventions, interpret findings, and improve future implementation and intervention design.

**********

References:

Glenton, C., Lewin, S., & Scheel, I. B. (2011). Still too little qualitative research to shed light on results from reviews of effectiveness trials: A case study of a Cochrane review on the use of lay health workers. Implementation Science, 6, 53.

Lewin, S., Munabi-Babigumira, S., Glenton, C., Daniels, K., Bosch-Capblanch, X., van Wyk, B. E., Odgaard-Jensen, J., Johansen, M., Aja, G. N., Zwarenstein, M., & Scheel, I. B. (2010). Lay health workers in primary and community health care for maternal and child health and the management of infectious diseases. Cochrane Database of Systematic Reviews, Issue 3. DOI: 10.1002/14651858.CD004015.pub3.
