• By Dartington SRU
• Posted on Monday 1st November, 2010

A cynical view?

There is no denying that the bank of ‘evidence-based’ programs and interventions is growing. The increasing importance that governments and commissioners of services now place on adopting interventions that ‘work’ has led to more evaluative work than ever before. But is this accumulated ‘evidence’ about program effectiveness as objective as it ought to be? A common sticking point for programs seeking recognition as exemplary or ‘model’ interventions on databases of effective services – a status indicating that we can be confident in the effects claimed – is the requirement for at least one independent trial (see: Why is independent scrutiny only "desirable"?). Most evaluations are carried out by program developers, or involve them in some form, and this opens the evaluation up to a conflict of interest.

Conflicts of interest, according to Manuel Eisner, Deputy Director of the Institute of Criminology at the University of Cambridge, arise in situations where the main “publicly-declared interests of an individual compete with a person’s secondary private interests”. Researchers who have branched out into program design face a potential conflict of interest, says Eisner, when their role as “disengaged and sceptical truth finders” competes with their role as “enthusiastic advocates of a specific program… linked to significant financial and organizational stakes”.

A growing body of evidence appears to demonstrate the magnitude of this problem in program evaluation. In a review of evaluations of offender prevention programs, the mean effect size was zero where external teams had been responsible for the evaluation, but 0.47 – a sizeable effect – where the developer of the program was also the evaluator. Findings from research on studies in psychiatry and biomedical journals indicate that published trials with a declared conflict of interest are 4.9 times more likely to report positive findings, and 10-20 times less likely to report negative findings, than those with no such interest.
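Effect sizes of this kind are typically standardized mean differences (Cohen’s d), the usual metric in this literature. For readers unfamiliar with the measure, the following is a minimal sketch of how such a figure is computed; the function and the outcome scores are invented for illustration and are not drawn from any study cited here.

```python
# Illustration only: computing a standardized mean difference (Cohen's d).
# The scores below are made up for demonstration purposes.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Difference in group means divided by the pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    # Pool the two sample variances, weighting each by its degrees of freedom.
    pooled_sd = (((n_t - 1) * stdev(treatment) ** 2 +
                  (n_c - 1) * stdev(control) ** 2) /
                 (n_t + n_c - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical outcome scores (higher = better behavior).
treatment_scores = [12, 15, 14, 16, 13, 17, 15, 14]
control_scores = [11, 13, 12, 14, 11, 13, 12, 13]

print(f"d = {cohens_d(treatment_scores, control_scores):.2f}")
```

On this scale, an effect of 0.47 means the average program participant scores roughly half a pooled standard deviation better than the average control participant, while zero means no detectable difference between the groups.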
As Eisner reports, well-known programs are not exempt from these difficulties. A well-known treatment program for young people with serious behavioral problems, for example, has been evaluated numerous times, with program developers involved in all but three evaluations. Two of the three independent trials found no positive effects for the program, while the third demonstrated sustained effects two years after the intervention. Similarly, Eisner and colleagues conducted an independent trial of a popular parenting program: while numerous developer-led evaluations of this program find significant mean effects on child behavior, the independent trial found no positive effects on any aspect of behavior. An independent trial of a well-regarded bullying prevention program fell into the same category.

So, what explains this disparity? It is plausible that the reduced effects found in independent trials are attributable to reduced fidelity or quality of program delivery when the developer is not responsible for the implementation. But Eisner’s argument is that we cannot dismiss the cynical view: that positive findings in developer-led studies may be the result of systematic bias in the design, analysis and interpretation of the evaluation. Eisner proposes a theoretical framework for thinking about how bias might infiltrate an evaluation.

The model distinguishes between, on the one hand, a conflict of interest that leads to biased design, analysis or publication through intentional misconduct and, on the other, one that leads to the same outcomes via unintentional bias, by unconsciously affecting judgements and decisions.

Two main sources of conflict of interest are financial and ideological. Financial interests may comprise royalties, research funding or income from the distribution of the program or associated training. The predilection for lists of evidence-based policies and programs may have undermined developers’ ability to be objective or disinterested, since many of these databases are used to inform decision-making at the regional or national level. Ideological interests arise when researchers hold particular views about core issues in their field of study. In areas where there are divides or polarised debates, a researcher’s allegiance to one side or the other can “lead them to take on an advocacy role, which conflicts with their role as disinterested scientist”, says Eisner. Moreover, these paradigms are often used as ‘evidence’ to inform prevention strategies and policies.

Eisner’s conclusion is not to dismiss all developer-led evaluations as biased. Rather, he proposes strategies for understanding the problem better. Transparency about conflicts of interest in publication is essential, for example, so that their influence can be examined in meta-analytic reviews. Monitoring and publishing data on implementation quality would also help to establish whether differences in effect are due to better or worse delivery of the program. A further solution might be the development of a checklist of bias-prone practices in evaluation research, which could inform publishing and meta-analytic review procedures by identifying the aspects of design, data collection, analysis and reporting that are most open to manipulation.

In the absence of such strategies, the cynical view holds that the evidence base built up in prevention research cannot provide a true estimate of the effects of interventions, because the majority of the evaluations informing it are potentially biased. The logical extension of this view is that public resources may not be directed to best effect and that scientific progress will be thwarted. There is the danger, Eisner says, that new researchers will learn and perpetuate these practices.

Source: Eisner, M. (2009) ‘No effects in independent prevention trials: can we reject the cynical view?’, Journal of Experimental Criminology, 5, 163-183.
