• By Dartington SRU
  • Posted on Wednesday 05th August, 2009

If it’s not a "model", will it ever fly?

In the parlance of prevention science, the difference between "model" and "effective" programs generally comes down to the fact that one will have been validated in a randomized controlled trial conducted by an independent evaluator; the other will have passed a similarly stringent test, but one conducted by the developers themselves.

Given the protocols associated with good scientific method, the differences between the two sets of findings might be expected to be minor. But there is increasingly compelling evidence – witness an article by the Cambridge University criminologist Manuel Eisner – that independence is vital. Recent independent evaluations have failed to find significant effects in trials of highly regarded prevention programs such as Triple P and the Olweus Bullying Prevention Program.

Eisner’s study deals with evidence-based programs (experimental and quasi-experimental) designed to reduce recidivism. He found that when the evaluator was the program developer, the average effect size reported was .47; when the evaluator was an outsider, it was zero. Similarly disconcerting, when the Anglo-German research partnership of Andreas Beelmann and Friedrich Lösel conducted a meta-analysis of social skills programs, they found that when teachers or other professionals delivered the programs, the effect size averaged .29; when program developers and their staff did the job themselves, it averaged .49. Depending on how such variations are interpreted, they strengthen the argument for fidelity – or cast doubt on the credentials of prevention science.

Eisner suggests that the disparity in the results from prevention and intervention trials in criminology could be due to the far superior implementation quality achieved when the program developer conducts the trial. But it could just as well be the result of systematic bias.

The second explanation would have dire implications. Could conflict of interest – financial or ideological – induce evaluators to manipulate the design of a study, or the data, to increase the likelihood of program success? Could it be that they don’t even know when they are doing it, and that unintentional cognitive bias is in play?

Public trust will falter and policy makers will pull back unless this issue is addressed, Eisner argues. He suggests imposing measures to assess the probability of systematic bias based on vested interest, and makes the case for tighter quality control of data and for independent third-party replication of data analysis. Otherwise, the evidence-based mantra runs the risk of being dismissed as nothing more than a device for the times.

See: Eisner, M. (2009). "No effects in independent prevention trials: Can we reject the cynical view?" Journal of Experimental Criminology, 5(2), pp. 163-183.
