• By Laura Whybra
  • Posted on Tuesday 15th March, 2016

Finding better evidence for “evidence-based” programs

Demand for cost-effective services to improve children’s welfare and wellbeing has grown dramatically in the past twenty years. But how can commissioners select programs that will benefit young people and, importantly, avoid those that might prove harmful? Registries of evidence-based programs, such as Blueprints for Healthy Youth Development, provide a rigorously researched answer.

In the United States, the United Kingdom and elsewhere, efforts to reduce deficits in public finances have placed those who commission services for children and young people under increased pressure to find cost-efficient ways of achieving better outcomes. But they have also experienced confusion when confronted by recommendation lists based on different definitions of “evidence-based” programming.

Sharon Mihalic and Delbert Elliott, pioneers at the University of Colorado of the Blueprints approach to assessing program effectiveness, argue in a journal article that only the most rigorous criteria will enable service providers to choose interventions that can confidently be taken “to scale”. Accepting a lower standard of evidence carries a greater risk of failure, with a consequent waste of taxpayers’ money.

Setting a high standard

Since Blueprints was launched 20 years ago – originally focusing on violence prevention – more than 1,300 programs have been assessed, but only 54 have been certified as either “model” or “promising” interventions. To achieve “promising” status, programs must have achieved positive results in at least one high-quality randomized controlled trial (RCT) or two high-quality quasi-experimental evaluations. “Model” programs meet the even higher criteria of positive results in at least two RCTs, or one RCT and two quasi-experiments. In addition, they must have demonstrated a sustained, positive impact on children and young people for at least a year after the program ended.

Other hurdles that programs must jump to achieve certification include evidence of a clear theoretical base that specifies the outcomes they expect to change, the risk and protective factors being targeted to achieve change, the groups of children expected to benefit, and the way that different components work together to get results. A Blueprints program must also be ready for implementation, with manuals, training and technical support available to ensure it can be faithfully replicated.

Registries of evidence-based programs have proliferated in response to demand for preventive services that really “work”. Yet only 10 per cent of family programs in America are estimated to be evidence based. The use of evidence-based programs in US public schools rose from 34.4 per cent of interventions implemented in 1999 to 46.9 per cent in 2008. But while nearly half of school districts were using an evidence-based program, it was the most frequently used program in just 26 per cent of them. Only 19 per cent were found to be implementing evidence-based programs with fidelity.

Raising the bar higher still

To illustrate the importance of creating registries of evidence-based interventions that apply high standards, Mihalic and Elliott cite the example of a program whose Blueprints certification had to be withdrawn following the publication of new research evidence. In 1998, CASASTART, targeting drug use and delinquency amongst high-risk 11 to 13-year-olds, was rated as “promising” on the basis of the results of a randomized trial. However, a later RCT, conducted by the Blueprints team themselves, found no positive effects and – in the case of girls – some harmful effects.

Ineffective programs have a negative impact both on targeted children and young people, whose well-being does not improve (or is actually made worse), and on public finances, which carry the costs of a project that does not work. The authors also insist that those who provide registries of evidence-based programs must keep constant track of new evidence to maintain confidence in their endorsements. As they say: “We cannot afford to take programs to scale prior to rigorous testing.”

Looking ahead, they note a proven tendency for RCTs carried out by independent investigators to find less positive results than those conducted by the researchers who originally developed the program. They suggest that a future goal for registries of effective practice should be to reserve the highest level of accreditation for interventions that have been independently replicated. Introducing a requirement for independent replication could – once more such evaluations take place – bring further clarity and certainty to the difficult choices facing service commissioners.


Reference

Mihalic, S. F., & Elliott, D. S. (2015). Evidence-based programs registry: Blueprints for Healthy Youth Development. Evaluation and Program Planning, 48, 124-131.
