• By Kevin Mount
  • Posted on Saturday 18th October, 2008

Why must they break it before we’ve fixed it?

Among the many conundrums in children’s services that prevention science is struggling to overcome is this one: relatively few programs have passed any robust test of their effectiveness, and those that have are even less likely to be implemented than ones that can claim no empirical support at all.

Mulling over this disconcerting paradox, Brian Bumbarger and Daniel Perkins from the US Prevention Research Center at Penn State University have come to the conclusion that there is an urgent need for “interventions to enhance readiness for interventions”.

The wider problem of how to integrate programs that have demonstrated efficacy into mainstream policy and practice is an aspect of the bigger challenge facing what is being called Type 2 translation research. In particular, Bumbarger and Perkins consider the obstacles hindering implementation of a growing number of programs that have been shown to be effective at reducing drug use and delinquency among young people.

“First, these evidence-based programs are still under-utilized compared to prevention strategies with no empirical support. Second, when effective programs are used the evidence suggests they are not being implemented with quality and fidelity. Third, effective programs are often initiated with short-term grant funding, creating a challenge for sustainability beyond seed funding.”

In offering solutions, they draw on their experience of a large-scale dissemination effort in Pennsylvania. Since 1998, the Pennsylvania Commission on Crime and Delinquency (PCCD) has invested over $60 million in approximately 140 replications of various evidence-based interventions, from elementary school social-emotional learning programs to community mentoring and universal “family strengthening”.

Among their recommendations are, first, that policy makers should consider the question: “Is this program being planted in fertile ground?” In the PCCD initiative, few sites had thoughtfully considered this central issue of readiness. The result was start-up problems such as poor buy-in from schools and teachers for classroom-based programs, difficulty hiring staff, under-budgeting for training and struggles to recruit the target population.

It is therefore necessary, they argue, to address the challenges that practitioners face when implementing interventions – for example by taking the time to identify community needs and to assess the stability of the implementing agency’s infrastructure and the skills and support of administrators and implementers. Hence the need, they say, for “interventions to enhance readiness for interventions”.

Secondly, Bumbarger and Perkins point to a need to support communities and schools through targeted training and continuous support to promote “higher quality and sustainability”. Research demonstrates unequivocally, they argue, that implementation quality affects outcomes. Yet in a 2007 survey of all the Pennsylvania sites funded since 1998, nearly 40% reported that they had not implemented the program as designed. Of these, 55% had made adaptations that could reasonably be expected to reduce program effectiveness (by undermining the program’s theory of change). Examples included deleting lessons because of limited classroom time and changing the target population in response to difficulty in recruiting families.

Bumbarger and Perkins contend that if we know that absolute fidelity never occurs in natural conditions, arguing for it is futile. [See: You’re going to be unfaithful, so why not make it part of the service?]
Instead, they advocate “adaptation with fidelity”. Pennsylvania therefore provides targeted technical assistance to each new grantee through the Prevention Research Center. “The conventional approach of ‘train them and send them on their way’ is ineffective in promoting high-quality implementation because there is generally little or no follow-up after training to assist implementers in navigating barriers. Effective program adoption and implementation require initial training that is interactive and engaging, provides opportunities for behavioral rehearsal, and is followed up with ongoing coaching, technical assistance and support.”

Third, they say that “it is important to develop and pursue a concrete sustainability plan early on that goes beyond simply seeking the next temporary funding source”. Often there is funding to initiate programs, but it is left to the grantee to figure out how to sustain them. Consequently, many grant-funded prevention programs fizzle out after the initial funding period. Judging programs on this basis is misleading as well as unhelpful: during the start-up phase a program may not yet be functioning at a level that generates positive outcomes, and nipping potentially successful programs in the bud is likely to erode community trust and willingness to be innovative.

Strategies implemented in Pennsylvania appear to have been effective at promoting sustainability for the PCCD-funded programs. In a 2007 survey of programs that had been off grant funding for two or more years, 76% reported that they were still operating. A strategy of escalating the required “local match” over four years (full grant funding in years one and two, with a local contribution of 25% in year three and 50% in year four) encouraged communities to identify sustainability funding sources early in the implementation process, while the gradual reduction in grant support allowed ample time to put the sustainability plan in place. Such plans should typically include attention to organizational infrastructure and capacity and to nurturing “champions” inside and outside the implementing agency.
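For readers who want to see the arithmetic behind such a schedule, the short sketch below works through how grant and local contributions might be split each year. It is a minimal illustration assuming the interpretation given above (no local match in years one and two, then 25% and 50%), with a hypothetical annual program cost; it is not a reproduction of PCCD’s actual budgeting rules.

# Illustrative sketch of an escalating "local match" funding schedule.
# The percentages and the annual cost are assumptions, not PCCD policy.
annual_program_cost = 100_000  # hypothetical yearly cost of running the program
local_match_by_year = {1: 0.00, 2: 0.00, 3: 0.25, 4: 0.50}

for year, match in local_match_by_year.items():
    local_share = annual_program_cost * match          # community's contribution
    grant_share = annual_program_cost - local_share    # remaining grant support
    print(f"Year {year}: grant ${grant_share:,.0f}, local match ${local_share:,.0f}")

Run as a plain Python script, this prints a declining grant share and a rising local share, which is the point of the strategy: communities must line up their own sustainability funding well before the grant ends.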

Fitting the pieces together in Ireland

Bringing the US recommendations across the Atlantic, Nick Axford and colleagues from the Dartington Social Research Unit, UK, describe an attempt in Ireland to implement a robust program of research into children’s health and development, to design new services, to evaluate their impact (using RCTs) and to integrate the results into policy.

Critical to the task, Axford and colleagues suggest, is building collaborations between researchers, families, neighborhoods and service agencies. In other words, there is a need to connect prevention science and community engagement. Doing this well, they argue, requires a long-term arrangement, as improvements may be slow. But the vicissitudes of political cycles, the high turnover of staff in children’s services, knee-jerk reactions to major scandals and the understandable impatience for change among those living in disadvantaged communities mean that achieving a long-term outlook is difficult.

They also argue that using rigorous service design methods contributes to the rigor of evaluation methods (specifically RCTs). In the Northern Ireland project, service design and evaluation methods were built into an approach that required the specification of outcomes, the target group and a logic model. Putting evidence at the heart of the work, they say, gave participants a vocabulary for understanding the benefits and drawbacks of experimental design.

Lastly, like their colleagues in the US, the Dartington team recommend that high levels of technical assistance be available to help participants with service design, implementation and evaluation throughout the process. Most sites received a lot of help until proposals for full funding were agreed, but this tailed off during the detailed service design and manual preparation phases and disappeared altogether for most sites during the implementation phase. This pattern of assistance was not planned and, not surprisingly, it proved counterproductive.

References

Axford N, Morpeth L, Little M and Berry V (2008) “Linking prevention science and community engagement: the case of the Ireland Disadvantaged Children and Youth Programme”, Journal of Children’s Services 3 (2), pp. 40-54.

Bumbarger B K and Perkins D F (2008) “After randomised trials: issues related to the dissemination of evidence-based interventions”, Journal of Children’s Services 3 (2), pp. 55-64.
