• By Dartington SRU
  • Posted on Wednesday 07th July, 2010

You want to know how well your prevention program is working? We can almost tell you

It’s like clockwork: when the economy tanks, money for public services takes a hit. Now, more than ever, it’s important for services to prove their worth, and the spotlight should, with luck, favor evidence-based programs. But there is an important caveat. In tough economic times there is always a push to do more with less, and evidence-based programs need to be careful about cutting corners in how they deliver their services when they are expanded in the community. If they don’t stay true to their original model, they may not deliver the benefits, either for the people they serve or for the public purse. The key is to be honest about progress, which means being rigorous about monitoring the program.

Writing in a special edition of the journal Early Childhood Research Quarterly, Celene Domitrovich explains that “replicating these interventions with quality is challenging because often resources are more restricted in community settings as opposed to universities.” She goes on to describe several factors that can undermine the implementation process. But measuring implementation successfully is not easy either. Domitrovich acknowledges that the science of implementation research (sometimes called type II translation) is still finding its feet. Her latest research, conducted with colleagues at the Prevention Research Center at Penn State University, where she is assistant director, offers some guidance for those in the field.

First, it’s important to measure the right thing. Domitrovich says there are at least eight measurable facets of implementation, four of which she considers key: dosage, fidelity, generalization and participant engagement. Dosage is the amount of exposure participants have to the program. Fidelity is the degree to which the core components of the program are delivered as prescribed by the program developers. Generalization refers to the extent to which deliverers are able to transfer the strategies taught in the program to real-life situations. Participant engagement speaks for itself.

Second, assessments of these facets should ideally be made by more than one source. Those responsible for delivering a program are not considered good independent judges: they have an incentive to inflate ratings of their own implementation quality, and no one likes a negative evaluation. In their own study, Domitrovich and her colleagues found that “teachers’ ratings of child engagement were so near the ceiling of the rating scales that they provided virtually no useful information”. Trainers or coaching consultants who provide implementation support could be called upon to make a second assessment. Ideally, if resources allow, an independent observer is preferred.

Third, assessment should be an ongoing process. The overall quality of program implementation is likely to increase the more a program is delivered: practitioners become more skilled with practice and should gain greater “deep structure” knowledge. If this is not happening, it is cause for concern, and only an ongoing assessment process can track it.

That said, Domitrovich notes that ongoing assessments aren’t necessary in every case. For school-based curriculums, where lesson time has to be built regularly into the school day, dosage is likely to remain stable over the duration of the program. But for other types of intervention, such as parenting groups, dosage may fluctuate over time with attendance and parent attrition. Domitrovich says it’s important to look before you leap: make sure an assessment is actually needed before you spend money on it.

All of these conclusions come from the implementation study her group carried out on the Head Start REDI program. REDI (Research-based, Developmentally Informed) is a comprehensive preschool curriculum delivered in Head Start classrooms, and it has been shown to have positive effects on language, literacy and social-emotional skills. “The data collected as part of the REDI trial allowed for initial exploration of the ‘who, what and when’ of implementation,” says Domitrovich. “The findings suggest that there are different patterns of implementation quality that appear to vary as a function of the intervention being assessed.” But she stresses that further research is still needed to work out how implementation can, and should, be monitored. Are there some facets of implementation that deliver more bang for the buck? Prevention Action will be following this question, and the others that come up around type II translation, in the months to come.

References

Domitrovich, C. E., Gest, S. D., Jones, D., Gill, S. and DeRousie, R. M. S. (2010). Implementation quality: lessons learned in the context of the Head Start REDI trial. Early Childhood Research Quarterly, 25, 284-298.
