• By Dartington SRU
  • Posted on Thursday 11th February, 2010

Time to roll out the computer model

Much in prevention science is based on a downbeat principle: roll out for wider public use a program that has proved its efficacy in trials, and the results will almost certainly be far less impressive.

This inevitable “attenuation” of effects is acknowledged as surely as the fundamental argument in favor of early intervention: that early environment has “a potent effect” on the capacity for human skill development. Neuroscience, behavioral research and economics all agree.

Both home truths in combination provide the basis for a review of the latest thinking on the problems of “going to scale” with early crime prevention programs, just published in the US journal Prevention Science.

Brandon Welsh at Northeastern University teams up with Christopher Sullivan at the University of Cincinnati and David Olds at the Prevention Research Center for Family and Child Health at the University of Colorado to make the case for a more sophisticated assault on the roll-out puzzle, for example by subjecting the processes and the surrounding “weather” to computer modeling.

The challenges, they argue, are well enough known. Efficacy studies occur in optimal contexts, frequently with circumscribed samples of targeted cases. Effects are always likely to be diluted when a program intended for an at-risk population is more generally disseminated.

The arguments associated with “fidelity” of implementation have been well rehearsed, too. “Sometimes what is perceived as a potentially flexible element of a program is, in fact, its essence,” the researchers write.

Ground conditions may be inauspicious; infrastructure may be lacking; and resources for implementation are unlikely ever to be sufficient. Clinical trials may overlook mediating or moderating factors in their enthusiasm for testing benefits under optimal conditions.

Welsh, Sullivan and Olds also acknowledge a more perplexing finding, assimilated in some cost-benefit analyses: that larger effects are shown when researchers are involved in the design and implementation of interventions. They report two competing explanations:

“One is the cynical view, which suggests that these researchers have a personal stake in the program or are pressured to report positive results. The other is the high fidelity view, which holds that larger effects are a product of the researcher being able to attain a high degree of fidelity to the model.”

Similar ambiguities may arise when developers evaluate their own achievements, inviting suspicions of bias. But neither are independent evaluators immune: they may be seeking to “get a scalp”.

The researchers go on to discuss the few solid studies of scale-up “penalties” and “discounts”, including a 1998 forerunner of Steve Aos’s influential approach to cost-benefit analysis developed for the Washington State legislature.

That set the trend by investigating whether “the social resources that will be expended a decade or more from now on incarcerating today’s youngsters could instead generate roughly comparable levels of crime prevention if they were spent today on the most promising social programs”.

A decade later, the trio conclude, research on taking programs to scale continues to be hindered by a lack of information to guide understanding of the potential results – “beyond a descriptive accounting of the outcomes and context in which the program was previously implemented, as well as some basic knowledge of the prospective implementation sites and populations”.
The reality is an enormously complicated interaction between factors arising from the setting, the target population and much else: just the sort of impenetrable stew that computational models are built to distil. They argue:

    Specifically, a simulated experiment could be undertaken where different scale-up conditions could be added to the model, and then the effects could be observed after a number of simulations. In cases where there is uncertainty around those inputs, multiple models using variants on characteristics of the site, population, and level of adherence to the intervention protocol could be run to examine the conditional results. This might provide some initial insight into the likely prospects for going to scale. In cases where the intervention is, in fact, scaled up the real-world results could be examined in relation to those that were forecast by the simulation models as a means of validating the model.
    While computational modeling could not be the sole component of such a research agenda, this approach offers an opportunity to examine the potential ramifications of taking a program to scale in a relatively inexpensive and efficient manner.
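To make the idea concrete, here is a minimal sketch, not drawn from the paper itself, of what such a simulated experiment might look like: a trial effect size is attenuated by assumed scale-up conditions, and the model is run many times under uncertain inputs to see the range of plausible scaled-up effects. Every name, parameter and formula below (TRIAL_EFFECT, fidelity, risk_share, site_support and the attenuation rule) is an illustrative assumption, not a specification from the study.

    # A minimal sketch (illustrative only) of a simulated scale-up experiment:
    # vary hypothetical inputs for implementation fidelity, population targeting
    # and site resources, then observe the distribution of realized effects.
    import random
    import statistics

    TRIAL_EFFECT = 0.30  # effect size observed under efficacy-trial conditions (assumed)

    def simulate_effect(fidelity, risk_share, site_support):
        """Return one simulated effect size for a scaled-up site.

        fidelity     -- adherence to the intervention protocol (0..1)
        risk_share   -- proportion of the served population actually at risk (0..1)
        site_support -- adequacy of local infrastructure and resources (0..1)
        """
        # Attenuate the trial effect by the assumed scale-up conditions,
        # with random noise standing in for unmeasured site-level factors.
        attenuation = fidelity * risk_share * (0.5 + 0.5 * site_support)
        noise = random.gauss(0, 0.05)
        return max(0.0, TRIAL_EFFECT * attenuation + noise)

    def run_scenario(n_runs, fidelity_range, risk_range, support_range):
        """Run many simulations under uncertain inputs and summarise the results."""
        effects = []
        for _ in range(n_runs):
            f = random.uniform(*fidelity_range)
            r = random.uniform(*risk_range)
            s = random.uniform(*support_range)
            effects.append(simulate_effect(f, r, s))
        return statistics.mean(effects), statistics.stdev(effects)

    if __name__ == "__main__":
        # Optimistic scale-up: high fidelity, well-targeted population, good infrastructure.
        print("optimistic:", run_scenario(10_000, (0.8, 1.0), (0.7, 1.0), (0.7, 1.0)))
        # Pessimistic scale-up: weaker fidelity, broader population, thin resources.
        print("pessimistic:", run_scenario(10_000, (0.4, 0.7), (0.3, 0.6), (0.2, 0.5)))

In the spirit of the authors’ proposal, forecasts from such runs could later be compared with real-world results from an actual scale-up as one way of validating the model.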
See: Welsh B C, Sullivan C J and Olds D L, “When Early Crime Prevention Goes to Scale: A New Look at the Evidence”, Prevention Science, published online, 21st November 2009.

Reference: Donohue J J and Siegelman P (1998), “Allocating resources among prisons and social programs in the battle against crime”, Journal of Legal Studies, 27, pp 1–43.
