• By Kevin Mount
  • Posted on Friday 09th November, 2007

Learning the moral of the Sure Start story

This is a tale from the UK about one program and two evaluations. One evaluation left scientists struggling to say anything concrete about impact, with the result that parts of the British press complained that public money had been squandered. The criticism left Government politicians clutching at straws. They had made much of the program's success and now they didn't know whether it had worked. They were faced with having to say either that the scientists were wrong, or that the program would work in the future – that is, if it hadn't worked yet. The second evaluation produced clear findings about gains for the children and families receiving the program. These results gave policy makers the confidence to roll it out more widely, so that more and younger children would benefit. So why the disparity?

Sure Start was established in the UK in 1997 to reduce child poverty and social exclusion. £300m was invested in delivery in the first three years. There was no prescription about what to provide; local providers were allowed the flexibility to decide which local services were most needed. Targets were specified, but methods of meeting them were discretionary. There was to be no curriculum and no handbook.

Furthermore, against the advice of most of the research community, it was decided not to invite evaluation by random allocation. The resulting evaluation design was widely regarded as the best available given the constraints, and it was executed with scrupulous care. Data were collected from 16,052 families in Sure Start Local Programme (SSLP) areas and 2,610 families in comparison 'Sure Start to be' areas.

That study generated important and helpful findings on a range of subjects concerning the delivery of children's services. However, the eminent child and adolescent psychiatrist Professor Sir Michael Rutter described the effects identified after three years as 'meagre' and 'disappointingly slight'.
The evaluators concluded that, 'when compared to children in Sure Start-to-be areas, those from relatively less disadvantaged households residing in SSLP [Sure Start Local Programme] areas benefit somewhat from living in these areas'.

Scientists were left scratching their heads. Were three years not long enough for the benefits to show? Or was it the research design? Had Sure Start's true impact been missed or, worse still, was it just too slight to be detected? Depressingly, the weight of opinion went with Michael Rutter: "not too much hope should be placed on the possibility of definitive results later," he wrote.

So much for the English Sure Start study, since amplified by similarly ambivalent findings from work in Durham. Now to the second tale – from Wales. As the Sure Start program became established there, managers sought the advice of Dr Judy Hutchings, a researcher and practitioner with a track record of work on parenting and conduct disorder. She recommended The Incredible Years BASIC parenting program, developed by Carolyn Webster-Stratton. Rooted in 30 years of research, it has been identified by the US Office of Juvenile Justice and Delinquency Prevention as a model 'Blueprint' program for violence prevention. It met the criteria and thus provided a means of delivering Sure Start.

The Incredible Years started in North Wales in 1998. It was targeted at parents of high-risk three- and four-year-olds. Twelve groups from 11 Sure Start areas across North and Mid-Wales took part in a modest randomized controlled trial with follow-up at six, 12 and 18 months post-intervention. At follow-up, the program group children showed improvements on most of the measures of parenting and problem behavior, with gains greater than for the control group. It worked.

What is the moral of this story? First, there is much to be said for clearly defined services delivered consistently. They can be evaluated.
By contrast, as Michael Rutter explains, "programs that lack an explicit curriculum and that are varied across areas in a non-systematic fashion are impossible to evaluate in a manner that gives answers on what are the key elements that bring benefits".

Second, as Rutter put it, "randomized controlled trials [RCTs] provide a much better test than non-experimental methods (however rigorous the statistics applied to the latter)". Confidence in the results concerning effectiveness was far higher for the Wales evaluation than for the English one, despite the Wales evaluation's relatively small size. Indeed, the strength of the findings encouraged the Welsh Assembly to fund training in the program nationwide as part of its parenting action plan. Yet RCTs remain contentious in children's services, at least outside the US, with objections often raised on the grounds of cost, ethics, practicability and scientific validity.

See also:

• How Wales gave Sure Start a convincing beginning
• Sure Start made more credible by success of Incredible Years
