• By Dartington SRU
  • Posted on Monday 26th July, 2010

A measure for measure

While there is evidence about what works with various interventions for children, looking at outcomes alone is a common but simplistic way of judging whether an intervention has been successful. There are three problems with this method. Take, for example, the apparent effectiveness when improvements in anxiety or depression are observed. First, children and young people needing mental health services typically score high on measures of difficulty, and their scores tend to fall in the short term simply because of random measurement error. Second, it is quite usual for people to report more difficulty in their first interview with a clinician than in later meetings. Third, childhood disorders fluctuate quite naturally, and children are often referred when their problem is at its peak. Such facts dictate that we should be cautious about estimating how much a child’s mental health has improved.

To offer a more robust and reliable approach, a team of child development researchers has tested one solution to the problem of evaluating the effectiveness of individual interventions: a computer algorithm that essentially acts as a “silent” control group against which individuals can be compared. It operates in much the same way as the height and weight charts used to monitor children’s growth - it provides a range of expected values at points in time against which one can plot an actual measurement. The formula uses robust population-wide data gathered with the Strengths and Difficulties Questionnaire (SDQ), a validated screening measure of emotional and behavioural difficulties. These data chart the mental health problems of a large sample of children in the community longitudinally, allowing researchers to estimate expected change over time.
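The “silent control group” idea can be sketched in code. The snippet below is illustrative only and is not the published SDQ Added Value Score algorithm: the linear prediction and its coefficients are invented placeholders, whereas the real formula derives its expected scores from the longitudinal community SDQ data described above.

```python
from statistics import mean, stdev

def predict_followup(baseline, slope=0.7, intercept=3.0):
    """Expected follow-up SDQ total score for an untreated child.

    The linear form and coefficients here are invented placeholders;
    the published algorithm fits its predictions to longitudinal
    community SDQ data instead.
    """
    return slope * baseline + intercept

def added_value_effect(baselines, followups):
    """Standardised difference between observed and expected follow-up
    scores. Negative values mean fewer difficulties than the "silent"
    control group predicts, i.e. apparent benefit from the intervention.
    """
    residuals = [observed - predict_followup(baseline)
                 for baseline, observed in zip(baselines, followups)]
    return mean(residuals) / stdev(followups)

# Five hypothetical treated children: baseline and follow-up SDQ totals.
baselines = [20, 18, 22, 19, 21]
followups = [13, 12, 15, 13, 14]
# Negative result: the children did better than the model predicts.
print(round(added_value_effect(baselines, followups), 2))
```

The key design point, as in the article, is that no concurrent control group is recruited: each child’s follow-up score is compared with the score the population model would have predicted for an untreated child with the same baseline.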
By comparing this control data against the observed scores for an individual child who has received a mental health intervention, it is possible to compute how effective the intervention has been.

To test the accuracy of the algorithm, the researchers pitted it against the results of a randomised controlled trial (RCT) of an early intervention programme for parents of children with behavioural difficulties. This RCT was selected because it used the SDQ to assess outcomes for the children and included a follow-up assessment after the intervention was made. The assumption was that if the algorithm worked, it would accurately predict the effectiveness for both the control group and the group receiving the intervention. By definition, this would be zero for the control group, since they had not received the programme; for the intervention group, the algorithm’s prediction should reflect the effect size found by the RCT (0.37). The algorithm predicted an effectiveness of -0.03 for the control group and 0.36 for the intervention group, virtually identical to the RCT’s findings. These results offer a good deal of hope that this kind of computer programme might help clinicians to better monitor the progress of the children and young people they serve.

Despite these successes, the authors advise caution about the applications of the tool. First, the algorithm is in the early stages of testing, and replication studies are required before firm conclusions can be drawn. Second, the SDQ is a broad measure of children’s mental health difficulties, and clinicians will want to supplement the method with instruments that assess specific disorders. Finally, they emphasise that the best assessment of the effectiveness of interventions will always involve a combination of measures and the cross-checking of data from different sources.

Reference

Ford, T., Hutchings, J., Bywater, T., Goodman, A. and Goodman, R. (2009) Strengths and Difficulties Questionnaire Added Value Scores: evaluating effectiveness in child mental health interventions.
