• By Dartington SRU
  • Posted on Wednesday 26th August, 2009

Bullying programs don’t work? Shoot the messenger!

Extensive reviews of school-based anti-bullying programs published during the last five years point to the same conclusion: that they don’t work. But Canadian researchers Wendy Ryan and David Smith argue that, so far, most evaluations have been seriously flawed; we should be as ready to question the shortcomings of the evaluation procedures as any failure in the programs. They go on to suggest that attempts to assess the impact of programs on levels of bullying in schools have relied on unsuitable research designs and insufficiently lengthy follow-up periods, and have imposed inappropriate outcome measures. Studies have also failed to collect sufficient information about implementation procedures or other qualitative data crucial for contextualizing the results.

Examining more than 30 peer-reviewed evaluations from the last decade, the authors, from the Faculty of Education at the University of Ottawa, point out that none of the evaluations meets the standards set out by the Society for Prevention Research (SPR). Even when they used a less stringent benchmark, only 10% made the grade. [See: Laying new foundations for the evidence base]

Writing in the journal Prevention Science, they say that only a third of the programs attempted to monitor implementation, when it is common sense as well as good science to insist that a program must be put into action properly if it is to have its intended effect.

It is widely accepted that a randomized controlled trial (RCT) is the best way to evaluate the impact of an intervention, but such a research design always rests on certain assumptions. Most importantly, interventions must be implemented in a standardized fashion, and comparison groups and trial environments must be equivalent. Over a third of the evaluations were based on RCTs, but none fulfilled all of the necessary criteria. RCTs might be the best way to measure impact, but qualitative data help greatly to contextualize and interpret the results, say Ryan and Smith.
However, fewer than one fifth of the studies they considered collected qualitative data of this type.

Another prevalent weakness was the short time that elapsed between the implementation of a program and the point when bullying levels in the schools were reassessed. The SPR standards of evidence recommend a period of at least six months, so that the intervention can overcome any teething problems. More than a quarter of the studies did not wait this long.

Outcome measures were also problematic. Nearly a third of the research studies used a single informant, when ideally levels of bullying would be assessed from the point of view of children, teachers and parents, using several methods. “This is particularly important in bullying prevention studies, as research indicates that there are systematic differences among informants on reporting rates of bullying and victimization,” Ryan and Smith comment. Comparing results was difficult because the studies used such a variety of assessment measures; the field of bullying prevention would benefit from the development of a common measure.

Many studies did not use statistics coherently. Effect size - a statistic that permits easy comparison between interventions - was given in only a third of the studies, and very few of those offered confidence intervals. Statistical techniques appropriate for comparing schools, such as multilevel modeling, should be used in future evaluations, they argue.

Their verdict: “evaluation practices in this domain have not yet reached a level of rigor that permits us to accept their outcomes as conclusive.” Huge resources are being pumped into anti-bullying initiatives in US schools without a compelling body of evidence that they work. There is a pressing need for more solid research.

See: Ryan W. and Smith J. D. (2009), “Antibullying Programs in Schools: How Effective Are Evaluation Practices?”, Prevention Science, 10, 3, pp. 248-259
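To illustrate the statistics point above: the review does not prescribe a particular effect-size statistic, but a standardized mean difference such as Cohen’s d is a common choice. The sketch below, with entirely hypothetical numbers, shows why reporting an effect size with a confidence interval makes interventions easy to compare across studies.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d) using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def d_confidence_interval(d, n1, n2, z=1.96):
    """Approximate 95% CI for d, using the common large-sample standard error."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# Hypothetical data: mean bullying-incident scores in intervention vs control schools
d = cohens_d(2.1, 1.4, 120, 2.6, 1.5, 115)
low, high = d_confidence_interval(d, 120, 115)
# A negative d here would favor the intervention (fewer reported incidents);
# if the interval spans zero, the evidence for an effect is weak.
```

An interval that excludes zero, reported alongside d, lets readers judge both the size and the precision of an effect - which is exactly what the reviewers found missing from most evaluations.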
