• By Dartington SRU
  • Posted on Tuesday 21st April, 2009

Learning to stay in touch with the creator

There is increasingly compelling evidence from Type 2 translation research that the quality of “aftercare” that program designers offer to the people implementing their programs in schools can be the difference between success and failure, especially when the activity is taking place simultaneously across a number of sites.

A team from the University of Kansas led by Charles Greenwood spent a year monitoring how well nine schools across five US states were delivering a computer-aided, evidence-based peer tutoring program. They found that early implementers were the keenest to communicate with the program originators. Starting on time at the beginning of the school year was another marker of good integration; Jay Buzhardt and his team from the Juniper Gardens Children’s Project suggest, conversely, that starting late may mean teachers and students establish alternative routines and schedules that are hard to break.

The focus of their study was a “technologically enhanced” version of Class Wide Peer Tutoring, one of the Best Evidence Encyclopedia’s top-rated programs for improving math education in elementary schools. Other obstacles to consistent performance included poor IT support, inadequate time allocated to the program (due to a lack of administrative backing) and overburdened site coordinators.

Rather than simply investigating whether schools implemented the program in full, the researchers chose to gauge the “rate of implementation” by monitoring how long it took each school to complete each of 12 key tasks, which allowed them to compare fast and slow implementers (a short illustrative sketch of this calculation appears below). The “technological enhancements” in the new program version made implementation easier to monitor, for example by automatically gathering information on software downloads, data entry and communications.

Of the nine schools, six implemented the program fully within a year, taking between 30 and 50 weeks; each component took between 2.5 and 7.5 weeks to put into practice.

As Buzhardt and colleagues acknowledge, the small scale of the study limits the conclusions that can be drawn. They recommend further research to establish the exact relationship between communication and successful implementation. Does good communication produce smooth implementation, or is poor communication a symptom of slow implementation? They also point out that the study did not examine communication methods. Were reminders sent? Were communications positive? Timing is another issue that needs further examination: starting on time was found to be important, but other components may warrant equivalent attention and assistance.

Class Wide Peer Tutoring develops teamwork and mutual support. At the beginning of each week, students are put into pairs and the pairs are assigned to one of two teams. Partners take turns acting as tutor and student, earning points for their team by responding to tasks set by their partner. At the end of each day and each week, the winning team is announced.

The original evaluation, which took place in six schools in deprived areas of Kansas City, compared children participating in CWPT with “business as usual” conditions. Their progress was also compared with that of children from richer areas not using the program. The results showed that children participating in the program performed significantly better than the control group. Not only that, they fared no worse than children from more privileged backgrounds.
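For readers curious how a “rate of implementation” measure of this kind might be computed, here is a minimal sketch in Python. Everything in it is invented for illustration: the school names, task names, dates and school-year start are assumptions, not data from the study, which tracked 12 key tasks per school and gathered much of its information automatically.

    # Illustrative sketch of a rate-of-implementation calculation.
    # All names, dates and tasks below are hypothetical examples.
    from datetime import date

    SCHOOL_YEAR_START = date(2004, 9, 1)  # hypothetical start of the school year

    def weeks_since_start(d: date) -> float:
        """Elapsed weeks between the school-year start and a task's completion."""
        return (d - SCHOOL_YEAR_START).days / 7

    # Hypothetical completion dates for a few of the 12 key tasks at two schools.
    completions = {
        "School A": {
            "software installed": date(2004, 9, 10),
            "teachers trained": date(2004, 10, 1),
            "first tutoring session": date(2004, 10, 15),
        },
        "School B": {
            "software installed": date(2004, 11, 20),
            "teachers trained": date(2005, 1, 10),
            "first tutoring session": date(2005, 2, 28),
        },
    }

    for school, tasks in completions.items():
        elapsed = [weeks_since_start(d) for d in tasks.values()]
        # Time to full implementation = weeks until the last task is done;
        # rate = tasks completed per week, so slow implementers stand out.
        total_weeks = max(elapsed)
        rate = len(tasks) / total_weeks
        print(f"{school}: fully implemented in {total_weeks:.1f} weeks "
              f"({rate:.2f} tasks/week)")

Comparing the two hypothetical schools, the late starter takes roughly four times as long to reach full implementation, which is the kind of contrast the study’s measure was designed to expose.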
One caveat: the original CWPT evaluation did not use a strict randomized controlled trial design, so the results cannot be considered watertight. Schools were randomly selected to deliver CWPT, but the analysis was carried out at the child level.

See: Greenwood C R, Delquadri J C and Hall R V (1989), “Longitudinal Effects of Classwide Peer Tutoring,” Journal of Educational Psychology, 81, 3, pp 371–383; and Buzhardt J, Greenwood C R, Abbott M and Tapia Y (2006), “Research on Scaling Up Evidence-Based Instructional Practice: Developing a Sensitive Measure of the Rate of Implementation,” Educational Technology Research and Development, 54, 5, pp 467–492.
