As fidelity is absorbed into the accepted wisdom surrounding successful program implementation, monitoring it for quality has the potential to become a growth industry, research at the University of Washington suggests. Abby Fagan at the Social Development Research Group has identified four key elements: adherence to core content; correct dosage in terms of the number, length and frequency of sessions; quality of implementation, reflecting the knowledge, readiness, enthusiasm and skills of those charged with delivering it; and participant responsiveness. Since high fidelity is not “naturally occurring,” it needs to be cultivated, but, contrary to the patrician arguments of some of her colleagues, Fagan says it matters little whether the people involved are researchers or practitioners. What counts is that they should be good listeners and skillful persuaders, facilitators and problem-solvers. A basic knowledge of the programs helps, as does an abundance of patience. Ideally, the role should be performed by a team.

Why these relational skills are such an important part of technical assistance became apparent during the evolution of a Washington State project that used the Communities that Care (CTC) prevention operating system to select and implement programs. Policy makers and senior practitioners needed convincing that fidelity was a way to guarantee the best outcomes and to optimize return on investment. Fagan and her colleagues responded by holding a workshop on the issue with key stakeholders and hammering home the message. In their favor was a call for greater accountability from service funders and the public, which meant they could sell the monitoring process to implementers both as a means of demonstrating success and of collecting constructive feedback. Implementers completed questionnaires at the end of each session and at the end of the program.
They were asked how much time had been spent on different tasks, what obstacles they had encountered, what modifications had been made to the materials, and how responsive participants were. Only nine of the 16 programs being implemented had their own checklists, so a more generic measure was also applied. Some sessions were observed and implementers were rated on different aspects of fidelity – partly as a way of counteracting the risk of bias inherent in self-reporting. An end-of-program survey of program coordinators established whether tutors had the relevant skills and knowledge prior to delivery, how much time had been spent supervising tutors, and whether any major modifications had been made to the materials.