• By Dartington SRU
  • Posted on Tuesday 15th May, 2012

Back to the drawing board

For the past 17 years, Mark Fraser and Maeda Galinsky have been designing and developing both universal and selective programs to address anti-social, aggressive behavior in children. Through several studies, they have shown that Making Choices, a brief social problem-solving skills intervention, reduces aggressive behavior, builds social competence and improves the concentration of school children. Augmenting this with Strong Families, a family intervention which takes place in the home and is designed to improve the parenting skills of parents of higher-risk children, substantially increases effect sizes.

This work grew from Fraser and Galinsky’s interest in the findings of universal public health programs. From these they learned to identify factors related to specific social problems, and to develop program theories in which malleable risk and protective factors were matched to change strategies. They identified, for instance, children’s social skills and relationships with others of their age as a potentially malleable link between early aggressive behavior in childhood and poor developmental outcomes in adolescence.

Now, drawing on their experience, they suggest five steps for the design and development of other programs.

The first is to develop problem and program theories. It might be difficult, for instance, to reduce poverty, but services can affect parenting, which is a route through which poverty affects children. Having specified the problem, the task is to define the key features of a suitable intervention, including the level at which it is applied (child, family, community) and by whom it will be delivered.

Second, those designing programs should specify their structures and processes.
This includes creating manuals, which are “typically composed of an overview and session-by-session content, and elective activities which may be used to reinforce core content.”

Third, there is the need to test the efficacy of the program at the right moment. The findings from such tests may suggest strengthening some intervention components and eliminating others. Fraser and Galinsky argue, however, that such tests should only be considered once the intervention has been tested and found to be fully feasible in a given setting, and once it is clear that it fits with the program theory and is potentially effective when implemented with fidelity.

Fourth, the fledgling intervention should be tested for effectiveness in a variety of practice settings. These should be “real life” – that is, settings in which the researchers have substantially less control over implementing the intervention. “Although researchers do not directly provide the intervention, they often remain in charge of training, data collection and analysis,” the authors argue. “The aim here is to estimate a treatment effect when a program is implemented as it might be in routine practice, i.e. in settings where some practitioners adhere to treatment manuals and others do not; in which organizational support may wax and wane, and where the exigencies of policy changes, budget cuts and differential leadership may erode the delivery environment.”

Fifth, and only at this stage, should the program materials and findings be disseminated.

When working on Making Choices and Strong Families, the authors learned many hard-won lessons about the design and development of interventions.

One is that interventions should be designed from the start for implementation by certain people in particular settings. In the case of a school-based program, for example, it makes sense to ensure that program content is consistent with national and local curriculum requirements.
Another lesson is that the use of manuals alone is insufficient to ensure implementation fidelity: “Full and faithful implementation requires ongoing support and training,” Fraser and Galinsky argue.

Next they highlight the importance of what they term “randomized design.” “The importance of using a randomized design trumps all other measurement and data analysis issues,” Fraser and Galinsky believe. Not even statistical methods such as regression-discontinuity designs can substitute: “Randomization balances groups on unobserved heterogeneity… no statistical adjustments have this capacity.”

A patient, one-step-at-a-time approach yields the most fruit. Fraser and Galinsky urge program developers to “refine interventions over time in sequenced experimentation. In the early stages of the design of an intervention, single-group studies with qualitative measurement may provide more useful information than experimental studies with quantitative measures.”

And, for evaluators, they stress that, “even with randomization, post-assignment attrition can compromise the balance between experimental and control groups.” Developers and implementers should therefore work hard to avoid people dropping out of the program.

Last, programs need cultural and contextual adaptation, just as some foods recognizable by brand name from country to country actually cater to different palates. In prevention science, adaptation involves altering program content to improve its relevance to populations defined by their socio-demographic features and risk status. Thus, before Making Choices was implemented in China, all references to “baseball” were changed to “soccer,” and new material was added to “address culturally or contextually based risk factors that might interfere with the uptake of intervention content.”

***********

Source: Fraser, M. W., & Galinsky, M. J. (2010). Steps in intervention research: designing and developing social programs. Research on Social Work Practice, 20(5), 459-466.
