- By Kevin Mount
- Posted on Tuesday 14th October, 2008
For 30 years, Judith Gueron and her colleagues at the US Manpower Demonstration Research Corporation (MDRC) have been battling to show that evaluation by randomized controlled trial is “feasible, ethical, uniquely convincing and superior” as a means of determining whether services make a difference. They have conducted more than 30 major random assignment studies in several hundred locations, involving over 300,000 people – particularly, but not exclusively, in the area of employment and welfare reform.

“I am a believer, but not, I hope, a blind one,” Gueron says. “I don’t think that random assignment is a panacea, or that it can address all the critical policy questions, or substitute for other types of analysis, or is always appropriate.” But the battle is worth fighting, she argues, because of the political and financial costs associated with lesser evaluations that too often end in methodological disputes.

She quotes from Henry Aaron’s influential book Politics and the Professors, where he asks, “What is an ordinary member of the tribe to do when the witch doctors disagree?” In other words, what should the public think when scientists and scholars squabble about whether research shows that a service works or not? Gueron and Aaron contend that RCTs can help to avoid this kind of conflict. But conducting real-world experiments is always difficult and requires striking a balance between research ambition and practical realism.

“Creative and flexible research design skills are essential, but just as essential are operational and political skills, applied both to marketing the experiment in the first place and to helping interpret and promote its findings down the line,” Gueron says. So, how can this be done? One lesson from MDRC’s experience is the need to convince people that there is no easier way to get the answers they want.
It takes courage for political appointees to favor independent studies that measure net impact, because non-experimental studies will generally tend to show better results: the impact detected by an RCT is almost always smaller than the observed outcomes.

Gueron cites the example of a welfare official whose program MDRC was evaluating. “The state governor had sent her [the welfare official] a press clipping, citing outcomes to praise Governor Dukakis’s achievements in moving people off welfare in Massachusetts, along with a handwritten note saying ‘Get me the same kind of results’.” High-quality policy research must continuously compete against claims of greater success based on weaker evidence.
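The gap between observed outcomes and net impact is easy to make concrete with a toy calculation. A minimal sketch, using invented figures that are not drawn from any MDRC study: if 60% of program participants leave welfare, a press release can claim a 60% success rate, but an experiment asks how many of a randomly assigned control group would have left anyway.

```python
# Hypothetical figures, purely for illustration (not from any actual study).
treatment_exit_rate = 0.60  # share of program participants who left welfare
control_exit_rate = 0.55    # share of the randomized control group who left anyway

# Observed outcome: the number a press clipping reports.
observed_outcome = treatment_exit_rate

# Net impact: what a randomized experiment actually measures --
# the difference the program itself made.
net_impact = treatment_exit_rate - control_exit_rate

print(f"Observed outcome: {observed_outcome:.0%}")
print(f"Net impact:       {net_impact:.0%}")
```

Under these made-up numbers the program's true contribution is 5 percentage points, not 60 – which is why outcomes-based claims will generally look better than experimental estimates of the same program.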
No use being a little bit random
Another necessary step involves persuading overburdened program staff – who generally view random assignment as yet another unwelcome routine – to buy into the study. “Random assignment is an all-or-nothing process. It doesn’t help to be a little bit random. Once the process is undercut, the study cannot recover.”

So, in the case of an education and training program for high school drop-outs in San Jose, California, for example, Gueron and her team argued their case with the intake staff who would need to confront potential participants. They explained that the results would be uniquely reliable and, in a climate of funding cuts, might convince the federal government to provide more money and opportunities to the young people concerned.

The staff agonized in private. “Shortly thereafter we were ushered back in and told that random assignment had won. This is one of the most humbling experiences I have had in 30 years of similar research projects, and it left me with a sense of awesome responsibility to deliver the study and get the findings out.” The happy ending was that the results were positive, prompting the US Department of Labor to fund a 15-site expansion serving hundreds of disadvantaged young people.

These and other tangible benefits of participation in RCTs can be used to “sell” the method: the opportunity to learn; a contribution to national and state policy; special funding; and the fact that the burden on program staff is often less than originally feared. Taking ethical and legal concerns seriously also helps to ensure that program staff feel comfortable with what is being done.
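The all-or-nothing character of random assignment can be sketched in a few lines of code. A minimal illustration, with invented applicant identifiers (nothing here reflects any actual MDRC intake system): every eligible applicant is assigned by the equivalent of a coin flip at intake, with no override path, so the treatment and control groups differ only by chance.

```python
import random

def assign_at_intake(applicant_ids, seed=42):
    """Randomly assign every eligible applicant to treatment or control.

    There is deliberately no override path: once an applicant enters
    the study, the coin flip alone determines group membership. Any
    exception ("just this one deserving case") would undercut the
    design, and the study could not recover.
    """
    rng = random.Random(seed)  # seeded so the assignment log is auditable
    assignments = {}
    for applicant_id in applicant_ids:
        group = "treatment" if rng.random() < 0.5 else "control"
        assignments[applicant_id] = group
    return assignments

# Usage: four hypothetical intake records.
groups = assign_at_intake(["A001", "A002", "A003", "A004"])
print(groups)
```

The design point the code encodes is the quoted one: randomization only delivers comparable groups if it is applied to everyone, every time, with no discretionary exceptions at the intake desk.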
Judith Gueron’s experience suggests, for example, that social experiments should not deny people access to services to which they are entitled or reduce service levels; that they should include adequate procedures to inform program participants and ensure data confidentiality; and that they should be used only if there is no less intrusive way to answer the questions adequately.

Lastly, she gives some pointers for ensuring that RCTs have the greatest possible impact on policy. She bases these on the “unusually strong” effect of MDRC research on public policy.

Studies should be “rooted in issues that matter – concerns that will outlive the tenure of an assistant secretary or a state commissioner and will still be of interest when the results come in”. Programs should be tested in real-world settings, after the extra supports of the start-up have fallen away, and, ideally, in multiple sites: “It is uniquely powerful to be able to say that similar results emerged in Little Rock, San Diego and Baltimore.”

References
Gueron, J. (2008) “The politics of random assignment: implementing studies and impacting policy”, Journal of Children’s Services, 3(2), 14–26.

Aaron, H. J. (1978) Politics and the Professors: The Great Society in Perspective, Washington DC: Brookings Institution Press.