• By Dartington SRU
  • Posted on Thursday 21st May, 2009

What happens to science when program meets product?

As prevention scientists go in search of a step-change in the rate of adoption of evidence-based programs, they face a classic dilemma. For the very best of reasons they want market penetration, but in a country that holds free-market competition so dear they can’t really hope to achieve it without relying on the expertise of publishers and broadcasters – in other words, on non-scientific forces beyond their control.

Much was said about “fidelity” during our West Coast study tour, and much of the discussion focused on the demands made of scientists and clinicians to ensure that the core components of a program are not lost in translation. But when programs are sold in the marketplace, as they commonly are in the US, they become shelf products. You can buy the DVD from a website. And what happens to fidelity then?

So, to take a hypothetical example. Suppose I develop a program with the goal of preventing substance abuse. I get it tested by randomized controlled trial – quite likely, several. I assemble solid evidence of effectiveness, and in the process I get to know what the irreducible core qualities are. Five years on, I’m ready and eager to bring it to scale, to roll it out from Melbourne to Montreal, and – for life is so short! – there are other research fish I want to fry. What should I do next?

One very appealing proposition would be to persuade a specialist publisher to add my program to his accredited list. I’m an OK scientist, but marketing is really not my area. I may know good product design when I see it, but I’m no designer. I love my iPhone but I’m not Steve Jobs. So I find a very reputable distributor. I put my program manual on the table and I lay my scientific principles on the line. The publisher is five steps ahead of me; he knows the research; he knows the proven-models literature; he very readily allows me to keep ownership of the scientific control. “We need each other,” he says.
The day soon comes when I see my program on the publisher’s shelf, and suddenly I realize it’s made a journey I didn’t quite foresee – from the chrysalis to the moth. I don’t recognize the result. The question is, who is now to say whether the product that’s for sale is the program that was tested? Where does fidelity fit in under these new merchandising conditions?

There are already cases where scientists are complaining that publishers are changing their products to meet perceived demand or to exploit a gap in the market. If core components were being changed, that would be deplorable; clearly there could be no justification for altering dosage or skimping on demands for “readiness”. But surely we’re in unknown territory here. There simply was no “product” before the publisher got involved. There is no standardized effectiveness measure for packaging.

As one researcher explained the underlying anxiety to me last week, “The marketing people know how to sell products in boxes, not how to talk to people about changing systems.” By “systems” he was referring to the organizational structures in health, education, social care and juvenile justice within which programs and other services sit. I don’t know of any publisher who sells products that change systems.

“Suddenly we saw we might be able to make a difference”

It’s easy to see why the journey I caricature here is so appealing to the eminent prevention scientists who have bought into it, but also, with hindsight, why it is so flawed. “You have to remember,” my US colleague told me, “thirty years ago we had people saying quite seriously that ‘nothing works’ when it comes to preventing or addressing behavioral, emotional and drug-related problems among young people.

“When we academics started to do experiments and to find effects, we were suddenly in the precariously exciting position of being able to make a difference in the real world. But the people doing that knew nothing about bureaucracy.
“So, ten years ago it seemed very attractive to have a publisher’s 50-plus-strong marketing department selling these programs. Now we know that they can market products, and they can market to systems people, but they can’t market systems to systems people. They do not have a strategy for distributing things that are more complex, for instance, because of a requirement for extensive staff training.”

There are two salutary reminders here for those seeking means of promoting the widespread implementation of evidence-based programs. First, it is important to find out if something works before trying to scale up. “You need efficacy trials before trying to force something down people’s budgets.” Second, you can never have scientific control of a “product”.
