In 1997, when a newly elected UK Labour government set out to champion evidence-based policy and practice, it seemed that the relationship between evidence and the production and delivery of public policy might become linear, rational and transparent. There was already a solid evidence base to support the big investment in Sure Start. In future, the newly established national institutes for health and social care (NICE and SCIE) would simply be able to harvest high-quality evidence, meta-analyze it and publish guidance for best practice. Surely practitioners would adopt the guidance with a grateful smile.

Ironically, these assumptions about "evidence" ignored convincing evidence from another quarter suggesting that, in reality, the relationship between research and policy is complicated by a brew of mediating factors bubbling up from the murky world of day-to-day politics. Not to mention the ground conditions in those "swampy lowlands" of everyday practice, where practitioners habitually draw on tacit and experiential knowledge to supplement whatever they might acquire from the academy or laboratory.

A new edition of the Journal of Children's Services, edited by Professors Nick Gould and Ian Butler from the University of Bath, UK, examines the nature, quality and use of evidence in the development of children's services. Their starting point is that children's services bring together people from a range of professional and disciplinary traditions, occupational cultures and political orientations. They carry with them contrasting experiences of using evidence and different ideas about its ingredients, value and usefulness. Just as there is a variety of policies, so there is a variety of "evidences".

Nancy Cartwright, a philosopher from the London School of Economics, points to fundamental questions that any adequate theory of evidence must address: how can the concept of evidence go beyond the randomized controlled trial without sacrificing rigor?
How can the relative utility of evidence from other methods of investigation be assessed? How can evidence produced from different perspectives be combined?

In their article on research into the implementation of a new information system in statutory children's services, Ian Shaw and Jasmine Clayden, from the University of York, argue that the very methods of data collection embodied in the seemingly neutral medium of an information management system privilege and legitimate a particular conception of evidence and its relationship to practice. They go on to make the case not for retrieving any lost, golden age of "authentic" social work practice, but for more open acknowledgment of how policy works, and for recognition that barefoot practitioners improvise, adapt and interpret systems to produce and consume knowledge.

Nick Midgley, from the Anna Freud Centre in London, meanwhile examines what he calls the "implementation gap" between evidence-based practice and evidence-based practitioners in children's services. He identifies two camps: the "disseminators", who continue to believe that persistence in placing the pearls of evidence before swinish practitioners will eventually pay off, and the "revisionists", who take a more radical approach to rebuilding the relationship.

Ray Jones, from Kingston University, UK, continues the themes of complexity and incrementalism in an overview of the development of UK childcare legislation during the last 60 years.
He shows how change can rarely, if ever, be explained in terms of a single event or driver; certainly it cannot be reduced to the impact of any specific "breakthrough" development. Amidst the "noise" in the processes of reform, generated for example by lobbying, scandals about poor provision, emerging anomalies within legislation and wider changes in social values, Jones nevertheless detects a growing impact of research and information.

Finally, from the University of Colorado, criminologist and sociologist Del Elliott considers lessons learned from the Columbine High School massacre. He points to the failure to implement effective violence prevention programs in schools, and explains how the adoption of proven interventions is often trumped by local political factors, expediency and the failure of implementers to grasp the relative merits and robustness of competing forms of evidence.

Contributors to the special edition do not reflect any consensus. However, three overall messages emerge.