Bent Greve, the editor, explains that this book is intended to cover “methodologies, cases and how and when to make evaluation in social policy, and its possible impacts”. Part 1 offers six chapters on the techniques; the seven chapters of part 2 mainly discuss the evidence on sixteen policy areas (though two of them are really case studies rather than discussions of that sort); and part 3 has ten further chapters of case studies. That means that half this book is about evaluation and evidence, and half is discussion of specific policies. The more interesting case studies, to my mind, were those on medicines, long-term care and interventions for children, but none is exceptional. A couple of the other case studies are discussions of policy, not centrally about evidence or evaluation at all. This does not add up either to a book on evaluation, or to a handbook for people who want to do one.
The real place to start, unfortunately, is the price. This book, 520 pages plus index, costs £195 – that’s not a misprint. (NB: Since I posted this review, the publishers have told me there is a cheaper e-version at £36 on Google Play.) The price might be defended for a library purchase if this were an irreplaceable, essential read, but it’s hard for any collection of papers to meet that standard. Like most collections, it has a mix of good and not so good, and there’s a lot of competition out there. On evaluation, Parsons’s short Demystifying Evaluation (Policy Press, 2017) is a relatively accessible and practical guide. I still like Michael Scriven’s Evaluation Thesaurus, even if it was published in 1991; his evaluation checklist is available free online. On evidence and policy, there have been several recent books; I’d recommend Justin Parkhurst’s The politics of evidence (Routledge, 2016), an insightful and well-written book that explains a lot of what’s wrong with the main approach in this one. Parkhurst’s book is available in a free Open Access version.
There’s other stuff out there for free, too. The World Bank offers a virtual library of methods and case studies in evaluation, along with guides such as P. Gertler et al., Impact evaluation in practice (World Bank, 2011). There are problems with the World Bank’s approach; it’s technocratic and liable to be uncritical. The main objections are
- theoretical: evaluations, especially RCTs, often look at the wrong things in the wrong way;
- political: aims, methods and outcomes cannot be understood in a political vacuum;
- statistical: the variables are interdependent, the methods are heavily sensitive to the statistical assumptions, and the results are unlikely to be replicable; and
- evidential: the available data don’t support the main methods. Garbage in, garbage out.
Some of the writers for the Handbook refer to some of these points, but neither the criticisms nor the practicalities are dealt with thoroughly. A reader has to dig for the objections, because they are raised only in the context of the paper where they occur, not systematically – you have to get through a hundred pages, well into the chapter on systematic reviews, before you even catch the sense that the “evidence hierarchy” introduced on page 6 is contentious. There may be room for a state-of-the-art review of evaluation and policy, but this isn’t it.