evidence-based policy

International Evaluation Conference

The video of my guest appearance at the Polish Agency for Enterprise Development's International Evaluation Conference is now available.

I also wrote a summary of my position on the future of evaluation, which can be found in an earlier blog post: XII International Evaluation Conference

Key parts are:

57:28 Erosion of Experts, Ethnography and Evaluation
59:54 Rethinking how we present data
1:00:00 This isn’t evaluator bashing – evaluators do a really good job!
1:00:01 We need a systems approach to evaluation
1:01:41 UKES’ Voluntary Peer Review scheme
1:02:52 Using Evaluation
1:04 The Evidence Base
1:14:40 Should we quit EB approaches?
1:16:45-1:20 Being strategic in evaluation (whilst not quite fulfilling the question!)
1:38:36 Conclusion

XII International Evaluation Conference 

It was a great pleasure to speak at the opening of the XII International Evaluation Conference yesterday (21st June 2017), at the invitation of the Polish Ministry of Economic Development and the Polish Agency for Enterprise Development.

I had been asked to speak about the future of evaluation and evidence-based policy; a summary of what I discussed is outlined below.

Challenges currently facing evaluators

The Death of Experts. During the UK ‘Brexit’ referendum campaign there were several examples of expertise being undermined by both the general public and some politicians (e.g. Michael Gove: “people in this country have had enough of experts”). At the same time, we are seeing evidence, knowledge and expertise shared via blogs and websites, and arguably being utilised at a greater rate than more formal evidence mechanisms (evaluation reports, academic publications), despite no guarantee of their quality. Evaluators have long struggled for legitimacy, and this erosion of the expert role complicates that challenge further.

What does the erosion of expertise mean for evaluation evidence and its future use?
How should evaluators respond to the erosion of expertise and further challenges to their legitimacy?

Changing consumption of data. Big data is trending right now and evaluation is following suit (Lou Davina-Stouffs of Nesta UK shared an example from Wales, where data is being scraped for indications of SME development and improvement). Evaluators and policy makers may need to approach such data with caution. Similarly, we are seeing strength in storytelling, and rich qualitative methodologies may help capture this. This is not about polarising qualitative (ethnography, interviews) and quantitative (big data) approaches; rather, evaluators need to move with these changing methodologies and embrace technology to do so. We are also seeing audio-visual presentation used as a valuable means of disseminating evaluation findings, anecdotally reported to be better consumed than written evaluation reports. This Twitter post demonstrates the point:

Source: @evaluationmaven

 

Mariana Hristcheva (Director-General of Regional and Urban Policy, European Commission) referred to this point later in the conference, noting that evaluation studies of European Union funded cohesion policy initiatives are now beginning to be produced in video format.

Anyone can evaluate. Many evaluation practitioners around the world are engaged in evaluation societies, networks and professional development, debating how to advance evaluation practice; yet many are not (no data are available to show what proportion of evaluators are engaged or disengaged in such practice development). I am preaching to the converted: it is those who are disengaged and absent from practice development (professional or methodological) whom we ought to worry about. We know the quality of evaluation varies vastly, so this lack of oversight and governance is unhelpful.

Evaluation societies such as the UK Evaluation Society (UKES), the American Evaluation Association (AEA) and the European Evaluation Society (EES) are working to professionalise evaluation, and we can point to capability frameworks, guidelines for practice and, more recently, voluntary peer review scheme pilots (EES, UKES). The European Commission is also taking a more proactive role in enhancing its evaluation studies, offering summer schools in evaluation practice.

Later in the conference, Wolfgang Meyer highlighted the absence of renowned European evaluation scholars and theorists. US scholars from the 1980s still dominate our academic contributions, and this is unhelpful to the education of evaluators and to evaluation capacity-building in Europe.

Providing a consistent evaluation experience across the industry remains a challenge.

Should we abandon evidence-based policy?

The evidence-based policy approach continues to be challenged across numerous sectors. No, I do not think it should be abandoned – to do so would be to abandon the very notion of evidence and knowledge and their transformative potential in policy development. However, the very fact that it is being challenged might prompt us to craft a new narrative to sit behind it, and to explore further the issues we face in practising it.

The recent work of Newman, Cherney and Head (2017) detailed the results of a study of over 2,000 Australian public servants. It found that almost 60% used e-databases to search for academic abstracts, articles and reports, and just over 60% had used academic work in reports over the preceding 12 months. This suggests that the evidence base is being consulted. Worryingly, though, 75% of the same respondents did not feel they had the expertise to apply the results – one possible avenue for improving the use of evidence.

Redefining EBP is unlikely to help us overcome the barriers to its effectiveness; pushing a clear agenda that engages all parties in systematically addressing the challenges it faces might. Our struggles as evaluators to transform policy need closer inspection. We are likely to struggle to prevent evidence being manipulated for political gain, but we probably can address the cultural differences between evaluators and the consumers of evidence, seen for instance in long-standing remarks about the presentation of evaluation findings. Evaluators alone cannot solve this; it needs a systems approach involving policy makers, public servants, funders where relevant, and evaluators.

To abandon EBP would be to abandon faith in knowledge, learning and improvement (Wond, 2017)

But we can’t afford to wait for data, can we?

This timeliness challenge is never going to cease; policy-making and evaluation evidence will always struggle to synchronise, but the tension can be mitigated to some extent. I can’t imagine a time when society will pause to ponder the evaluation reports of the previous initiative before proceeding to the next, and I do wonder whether evaluation has to step back and consider a longer-term role for itself instead. Seeing evaluation as a longer-term game may actually be helpful; for instance, we can reflect on the way evaluation is funded (short-term funding does not support us in establishing longer-term impact) and whether this fits.