
International Evaluation Conference

The video from my guest appearance at the Polish Agency for Enterprise Development’s International Evaluation Conference is now available.

I also wrote a summary of my position on the future of evaluation which can be found in an earlier blog post here: XII International Evaluation Conference 

Key parts are:

57:28 Erosion of Experts, Ethnography and Evaluation
59:54 Rethinking how we present data
1:00:00 This isn’t evaluator bashing – evaluators do a really good job!
1:00:01 We need a systems approach to evaluation
1:01:41 UKES’ Voluntary Peer Review scheme
1:02:52 Using Evaluation
1:04 The Evidence Base
1:14:40 Should we quit EB approaches?
1:16:45-1:20 Being strategic in evaluation (whilst not quite fulfilling the question!)
1:38:36 Conclusion

XII International Evaluation Conference 

It was a great pleasure to speak yesterday (21st June 2017) at the opening of the XII International Evaluation Conference, by invitation of the Polish Ministry of Economic Development and the Polish Agency for Enterprise Development.

I had been asked to speak about the future of evaluation and evidence-based policy; a summary of what I discussed is outlined below.

Challenges facing evaluators at the current time

The Death of Experts. During the UK ‘Brexit’ referendum campaign there were several examples of expertise being undermined by both the general public and some politicians (e.g. Michael Gove: “people in this country have had enough of experts”). At the same time, we are seeing evidence, knowledge and expertise shared via internet blogs and websites, and arguably being utilised at a greater rate than more formal evidence mechanisms (evaluation reports, academic publications) – despite no guarantee of their quality. Evaluators have long struggled for legitimacy, and this erosion of the expert role complicates that challenge further.

What does the erosion of expertise mean for evaluation evidence and its future use?
How should evaluators respond to the erosion of expertise and further challenges to their legitimacy?

Changing consumption of data. Big data is trending right now and evaluation is following suit (Lou Davina-Stouffs of Nesta UK shared an example from Wales, UK, where data is being scraped to look for indications of SME development/improvement). Evaluators and policy makers may need to approach such data with caution. Similarly, we are seeing strength in storytelling; rich qualitative methodologies may help capture this. This is not about polarising qualitative (ethnography, interviews) and quantitative (big data) approaches – but evaluators need to move with these changing methodologies and embrace technology to do so. We are also seeing audio-visual presentation used as a valuable means of disseminating evaluation findings, anecdotally reported to be consumed more readily than written evaluation reports. This Twitter post demonstrates the point:

Source: @evaluationmaven

 

Mariana Hristcheva (Director-General of Urban and Regional Policy, European Commission) referred to this point later in the conference, and noted that evaluation studies of European Union funded cohesion policy initiatives are now beginning to be produced in video format.

Anyone can evaluate. Many evaluation practitioners around the world are engaged in evaluation societies, networks and professional development, debating how to advance evaluation practice; yet many are not (there is no data available to show what proportion of evaluators are engaged or disengaged in such practice development). I am preaching to the converted: it is those who are disengaged and absent from practice development (professional or methodological) whom we ought to worry about. We know the quality of evaluation varies vastly, so this lack of oversight and governance is unhelpful.

Evaluation societies such as the UK Evaluation Society (UKES), the American Evaluation Association (AEA) and the European Evaluation Society (EES) are working to professionalise evaluation, and we can point to capability frameworks, guidelines for practice and, more recently, voluntary peer review scheme pilots (EES, UKES). The European Commission is also taking a more proactive role in enhancing its evaluation studies, offering summer schools in evaluation practice.

Later at the conference, Wolfgang Meyer highlighted the absence of renowned European evaluation scholars and theorists. US scholars from the 1980s still dominate our academic contributions, and this is unhelpful to the education of evaluators and to evaluation capacity-building in Europe.

Providing a consistent evaluation experience across the industry remains a challenge.

Should we abandon evidence-based policy?

The evidence-based policy approach continues to be challenged across numerous sectors. No, I do not think it should be abandoned – to do so would be to abandon the very notion of evidence and knowledge and their transformative potential in policy development. However, the very fact that it is being challenged might prompt us to craft a new narrative to sit behind it, and to explore further the issues we face in practising it.

Recent work by Newman, Cherney and Head (2017) detailed the results of a study of over 2,000 Australian public servants, finding that almost 60% used e-databases to search for academic abstracts, articles and reports, and just over 60% had used academic work in reports over the preceding 12 months. This suggests that the evidence base is being consulted. Worryingly, though, 75% of the same respondents did not feel they had the expertise to apply the results – one possible avenue for improving the use of evidence.

Redefining EBP is unlikely to help us overcome the barriers to its effectiveness; pushing a clear agenda that engages all parties in systematically addressing the challenges it faces might. Our struggles as evaluators to transform policy need closer inspection. We are likely to struggle to prevent evidence being manipulated for political gain, but we probably can address the cultural differences between evaluators and the consumers of evidence, seen for instance in long-standing remarks about the presentation of evaluation findings. Evaluators alone cannot solve this; it needs a systems approach involving policy makers, public servants, funders where relevant, and evaluators.

To abandon EBP would be to abandon faith in knowledge, learning and improvement (Wond, 2017)

But we can’t afford to wait for data, can we?

This timeliness challenge is never going to cease; policy and evaluation will always struggle to synchronise, but the tension can be mitigated to some extent. I can’t imagine a time when society will stop to ponder the evaluation reports of the previous initiative before proceeding to the next, and I do wonder whether evaluation has to step back and consider a longer-term role for itself instead. Seeing evaluation as a longer-term game may actually be helpful: for instance, we can reflect on the way evaluation is funded (short-term funding does not support us in establishing longer-term impact) and whether this fits.

 

Trust in Evaluation: Just Published

My most recent peer-reviewed paper is now available in the ‘International Journal of Public Administration’, at: http://www.tandfonline.com/eprint/fC7UbGSJBEdsxJ99bADp/full (available to the first 50 readers).

“Trust Matters: Distrust in an External Evaluation of a Public Sector Program” explores how distrust can emerge within the (external, programme) evaluation relationship. This relationship can be a challenge for evaluators and yet has been relatively under-explored.

The paper was a challenge as it was based on an auto-ethnographic methodology. I do believe that this sort of methodology can be useful, presenting a warts-and-all account of phenomena and really digging deep into the possible reasons (in turn prompting further research questions). However, it can feel and sound a little self-indulgent – the second reviewer took some convincing.

Emerging from the paper are:

  • …the notion of distrust manifesting itself in evaluation in two ways: through discourse and through action.
  • …confirmation that evaluation can be vulnerable to trust issues. Previous work has identified conflict and uncertainty within the evaluation setting, but besides Nigel Norris, few researchers have recognised trust as an issue.
  • …the perpetuation of meta-evaluative debate, something I think we need in order to develop evaluation practice and to ensure/enhance its effectiveness.
  • …further research potential around trust and other aspects of organisational behaviour that present themselves within evaluation (this links with my current work exploring anxiety in evaluation, and links to XEA – Excessive Evaluation Anxiety (Donaldson, 2002)).

I am happy to discuss operating in an evaluation environment, maximising evaluation effectiveness, or auto-ethnographic methodology further (with researchers, evaluators or funders).

Tracey

An Introduction to Evaluation Part I: The Formative/Summative Dualism

Scriven’s (1980) formative and summative dichotomy is arguably the most accepted and renowned evaluation typology. One could suppose that the formative/summative dichotomy is therefore fundamental to understanding evaluation. In this post, the concepts of formative and summative evaluation will be explored.

The label ‘formative evaluation’ refers to an approach in which evaluation focuses on improving a programme, essentially giving the evaluator an interactive (rather than independent) role (Herman et al, 1987). Reporting occurs throughout the evaluation (rather than at the end or at a single point), meaning closer and more sustained contact with the programme (Clarke, 1999). Summative evaluation, by contrast, focuses on formally reporting findings at a certain point in time.

Prescott et al (2002) emphasise that the two approaches are separate and distinct processes. According to Patton (1996), the formative/summative dichotomy ‘captures the entire array of evaluation purposes’, as it suggests that ‘anything that is not formative is summative’.

There is much criticism of the summative/formative dichotomy in modern evaluation: ‘the world of evaluation has grown larger than the boundaries of formative and summative evaluation’ (Patton, 1996). Debate also exists over which of the two approaches is most vital: Cronbach (1966) suggests that formative evaluation is more important than summative evaluation, whereas Scriven (1967) notes strengths of the summative approach. Patton (1996) also suggests that ‘formative evaluation rests in the shadow of summative’.

At the simplest level, the dichotomy offers a labelling system for categorising evaluation. Indeed, various labels are applied to evaluation (Scriven’s dichotomy included). McKie (2003) finds that evaluation labels and typologies can exclude those who are not familiar with the terms (for example, stakeholders within the programme being evaluated). Further, Ussher and Earl (2010) suggest that the terms summative and formative can be confusing; however, they do note some value in applying labels: ‘the identification, definition and consistent use of specific labels are useful for developing understanding and communicating with others’.

Where evaluation is dual-level, that is, required at both national and local levels (Allen and Black, 2006), both summative and formative types could still be utilised. A formative approach could be applied to the local-level evaluation, with an increased likelihood of the evaluation being used to generate improvements (Herman et al, 1987). A summative approach might be used for the national-level evaluation, where the likely audience is policymakers, funders and the public (Herman et al, 1987).

This post has explored formative and summative evaluation (in the context of public programmes). I will save some of the more specific debates about evaluation’s purpose, for instance evaluator independence (should an evaluator be assisting a programme to improve through formative evaluation?), for another post.

Part II of the Introduction to Evaluation series (available soon) will consider the history of evaluation.

Power and Evaluation

It could be argued that research on power and evaluation is still relatively underdeveloped – for instance, Pawson and Tilley (1997, p.20) highlight a ‘failure to appreciate the asymmetries of power’ within the evaluation context. Yet many of the issues in evaluation (utilisation, access) appear to arise from issues relating to power. For this reason, I briefly want to consider Yukl’s, French and Raven’s, and Greene and Elfrers’ theories of power and apply them to the evaluation context.

Power

Burton and Thakur (1995, p.354) define power as ‘the ability or the potential ability of a person or a group to influence another person or group’. In this way there are two observable parties: the influenced and the influencers. French and Raven concur, stating that ‘the phenomena of power and influence involve a dyadic relation between two agents’ (2001, p.61). Abma and Widdershoven’s definition also suggests multiple parties in a power dynamic:

Every social relationship involves power. Power refers to the possibility of letting someone do something he or she would not do otherwise. Power is thus relational, and parties in a power relationship are tied to each other by mutual dependency (Abma and Widdershoven, 2008, p.212).

In the context of the evaluation relationship, the primary parties are the evaluator and the evaluated (initiative, project, authority); however, other parties also exist (for example, the community or beneficiaries).

Various origins and forms of power are presented within the available literature. French and Raven (1959, in Moorhead and Griffin, 1989, p.358) identify five bases from which power can originate: ‘legitimate’, ‘coercive’, ‘reward’, ‘expert’ and ‘referent’. Similarly, Greene and Elfrers (1999, p.178) offer analogous categories but extend them to include ‘connection’ and ‘information’ power. Yukl (1998) offers three origins of power which broadly correlate with those proposed by French and Raven and by Greene and Elfrers: ‘position power’, ‘political power’ and ‘personal power’. The power bases typified by French and Raven, Greene and Elfrers, and Yukl are applicable to the relationships and interactions occurring through evaluation. The table below provides a summary of these power typologies and their relevance to evaluation.

Characteristic | Power type | Evaluation influence
Authority, delegated, formal, within the organisation | Legitimate power (Greene and Elfrers; French and Raven); position power (Yukl); political power (Yukl) | An evaluator can be given authority to exert power over others, perhaps by directing resources. Politically, evaluator knowledge and findings mean that their position is well regarded.
Control of sanctions | Coercive power (Greene and Elfrers; French and Raven); position power (Yukl) | An evaluation function can instigate the most punishing sanction if used for governance purposes: evaluation findings can ultimately prevent the continuation of a programme.
Control of rewards | Reward power (Greene and Elfrers; French and Raven); position power (Yukl) | The reporting of positive results may lead to rewards for the programme concerned.
Specialist skills/knowledge | Expert power (Greene and Elfrers; French and Raven); information power (Greene and Elfrers); personal power (Yukl) | The skills possessed and learned throughout the evaluation experience provide an evaluator with expert or information types of power.
Social need to be liked in order to influence others; personality | Referent power (Greene and Elfrers; French and Raven) | This may present difficulties for an evaluator, with issues of independence clashing with personal needs.
Networks, both internal and external | Connection power (Greene and Elfrers); political power (Yukl) | Evaluators should become well connected within the programme setting, maintaining good relationships with members of these networks. There are many power consequences if these relationships are maintained or, conversely, neglected.

Position Power and Evaluation

Yukl’s notion of ‘position’ power refers to the authority that is delegated to stakeholders within an organisation, mirroring French and Raven’s (1959) and Greene and Elfrers’ (1999) concepts of ‘legitimate’, ‘reward’ and ‘coercive’ power. Position power gives the possessor (in this case the evaluator) formal authority, status and control over operations and other stakeholders, to a specified extent.

In the case of an evaluator, there could be said to be position power, with their work being capable of influencing the continuation of a programme. This position power gives the evaluator formal authority to act on behalf of other stakeholders according to Clarke (1999, p.26), who asserts ‘evaluators have a moral responsibility to act as advocates for powerless stakeholder groups’.

There is also a risk that position power is wrongly exerted, creating friction and conflict amongst stakeholders (Weiss, 1972; Gordon, 1991; Davies, 1999; Sell et al, 2004; Abma and Widdershoven, 2008). Formalising the power structure, including where an evaluator sits within it, may be worthwhile; as such, position power may also refer to a role within a political structure (Ridgeway, 1991; Sell et al, 2004):

Within an organisation the allocation of status assists in operational and power structures. The formalisation of power through such a structure/status has been found to assist the authority of power players to be accepted (Sell et al, 2004, p.47).

Evaluators should not necessarily be seen as the subordinates of managers, commissioners or other stakeholders, and herein lies a potential power struggle which needs to be managed for a healthy stakeholder relationship. Poor clarity over the role and purpose of the evaluator only adds to this issue: ‘it is useful to realize that clients may have specific ideas about what evaluation means and how an evaluation is supposed to be conducted’ (Stecher and Davis, 1988, p.22):

The key question for any local evaluator remains the same: how does one engage with such networks in which issues of power and difference often remain unacknowledged? (Diamond, 2005, p.179)

Understandably, there is opportunity for an evaluator and another delegated power-player to disagree. An imbalance of ‘knowledge’ and power between funders, initiatives and participants may cause complications (McKie, 2003, p.321; Huberman, 1990; Lincoln, 1994). Clarke (1999, p.15) remedies this by suggesting that ‘it is essential at the outset that an evaluator obtains a clear understanding as to what the client requires from the evaluation and develops the evaluation research design accordingly’. The type of evaluation being conducted greatly influences the importance of power in the evaluator/stakeholder relationship.

Reward Power and Evaluation

Coercive and reward power allow the holder to impose sanctions or to entice subordinates with rewards. Coercion in a power context has been discussed by a host of exchange and political theorists (Lasswell and Kaplan, 1950; French and Raven, 1959; Molm, 1997; Sell et al, 2004). Generally (in the UK), bureaucratic mechanisms such as processes, procedures or legislation limit the allocation of rewards and sanctions (Burton and Thakur, 1995). The perceived legitimacy of coercive power may also affect its degree of acceptance: if it is felt that power has been exerted unfairly, conflict may occur.

A sense of forcing or getting another party to do a certain task, in a certain way or for certain purposes, pervades evaluation politics (Frohock, 1974; Mohan and Sullivan, 2006). It may be an evaluator attempting to have their findings used within the policy decision-making process, or programme staff trying to get evaluators to report favourably on the programme in question.

Expert Power 

Expert power concerns the specialist knowledge or skills of the holder, giving them power and control over others. Evaluators may be seen to have expert power, as experts in evaluation research. Burton and Thakur (1995, p.357) also explain that tension between the expert and managers can sometimes be overcome by ‘power sharing’: ‘when a manager is aware that subordinates possess significant experience, it is common for the manager to allow these subordinates the exercise of considerable power’.

But you forgot Referent and Connection Power! 

Referent power and connection power relate to relationships, networks and social needs. These are interesting areas and bring into play aspects of evaluator independence. Whilst I have researched and written about them a great deal, I would like to conduct more research to look at them in greater depth – watch this space.

Academics as evaluation experts

I have been pondering what makes an evaluation expert. It comes back a little to the age-old debate over academics versus practitioners, and has been sparked by some evaluation tenders that I have been looking at.

With a PhD that focused on evaluation practice, and experience of conducting and devising methodologies for several evaluations, I could be considered an evaluation expert. Yet, is this to say that a consultant with dozens of evaluations completed is any more, or less, of an expert?

  • Academics bring: research excellence, experience, knowledge from theory (likely more so than consultants), a concern for quality to protect the university’s reputation, extra resource through student researchers, other commitments to their institution, and strong links that can often provide access to experts in other areas.
  • Consultants bring: research knowledge, evaluation experience (perhaps more so than academics), other commitments to their other clients, and business acumen.

Finding your evaluation expert may mean looking to a university, or it may not.

For anyone requiring an evaluation I would suggest:

  • Do consider the benefits of the various types of evaluators (self-employed consultants, larger private consultancies, universities) and discuss with them their research approach to ensure that it fits your needs;
  • Do ask for references or examples of work previously conducted. This might not just be evaluation research; there are some very strong academics and researchers who may not be experienced in evaluation but who can apply research principles effectively;
  • Do try to involve your local university for advice or when producing a tender for the research work (not necessarily the evaluation itself); universities are strong in research, and evaluation is research (I am happy to work with organisations to ensure that a strong evaluation brief is created);
  • Do consider alternative options; perhaps discuss how a student could conduct your evaluation. A PhD studentship might be one solution, meaning that you sponsor a student to complete their PhD study and, in return, gain a research-savvy individual who will conduct your evaluation research over a period to suit you (around three years).