Updating the DAC Evaluation Criteria, Part 7. Let these ideas die!

The many discussions I have recently had about the effort now underway to arrive at DAC evaluation criteria 2.0 have prompted me to write a few more posts in this series. They will also serve as a bridge between the earlier series and my new posts on Evaluation for Transformation.

In May 2018 I wove my keynote presentation at the UK Evaluation Society Conference around the notion that convention can hold us back without us noticing the detrimental effect of outdated ideas on what we do; some ideas have to die to accelerate the evolution of a field, especially when new circumstances demand (sometimes drastically) new ways of doing and thinking.

Can we agree on some such ideas in the field of evaluation? And should we let them die a slow death through natural evolution, or hasten their demise?

This came to mind as I was reading the delightful book This Idea Must Die: Scientific Theories That Are Blocking Progress, to which I referred in a 2016 post where I argued that we need the same type of book for evaluation. In chapters of 2-4 pages, each of the renowned authors provokes by tackling widely accepted conventions in science, through topics such as the “power of statistics”; the “myth of cognitive agency”; “large RCTs”; “statistical independence”; “bias is always bad”; “cause and effect”; “absolute truth”; “evidence-based medicine”; “things are either true or false”; and “the magic of big data”.

The book has well over a hundred of these very short chapters, many relevant to our own field of evaluation or M&E / MEL / MERL / MEAL / MERAL, as we tend to call our practice in the South. Good food for thought for the global evaluation community.

Ideas in evaluation that are dying too slowly

Some outdated ideas in evaluation are almost dead – for example, the need to choose between qualitative and quantitative designs and/or methods. But most are dying too slowly. The following immediately come to mind as particularly irritating, because there is so much evidence that should point us in other directions:

One, that randomized controlled trials (RCTs) sit at the top of an “evidence hierarchy” – an idea that is taking too long to die, as ITAD and many other very credible authors point out.

Two, that evaluation is primarily about “programme evaluation”, or that individual projects and programmes should continue to be the most prominent evaluands. Evaluation should be conceptualised in terms of “supporting development” at national, regional and/or global level – now more important than ever given that the whole world has to “develop” in line with the 2030 Agenda for Sustainable Development.

Three, that “impact” is meaningful without any attempt to discover (i) whether there is a fair chance that positive impacts will be sustained or enable further (positive) ripples, and (ii) whether negative impacts of interventions or negative consequences of actions might be neutralising those perceived as positive.

Four, that one can get “best practice” and “replication” in highly complex social and societal contexts.

Five, that we can continue to make evaluative judgments across cultures and contexts without making clear the reasoning and values behind such judgments.

I am sure we can all add several more that are already dying, yet not fast enough for the field to move on with vigour. This could be an interesting exercise, one that is sure to prompt good debate about the merits of the arguments involved.

Ideas that are not yet dying, but probably should

Among the long list of ideas that I hope will still die in my lifetime, but are not (yet) doing so, are the following three that I feel particularly strongly about (they relate directly to the two listed in the section that follows):

One, that only stakeholders’ interests and needs should determine our evaluation questions and criteria (as discussed earlier in this series of posts).

Two, that dominant theories, models and narratives around development are superior to others and therefore should direct how we think about change when we develop our theories of change and evaluate for impact. These usually originate in highly regarded universities and think-tanks in North America and Europe, often perpetuated through a powerful system of multilateral organisations – as demonstrated by examples around economic growth (e.g. neoliberalism), institution building, capacity strengthening and so on.

Three, that evaluation consists primarily of “ex ante”, “process” and “impact” evaluation. Yet evaluation can be about so much more than this, and can be conceptualised very differently given the many different priorities (e.g. transformational change), approaches (e.g. developmental evaluation; the emerging practice of outcome harvesting) and types of use (see Michael Patton’s list of around 12 types of use in the 4th edition of Utilization-Focused Evaluation) that we have to attend to.

I hope to explain in later posts why I consider these three ideas to be in dire need of being killed off.

“New” ideas that have to die – and quickly

Two very important interconnected ideas are emerging that I would like to see die quickly if we want to advance in line with the demands of an era where the SDGs and the Fourth Industrial Revolution intersect:

First, that complexity is “too complex”, so it is best to ignore it – in other words, that we continue to fail to recognise the importance of working with systems and complexity because it is “too difficult” to do so. Or that we can brush it off as being only about “experimentation”, “adaptive management” or “adaptive learning” – shifts that are already quite difficult to make within rigid management and evaluation systems that prize results-based management, logframes and linear, frequently overly simplistic theories of change.

Second, that (systems) transformation is a buzzword, soon to be forgotten – in other words, that we do not need to think about transformational change when we evaluate. Quite the contrary: we have to accept the need for transformational change, and we can use evaluation to highlight its importance. This is relevant not only in the economically poorer countries and regions of the Global South, where the need for transformation is obvious, but also in the economically rich countries of the Global North, which have to make significant changes to production and consumption patterns and systems to stop exceeding what our planetary boundaries can tolerate. In theory at least, we are in this together.

What does this mean for our practice?

Do we agree that these two ideas have to “die”? If so, we will also have to accept that we have to Do Evaluation Differently, and that our global evaluation system – by that I mean the evaluation architecture, institutions, values, relationships, protocols, etc. within which we all operate – should adjust accordingly. This is difficult, as those who pay for, commission and do evaluation often have entrenched interests and also might have good reasons for not changing, such as resource and capacity constraints. And since our evaluation system – like almost everything in life – rests largely on power relations and hence on the maintenance or rebalancing of power asymmetries, shifts cannot happen easily without the collaboration of those “points of power” who determine and influence the system.

This is why the DAC evaluation criteria discussions are so important. The criteria are a key part of conventional evaluation practice, in particular among aid-dependent countries in the Global South, whose governments readily follow. In the DAC criteria revision effort it will be easy to suggest (small) adjustments to the existing criteria. Instead, we have to start by asking what fundamentally justifies, frames and drives their use, as I highlighted in post 3 of this series.

In my next post I will demonstrate the importance of totally rethinking what frames our evaluation criteria by using one potential criterion that receives no attention because it is difficult to conceptualise, namely Significance.

In the meantime, join the thinking about which ideas in evaluation convention you would like to see disappear – and why.


Zenda Ofir

Zenda Ofir is an independent South African evaluator. Based near Geneva, she works across Africa and Asia. A former AfrEA President, IOCE Vice-President and AEA Board member, she is at present IDEAS Vice-President, Lead Steward of the SDG Transformations Forum A&E Working Group and Honorary Professor at Stellenbosch University.

One comment

Hi Zenda,

I love the title (and content!) of this blog post. It is the sort of thinking that should drive us all – thinking beyond the status quo and examining our reality every day.
