The DAC criteria, Part 6. Key issues before deciding


This final post about the DAC criteria captures key points to consider when evaluating for development. It is also relevant for the interface between humanitarian work and development – an increasingly important area of work.

Refined DAC criteria, or something completely new?

This question needs careful consideration. For me, five issues stand out that have to be vigorously debated before any final decision is made:

First, we should not lose the value they have added, and can still add, to evaluation. As Bob Picciotto pointed out in a comment on Part 5 of this series, discarding the criteria entirely might give up hard-won recognition of the importance of such criteria for development evaluation, and encourage poorer quality, less appropriate evaluations. I also noted in Part 1 of this series that the DAC criteria have made several other important contributions to evaluation practice. They have definitely drawn attention to five key aspects that are important when evaluating for development – what Osvaldo Feinstein called their “signalling value”. I don’t think there would have been much emphasis on negative impacts or sustainability without them (although ironically these two criteria are also among the least well applied). These arguments make their revision or refinement attractive; we may lose too much by discarding them.

Second, we do not have enough systematic evidence of their utility and significance. On the other hand, Bob Williams shares my own belief that the DAC criteria have made many of us lazy thinkers about the identification and utility of criteria. He is also correct to question, in the absence of solid evidence, whether they have actually been useful and valuable enough to justify their continued existence. Ideally, we need systematic, nuanced and unbiased evidence to substantiate their value, justify any revision effort, and help ensure that we address current weaknesses.

Third, they are more or less useful for different types of evaluands and purposes. There is a myth that the set of DAC criteria is relevant only for “interventions” – policies, programmes and projects. This would be problematic, especially as the demands of the SDGs are accelerating the move to other types of evaluands. But with some changes in nuance, the DAC criteria can also be used selectively across quite a number of evaluands, and for different purposes (assessment of design, implementation, results, etc.). Many of us moved far beyond the evaluation of “interventions” years ago. I have personally often used the DAC criteria across other types of evaluands, working with good evaluation commissioners to make sure they were necessary, or were appropriately refined – and very often complemented by other questions and criteria. I have used them, or seen them used in some form or other, in the evaluation of thematic areas, institutional systems, portfolios of interventions, global conventions, knowledge products, advocacy efforts, peace processes, relationships (partnerships, networks, etc.) and more. The most recent example: I am currently an international advisor for the evaluation of IFAD’s financial architecture, where four of the five DAC criteria were usefully applied. Yet it is also clear that they are not sufficient or even appropriate for all types of evaluands, and certainly cannot be applied blindly.

Fourth, we may need something completely new. Michael Quinn Patton’s recent work on Principles-Focused Evaluation provides an alternative approach that could allow us to view criteria from a different angle, and make the use of a specific set undesirable. We have to consider new ways of looking at our work that will make it more meaningful and useful.

Fifth, we need criteria (or something) that compel us to account for the complex systems nature of development. I disagree with those who say that this is not an important issue in this context; I believe it is vital to our thinking about evaluation questions and criteria. Before anyone argues to the contrary, I suggest first reading the book Aid on the Edge of Chaos by Ben Ramalingam. What he espouses is not only relevant in an aid context, but for any development planning and execution efforts – and hence for the evaluation of plans, implementation and results that are supposed to ensure that a societal grouping, country, region or the planet as a whole has a positive development trajectory.

In any case, this is a book that should be read by any evaluator with an interest in working in the Global South, where all the LICs and LMICs (and most UMICs) are located, and where the work of both development planners and evaluators is far more challenging – and highly dependent on understanding the implications of development viewed through a complex systems lens.

Back to development and evaluation viewed from a complex systems perspective

If we evaluate for development, we have to acknowledge the reality, and hence the implications, of the ‘interconnectedness of things’, and the interactions between goals, interventions, actions, events, etc., as discussed in my post and article here and here – and of course confirmed by the 2030 Agenda, and very well illustrated in the excellent papers published on the topic by ICSU.

From this perspective, and contrary to convention, I cannot see how we can leave the questions and criteria to be determined by stakeholders alone. I do not have reason to think that stakeholders will always be interested in crucial aspects of ‘development’ that have to be assessed if we are to understand success, or progress towards success. Similarly, I do not think that evaluators and evaluation commissioners will have the power or inclination to ensure consistently that negotiations around questions and criteria focus on such issues.

It is time that we consider this line of argument very carefully when we think about evaluation for development.

This has important implications when evaluating design, implementation and results. It is not enough just to focus on adaptive management and experimentation. We have to focus on the quality and extent of improvisation. We also have to assess development plans, implementation and results to determine if there is, or has been, sufficient coherence, complementarity and alignment. We have to consider synergistic effects. We have to search for possible negative consequences, impacts or trade-offs that may deplete, or might have depleted, positive impacts. We have to understand more about catalytic effects, tipping points, transformative change, and what is essential to know when we try to ‘scale’. We have to get to grips with the important connection between impact and its sustainability. We have to figure out if something has been done to release the energy in society, or in a certain grouping. We have to understand more about the patterns in societies and ecosystems so that we can understand more about important dispositions, and predispositions, to change. We have to think about trajectories from a systems perspective, not just about snapshots in time.

And even if we cannot yet cope with all these issues, we have to work towards being able to do so. This has important implications for our evaluation questions and criteria. And this is exactly where we will see the extent of value added as well as the current limits of our practice.

Proposals and dilemmas

In parts 3 to 5 – and especially part 5 of this series - I proposed that if we want to evaluate more effectively for development, we have to thoughtfully tailor our questions per evaluation and refine our criteria by drawing from three categories of criteria, of which one is not negotiable - our ‘core’ set - and another only partly negotiable.

[Image: DAC Part 6 – the three categories of criteria]

With slight adjustments in definitions and descriptions, most of the criteria mentioned here can be applied across diverse types of evaluations and evaluands. Rubrics need to make our values and yardsticks explicit.

We need to do much more to shift our evaluation practice away from our obsession with impact in isolation from everything else, and back to evaluating much more smartly for appropriate design and implementation, and for pathways to success.

We face several dilemmas. Of course, we cannot work with so many criteria. We have to prioritise. We have too few resources for comprehensive evaluations. We fear that things will get too complicated for implementers and evaluators; many also fear any notion of ‘complexity’. We tend to have a firm belief in the wisdom of ‘stakeholders’. We face methodological constraints. And there is inertia in the system that comes with comfort zones.

We also need a credible process in place for any review effort. Much has changed in the last 20 years since the DAC criteria became widely used. The 2030 Agenda concedes that ‘development’ is now necessary in all countries. The community of evaluators has gone global. Geopolitical and economic power has shifted, and the Global South wants a strong, equal voice in matters that matter, including in processes that will influence the global evaluation system. We still have to figure out what this means for any criteria reform process. Or even whether there should be such a process.

With thanks to visionary colleagues

I want to thank two very committed and visionary colleagues for setting in motion our rethinking of the DAC criteria:

Caroline Heider, the Director-General of the Independent Evaluation Group of the World Bank, started this discussion with her insightful engagement with the DAC criteria.

I am particularly indebted to Indran Naidoo, Director of the Independent Evaluation Office of UNDP, who commissioned and inspired the work that led to a paper and this series of posts as part of his ongoing efforts to improve the quality of UNDP’s evaluation function.

I believe that the debates around the DAC evaluation criteria are part of what we should be thinking about in Doing Evaluation Differently. Exciting times are ahead: no doubt rethinking and at least in some aspects, reshaping our evaluation practice will gather momentum in 2018.


8 thoughts on “The DAC criteria, Part 6. Key issues before deciding”

  1. I am a fond reader of this series, and in today’s post I liked the ‘implications’ paragraph, as it shows what we have to think of and act upon (e.g. scalability factors, which are a special interest of mine).

    1. John
      Thank you for highlighting the implications. I do believe we need to continue to consider quite deeply what insights from complex systems thinking mean for our practice – and many are beginning to do just that. From this perspective, we are in an exciting era for evaluation. There is also increasing attention on scaling in particular, but I don’t think much of this work considers carefully enough what complex systems thinking means for the concept.

  2. Michael Quinn Patton

    Zenda, Thank you for pulling together your many years of experience and global evaluation engagements to illuminate the path forward for the profession internationally. I join you in believing the DAC criteria are ripe for revision along the lines you’ve indicated. I know that I will be referencing your analysis and ideas in my own work and writings going forward. Kudos and cheers.

  3. Dear Zenda,
    Many thanks for the brilliant piece of work. I’ve followed the discussion closely. I am particularly interested in putting to use the non-negotiable criteria to evaluate some of our projects in Nigeria alongside the current DAC criteria.

    Will definitely revert with feedback.

    Thanks once again for your time.


    1. Thank you Yinka. I am pleased to know that you are thinking of applying additional criteria. Some of these might not be easy to apply, but we need to experiment to see what can take us forward. Please stay in touch so that we can learn from your experience.

  4. Last year at the annual conference of the American Evaluation Association, I was part of a panel presentation entitled “Beyond the DAC-gnificent evaluation criteria: From learning to action through methodological innovation.” Hence, it is inspiring to catch up on this thought-provoking and timely blog series. Kudos to Zenda for bringing together not only some well-articulated points of view, but also an online community of people and opinions to unpack them.

    In this concluding post, it is especially encouraging to read the attention to a complex systems perspective. Evaluators need to remain flexible and adaptable to the changing contexts we operate in (over time, place and people); this is walking the “complexity talk.” I recall encountering this early in my career as an evaluator, when researching evaluation criteria for the International Federation of Red Cross and Red Crescent Societies (IFRC). As referenced in an earlier blog, I encountered how ALNAP adapted the DAC criteria to include principles such as coverage and coherence in humanitarian response.

    As noted, there is much value to the DAC criteria, and I believe their intuitive configuration lends to their popularity not only for development evaluation, but other evaluands. Indeed, we should not “throw the baby out with the bath water.” However, as Zenda (and others) remind us, we should remain vigilant and not fall into using any set recipe of evaluation criteria mechanically.

    Evaluation criteria should be tools that help us assess interventions, rather than straitjackets that confine probing and questioning of their merit and worth. Like any tool, their utility depends not only on how they are wielded, but on selecting the right one for the job. With this in mind, I support consideration of a larger “toolbox”, with a wider menu of options to adapt and get the job done according to context and need.

    Once again, thanks to Zenda for taking us along on this “deeper dive” into a particularly timely and relevant topic.
