Updating the DAC Evaluation Criteria, Part 3. What should determine our criteria?

It would be unwise to ‘fix’ the weaknesses in the composition and application of the current set of DAC criteria only in an ad hoc manner. We need to consider carefully on what conceptual basis we select and define our criteria for the evaluation of interventions, or portfolios of interventions such as projects, programmes, policies and events (starting from a system rather than an intervention requires a somewhat different approach).

Evaluation criteria can differ from organisation to organisation and from one stakeholder group to another, although the DAC criteria have inadvertently come close to becoming a ‘recipe’. In his thoughtful book, Evaluation Foundations Revisited, Thomas Schwandt points out that differences in evaluation criteria can be based on (i) stakeholders’ perspectives on the importance of a given social problem; (ii) stakeholders’ vested interests in solutions to these problems; (iii) the norms and values of the organisational and political system in which the intervention is developed, located and administered; (iv) cultural understandings; and (v) evaluators’ own values and perspectives on the criteria they believe are important.

I have an additional argument: for evaluation to address development effectively – and sustainable development even more so – certain imperatives have to be met in terms of the evaluation criteria that we use. In other words, if such a set of ‘imperative’ criteria is not used to guide an evaluation, we may in fact not be evaluating – at least not with sufficient credibility – contributions to sustainable development or to positive development trajectories at national or regional level.

So, from the perspective that our evaluation criteria have to reflect the very nature of development as well as stakeholder interests, I propose that the identification and prioritisation of our evaluation criteria should be based on the following three principles:

  1. Development as a complex adaptive system (CAS). Viewing development as a complex adaptive system demands that our criteria help us to reflect this when we evaluate in that context. This means there is a subset of evaluation criteria that is imperative to include in such an evaluation – otherwise an important part of what makes for development might be neglected. The following criteria can arguably be considered for such a critical subset.

     [Image: DAC Evaluation Criteria]

  2. Important organisational, societal and/or global norms and mandates. These are flexible and will change from time to time. At least some of the most important current global or societal norms or expectations will have to be included (for example, those resulting from the 2030 Agenda, the Paris Agreement or Africa’s Agenda 2063), depending of course on stakeholder mandates and priorities.

     [Image: DAC Evaluation Criteria]

  3. Additional stakeholder interests and concerns. These are flexible and may vary from evaluation to evaluation, and from context to context. None of these are imperatives that have to be addressed during an evaluation, but they are “good to haves” that will add value from an evaluation use perspective:

     [Image: DAC Evaluation Criteria]

Does this argument and the three principles with examples of relevant evaluation criteria resonate with you?

In an upcoming post I will detail the rationale and proposed criteria in each case.


Zenda Ofir

Zenda Ofir is a South African evaluation specialist currently based in Switzerland. She is a former AfrEA President, IOCE Vice-President and AEA Board member. She has worked in around 40 countries, primarily in Africa and Asia, and provides evaluation advice to many multilateral and international organisations.

4 Comments

  1. Hi Zenda,

    Thank you for starting this important conversation. It is overdue. For me the main issue in dealing with the DAC criteria is that they become standard and/or mechanical, and thus become a cookie cutter for those commissioning the evaluation. Similarly, trying to standardize the variables or factors that lead to success can reduce the effort required for understanding the context within which success occurs. How do we understand the effects of developmental stages on adaptive systems, cultural values, use of resources, etc.? Do we determine merit or worth within particular contexts, or is it determined by global standards and/or norms? Similarly, how do issues of capacity and capability affect our understanding of results, accomplishments and effectiveness? Are capacity and capability a component of sustainability, or are they a determinant of sustainability?

    Your starting of a dialogue is great – keep it up.

    Charles
    Universalia Management Group

  2. Charles, you raise critically important issues. Caroline Heider, Bob Picciotto and others have all stimulated our thinking – good to have your insights as well. My concern too is that any set of criteria accepted as “the” set prevents us from thinking deeply enough about what else we need to understand and assess. But a set of criteria also directs us to what is imperative to consider. So it is a good time to bring together and interrogate what we have learned over the last 15 years that can help us shape either an improved set of criteria, or a set of evaluation questions that are imperative for a development context, or both. What we cannot continue to tolerate is the omission of critical issues such as coherence, negative impacts and sustainability.

  3. Dear Zenda,

    Congratulations on starting this important conversation. One issue with some development projects is that while they are being implemented, some people who are not stakeholders work against their successful implementation, often for political or socio-cultural reasons. This is frequently the case in developing contexts.
    How can we consider this issue in evaluation?
    Best,
    SAKO G. Oumar

    • Oumar, you highlight why it is so important to ensure that we understand the reasons for success or failure in achieving desirable impacts. Irrespective of which evaluation criteria we use to guide the evaluation, we have to ask questions that in each case give us enough qualitative information to understand the “why” of the situation – otherwise we cannot make a sound judgment, nor learn from what has taken place. It is also one of the reasons why some of us have over the years agitated against impact evaluations that did not study or consider implementation failures or inappropriate designs as reasons for weak or no positive impact. And of course, we cannot improve ongoing implementation without such information.
