Zenda’s Top Ten Tips for YEEs – 1

Given my 17 years of experience in evaluation, what are the top tips I would give to Young and Emerging Evaluators – the YEEs who will in future be responsible for commissioning and conducting evaluations? We need change in our evaluation systems, so here are the first five priorities among my “Top Ten Tips”.

I hope to have 20 experienced evaluators from around the world sharing their “TTTs” in the coming months. And also the not-so-experienced, given that we can all learn from one another.

Top Tip 1. Open your mind. Read. Evaluation specialists have to understand matters beyond the obvious. Make time to read, to broaden your horizons. It will help you to be innovative, to lead in forefront thinking, and to stay ahead of the field. Read about the world, about the context in which you work and live, and about cutting-edge developments and trends in both development and evaluation that interest you.

For development and evaluation, read books, blogs and working papers; they help you to keep abreast of current developments. And do stay up to date while also striving to contribute new ideas to evaluation theory and practice.

To understand more about our world, read in-depth analyses across many different perspectives. If you are interested in news, rather study the work of investigative journalists. Avoid newspapers and social media news streams. They are either propaganda (also in those democracies where most outlets are owned by a few massive corporations), manipulated to reinforce our own narrow interests or beliefs, or too superficial to be useful. The truth is far more layered and nuanced. We have to continuously develop our ability to see and understand beyond the obvious.

Top Tip 2. Be mindful and explicit about what frames and shapes your evaluative judgments. Our judgments tend to be determined by:

One, our own values, beliefs and worldviews about what is “good”, “success”, “progress”, “development” and so on;

Two, convention about what is “good”, “success”, etc. in a particular context or across contexts;

Three, the perspectives of those we consider and treat as “stakeholders” or “shareholders”, frequently privileging evaluation commissioners (because we have to); and/or

Four, external standards or performance demands such as those articulated in international conventions (for example the Paris Agreement or ILO Conventions), national development plans and global development indexes.

Yet we are seldom thoughtful and explicit about what underlies our judgments. We should be, or our judgments will mislead action.

This becomes even more essential when we deal with different value systems and cultures within or across societies, especially around sensitive issues such as human rights, power relations between groups, or notions of what constitutes, and what makes for, “success” and “development”. This has been well covered in the evaluation literature (see for example here, here and here) but has not been well applied and, astoundingly, is seldom a focus for evaluation commissioners. Being more aware of what shapes our evaluative judgments is also becoming increasingly important as new ways of conceptualising and doing development emerge in countries that have been developing successfully, yet do not adhere to dominant narratives and conventions about political or development models, such as China, Singapore, Viet Nam and Rwanda.

One of the ways we can make the basis for our judgment clearer is through the use of rubrics, but we need to do more, and more deliberately.

Top Tip 3. Be open to what constitutes “credible evidence”. As a natural scientist who moved into evaluation, I was at the time very surprised by the paradigm wars that have troubled evaluation for so long in its apparent search for ‘credible evidence’. Fortunately we now generally recognise the merit of mixed-methods designs. But between 2007 and 2009 I was also involved in the efforts of NONIE, AfrEA and others who were trying to minimise the damage done in the Global South through the robust propagation of the notion that there was a “hierarchy of evidence” – a notion that reared its head when a group of development economists very successfully sold randomised control trials (RCTs) as the “gold standard” of “rigorous evaluation” (ironic, given the history of the gold standard).

Many, many articles have been written and presentations made over the past decade about the inappropriateness of this type of thinking about evidence – usually by renowned evaluation specialists such as Michael Quinn Patton, Michael Scriven, Elliot Stern and Robert Picciotto, and later by equally renowned economists such as Lant Pritchett, Michael Woolcock and Nobel Prize winner Angus Deaton (here and here). The most significant recent development around this issue is the article published a week ago in the Guardian, endorsed by 15 well-known economists, including three Nobel Prize winners, expressing their grave concern about the privileging of this one type of methodology over others in the international aid system.

Much energy and resources have been wasted by parochial standpoints, yet good has also come out of these debates. Let us now agree that credible and useful evidence can be generated in many different ways depending on circumstance and capability, and on the quality standards in that field.

Top Tip 4. Focus a good part of your evaluative activities on “understanding”. Evaluation is not about “what works”. It is about trying to understand – to the extent possible within a systems view of development or humanitarian aid – what works, why, how, when and over what period, for whom, for what purpose, under what circumstances and value systems, to what scope, at what cost (tangible and intangible), under what trade-offs, and more. Then judgments about merit, worth and so on can be made, and useful learning can take place.

More than that: “success” as conventionally defined in a development context can be very misleading. It is essential that we are more mindful about what it actually is and could be in a specific context. It is not necessarily about (i) meeting objectives (the objectives could have been inappropriate in the first place); or (ii) meeting stakeholder expectations (they might not have a sustainable long-term vision of development); or (iii) complying with a work plan (the work plan does not guarantee good outcomes); or (iv) achieving a satisfactory rating on some (DAC) criteria (the sum of the performance might not add up to “development”); or (v) securing positive outcomes (they might unknowingly be neutralised by negative outcomes, or disappear within a few months without further positive ripples); or (vi) a small percentage improvement in a key indicator (which might not justify the effort or cost involved, and there might be other more effective interventions).

The point is that understanding situations, systems and change (and its ramifications) – to the extent possible under uncertain and changing circumstances – has to be a focus if evaluation is to be truly useful. This is why I tend to focus a major part of every one of my evaluations on the following:

One, testing the main assumptions that underlie how and why designers thought change would happen in a certain way, compared to what appears to be taking place;

Two, working out the value proposition of a certain effort based on the perspectives of a variety of stake- or shareholders;

Three, identifying the influences that led or appear to be leading to success, slow progress or failure in some aspect, or in contributions to development; and

Four, consolidating the apparent success factors – the combination of factors that have proven to be necessary and/or sufficient for success – and their relationship with one another and with a particular context.

Get these aspects into your terms of reference (TORs). They are meaningful and useful, even though we acknowledge that as we engage with “complexity” the use of such insights becomes somewhat more complicated (a topic for another post).

Top Tip 5. Be or become a systems thinker who can also deal with some complexity concepts. Develop your capacities in this regard and use them in your work – whether you are dealing at local level with a small community or are working in the context of large systems change. It is a fallacy that one can be a good evaluator without a systems view of life and without dealing at least sometimes with complexity. The interconnectedness of things leads to all sorts of interesting behaviours. This is a basic fact of life as we understand it at present – and well highlighted by the 2030 Agenda and its SDGs.

How then can we ignore this simple insight when we evaluate, or commission evaluations? How stupid! And we should treat this as foundational in evaluation courses. Yes, it demands more from us because we now have to get serious about context, co-evolution, negative impacts, trade-offs, synergies, accelerators (catalysts), leverage points, attractors, tipping points, adaptation, and more. We don’t yet quite know how to deal with all of this from an evaluation perspective, and the politics of it in the international development environment are still a major challenge. But sound evaluation practice that will serve the challenges and opportunities we face today has to be rooted in this way of working. Much has already been written for evaluation from this perspective. Get good sources. Read, read, read and apply, even if only basic ideas in small steps.

My next five priority messages for YEEs will follow in my next post. But what lessons would you like to share with our next generation of evaluation specialists?


Zenda Ofir

Zenda Ofir is an independent South African evaluator. Based near Geneva, she works across Africa and Asia. A former AfrEA President, IOCE Vice-President and AEA Board member, she is at present IDEAS Vice-President, Lead Steward of the SDG Transformations Forum A&E Working Group and Honorary Professor at Stellenbosch University.

9 Comments

  1. Thanks Zenda – as always, it is great to hear your insights. My own comment is related to projects or small efforts that don’t always have a budget for an evaluation. It’s still worthwhile to do some reflection, even a discussion circle or short interviews, with the funders, organizers, beneficiaries and consultants on what worked well, what could have gone better, suggested adaptations and advice for the next one. The practice of reflection is a window into the more formal evaluative practice and useful in the day-to-day of the work on the ground.

    • Michelle, you point out that small actions count, and we are all in positions to help cultivate evaluative mindsets that can bring multiple benefits even to the smallest initiative. A few simple questions asked and thought through regularly can make a big difference in our societies. Technology is forcing everyone in the opposite direction though. People do not take time to think and reflect. A friend of mine wrote a very interesting book called Post-Zombie-ism ….

  2. Dear Zenda,
    this is great. Let me add one thing for YEEs: most think that a postgraduate or Master’s degree in evaluation can fill the knowledge gap. What is important for future evaluations is:
    1. Move away from the traditional approach to new approaches through the SDGs and beyond the DAC criteria.
    2. Primacy and inclusiveness of beneficiaries at all times.
    3. Gain ongoing experience of the subjects at a participatory or any other level, overall rather than cluster-wise or by qualification.
    4. Evaluation is not done on a cluster or technical basis; if you need an expert you can get advice from a technical specialist.
    5. YEEs should understand the difference between research and evaluation: research seeks to generate new knowledge, while evaluation generates information for decision-making.

    • Good observations Isha. And we all need to get much better at explaining and demonstrating the difference between research and evaluation, and between (social) audit and evaluation.

  3. Another rich and valuable post, Zenda! You taught me some years ago on these points, and inspired me to read, focus and think….

    So building on your earlier mentoring work and this current post, here are some of the key readings that helped me and many others to expand awareness and get a grasp of the complexities and richness of the field of evaluation:

    Evaluation Roots: A Wider Perspective of Theorists’ Views and Influences
    https://uk.sagepub.com/en-gb/eur/evaluation-roots/book235731

    Evaluation in Action: Interviews With Expert Evaluators
    https://uk.sagepub.com/en-gb/eur/evaluation-in-action/book229239

    What counts as credible evidence in applied research and evaluation practice?
    http://methods.sagepub.com/book/what-counts-as-credible-evidence-in-applied-research-and-evaluation-practice

    Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance, and Change
    https://www.amazon.com/Evaluation-Organizations-Systematic-Enhancing-Performance/dp/0465018661

    Foundations of Program Evaluation: Theories of Practice
    https://uk.sagepub.com/en-gb/eur/foundations-of-program-evaluation/book3133
    (this last one is a dragon to read, but it is rich and rigorous)

  4. Very good resources Marco – great that you have read these books. Evaluation Roots is a very important one, especially since many of our postgraduate courses in the South do not attend to the theories that are the foundations of evaluation as currently conceptualised. I hope YEEs will take note. There are now also increasingly good books on systems and complexity in evaluation, which is the future, yet people find it difficult to engage with in practice.

  5. Fred, you are absolutely right of course. The paper you reference was a very valuable contribution to the discussion, and also stimulated further interest in initiatives such as Made in Africa evaluation and the emerging South to South Evaluation (S2SE) that I write about in other posts on this blog. Thanks for the reminder.
