Zenda’s Top Ten Tips for YEEs – 2

**Ten-minute read**

While the five ‘Top Ten Tips’ in my previous post for Young and Emerging Evaluators (YEEs) related more to overall approaches to evaluation practice, the next five are intended to zoom in on a few technical matters that, as I advise people and organisations around the world, I see repeatedly ignored or inappropriately treated – even in some of the most thoughtfully designed evaluations. They are mostly relevant for the evaluation of projects and programmes – the starting point for most new evaluators.

These days there is an unfortunate (yet understandable, in an era of sound bites and tsunamis of information) tendency among evaluation commissioners to demand shorter and shorter reports. Around 20-50 pages seems to be the norm, even for complicated cross-boundary evaluations that have to respond to multiple evaluation questions (the record number of evaluation questions my team was presented with in a TOR was 68!). This means that we often cannot do justice to (i) displaying the evidence and reasoning underlying our findings, and (ii) providing sufficient information about methodology for readers to assess the quality of the report – unless such material is placed in an annex, where it immediately disappears from sight, never to be considered by anyone.

At present evaluation practice appears to be obsessed with getting evaluations used, but if the quality is poor in the first place, promoting their use is destructive. The trend towards shorter reports can be highly detrimental to evaluation quality if we do not manage it thoughtfully, with some standard requirements that address the resulting weaknesses.

This is of course an issue for evaluation commissioners, who carry great responsibility for either strengthening or weakening our craft. But as a YEE you can also do a lot to enhance the credibility of our evaluation practice. So please consider addressing these simple issues noted below in your work – or let me know if you do not agree with them.

Also make an effort to engage with evaluation guidelines or standards that can help ensure good practice, and use websites such as the excellent BetterEvaluation, initiated years ago through a collaboration between renowned evaluation specialists Patricia Rogers and Nancy MacPherson. You will not find a better resource on how to approach evaluation and engage in practice with the issues that matter.

Top Tip 6. Make sure your findings – or at the very least your conclusions – are actually evaluative, and use rubrics to help you do that. We are not (only) researchers. We are evaluators who use research information and research methods in our practice. This means we have to have the evidence, means and expertise – beyond technical expertise, also excellent analytical skills, the ability to integrate, empathy with people, nature and the planet, political savvy and good intuition or “gut feeling” – to make sound judgments that are constructive and immediately useful for multiple purposes. We have to go beyond describing what we have found, why and how, and its meaning within a certain hypothesis – which is what researchers do – to ensuring that we give, or help others to get, a clear indication of the worth, significance or merit of the issue addressed in each finding or set of findings, or at least in the conclusions. Such assessments have to be immediately useful for learning and action among the intended audiences of the evaluation. This is why values are so important in evaluation, as I also noted in my first post in this series.

So, finding that something has improved by, say, 5 percent is essentially meaningless from an evaluative point of view. How ‘good’ or ‘successful’ is this in that specific context, given the purpose of the evaluation as reflected through the evaluation questions and criteria; the tangible and intangible costs and the trade-offs that had to be made; and/or the contribution to ‘desired changes’, ‘development’ or ‘humanitarian support’? Is it justified to feel good about this result? If so, why; if not, why not?

Or consider finding that many projects or partners have contributed to a specific result. How ‘good’ is that, and on what basis – with what set of values or principles, given the context(s) in which you work – do you determine this?

It also does not really help much to say something showed “significant improvement”. What does this mean? So make frequent use of evaluation rubrics to make clear what you mean by your assessment. See also materials on rubrics here (a more extensive description), and a set of illustrative evaluation rubrics that Thomas Schwandt and I developed some years ago for IDRC’s evaluation of research quality, rubrics that are now in quite wide use (and were recently addressed in Nature).
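To make the idea concrete, here is a minimal sketch in Python of how a rubric turns a bare measurement – such as the “5 percent improvement” above – into an explicit evaluative judgment. The performance levels, thresholds and descriptors below are entirely hypothetical; in real evaluations they are negotiated with stakeholders and grounded in the context, the costs involved and the evaluation questions.

```python
# A minimal, hypothetical rubric: levels, thresholds and descriptors are illustrative only.
RUBRIC = [
    # (minimum % improvement, evaluative rating, descriptor)
    (15, "Excellent", "Far exceeds what comparable interventions achieve in this context"),
    (8,  "Good",      "Clearly worthwhile given the resources invested and trade-offs made"),
    (3,  "Adequate",  "Some merit, but modest relative to cost and stakeholder expectations"),
    (0,  "Poor",      "Little or no meaningful change for intended beneficiaries"),
]

def judge(improvement_pct: float) -> tuple[str, str]:
    """Translate a bare measurement into an explicit evaluative rating and descriptor."""
    for threshold, rating, descriptor in RUBRIC:
        if improvement_pct >= threshold:
            return rating, descriptor
    return RUBRIC[-1][1], RUBRIC[-1][2]

rating, descriptor = judge(5.0)
print(f"A 5% improvement is judged '{rating}': {descriptor}")
# With these (hypothetical) thresholds, 5% is merely 'Adequate' - not automatically 'significant'.
```

The point is not the code but the explicitness: the thresholds and descriptors force you to state, and to defend, what “good” means in that specific context.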

Top Tip 7. Focus realistically and explicitly on limiting bias in your sampling strategy, within the constraints posed by the real world. Things in evaluation usually do not work in practice as (well as) you are taught during your studies. Reality intervenes! This becomes very clear when you try to limit bias – and there are many types of bias (see also here, here and here, with the latter interesting from a teaching perspective), including a depressing number of cognitive biases. Among other things, don’t let anyone tell you that surveys, or designs primarily based on lengthy closed-question surveys, are “objective”, “rigorous” and so on (or more so than other methods). Below the surface often lie important problems, with the value systems and assumptions of the person(s) who set the questions often the most overlooked source of bias.

We can never eliminate bias. What we do is an integral, indivisible part of the world, and people or societies are not laboratories (I am a chemistry PhD and converted social scientist, so I know the difference). Here I only want to alert you to the simple and rather obvious matter of doing your best to limit bias when designing purposive sampling strategies, including through systematic triangulation.

So – in a world where qualitative information is becoming increasingly essential to good ‘understanding-orientated’ evaluation, I never want to see an evaluation report again that just lists the “people interviewed” (I have been guilty of that myself, so I am not pointing fingers!). Give a stakeholder map, describe your strategy and categorise those interviewed so that we can see exactly who you reached and why – and why not others. This is essential. Evaluation or review teams (for scientific research) often fail to use a well-designed stakeholder map to devise their sampling strategy, and interview those most readily accessible in the week they spend in the field. This is one of the most important reasons for over-claiming credit for results or contributions to results.
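As a minimal sketch of what this can look like in practice (the stakeholder categories and numbers below are purely hypothetical), even a simple coverage check against your stakeholder map already shows who you did – and did not – reach, and by how much:

```python
# Hypothetical stakeholder map: categories and planned interview numbers are illustrative.
stakeholder_map = {
    "intended beneficiaries (women)": 8,
    "intended beneficiaries (men)": 8,
    "local government officials": 4,
    "implementing partner staff": 4,
    "community leaders": 3,
    "non-participants in the area": 5,  # often forgotten, yet important for comparison
}

# Who was actually interviewed, by category (again, hypothetical numbers).
interviewed = {
    "intended beneficiaries (women)": 2,
    "implementing partner staff": 6,
    "local government officials": 3,
}

print(f"{'Stakeholder category':<35}{'planned':>8}{'reached':>9}")
for category, planned in stakeholder_map.items():
    reached = interviewed.get(category, 0)
    flag = "  <- gap to explain in the report" if reached < planned else ""
    print(f"{category:<35}{planned:>8}{reached:>9}{flag}")
```

Reporting reached-versus-planned numbers per category, and explaining the gaps, is what turns a bare list of “people interviewed” into evidence about possible selection bias.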

Also do not forget to turn to ‘big data’ if possible, to see if that can assist you in getting to more representative results, BUT take care to have a full understanding of its limitations!

And in tandem, give me a good description of the extent to which your systematic triangulation strategy could be executed. How well could you verify the initial evidence underlying critical findings? How did you approach it, beyond the usual rhetoric of “we triangulated between methods, sources and/or analysts”? Clever triangulation is exceedingly important; build the skill to do it well within practical constraints, and acknowledge it explicitly when you could not do it sufficiently well – often the case given real-world constraints.
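One simple way to move beyond that rhetoric is to keep – and report – a triangulation record that shows which independent sources corroborate each critical finding. A minimal sketch follows; the findings, sources and the two-source threshold are hypothetical illustrations, not a universal rule:

```python
# Hypothetical triangulation record: each critical finding is mapped to the
# independent sources of evidence that support it.
triangulation = {
    "Farmers' incomes rose in target districts": {
        "household survey", "market price records", "key-informant interviews"},
    "Training improved extension workers' practice": {
        "implementing partner reports"},
    "Women's participation in cooperatives increased": {
        "cooperative registers", "focus group discussions"},
}

MIN_SOURCES = 2  # illustrative threshold only

for finding, sources in triangulation.items():
    status = "corroborated" if len(sources) >= MIN_SOURCES else "WEAKLY SUPPORTED - hedge or verify"
    print(f"- {finding}: {len(sources)} source(s) [{status}]")
```

Findings resting on a single source (here, the training finding) should be flagged as weakly supported, and your report should say so rather than gloss over it.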

The important point is that if we don’t do the above, we easily fall into the trap of overclaiming results.

Top Tip 8. Never, ever make any claim about successful “outcomes” or “impact” without fully and systematically considering (i) negative outcomes/impacts, and (ii) issues of sustainability. It is disingenuous to look only for the intended or expected changes in society resulting from a certain intervention. Given that we have to have a systems view of the world, we have to recognise that unintended (sometimes intended) negative consequences of a particular action, or negative outcomes or impacts resulting from an intervention, can severely affect and even neutralise (to use a chemistry term) positive results. Renowned evaluation specialists like Jonny Morell and Michael Bamberger and colleagues explain this very well. I consider evaluation commissioners who do not provide for, or actually demand, sufficient attention to this aspect as rather poor at their charge, which is to encourage and ensure useful and credible evaluations that truly support development or humanitarian work (and even I have not always addressed this aspect with sufficient attention and rigour).

Furthermore, those of us who have spent years evaluating ‘projects’ or ‘programmes’ with good intentions cannot help but be very cynical about how often there is no trace of supposed positive results – even of positive ripple effects – a few months or years after funding has ended. The development world is littered with such efforts. Inform yourself and help prevent this from happening by (i) including a focus on understanding what makes for sustained positive impact in a certain society, (ii) making sure you understand any negative consequences/outcomes/impacts as well as positive ones, and (iii) supporting efforts to do evaluations a while after an intervention has been concluded. For the latter, see this important focus on SEIE, which has been a particular passion of Jindra Cekan at Valuing Voices and a few others – but still far too few to make a significant difference to practice. And in spite of the DAC criterion of “Sustainability”, many still need to learn how to evaluate plans and implementation efforts effectively for sustainability, in particular from a complex systems perspective.

(By the way, the economics term “externalities” is often used in this context; for a systems-oriented evaluator this term is quite inappropriate.)

Top Tip 9. Also attend to what lies beyond preordained or obvious boundaries, focusing on coherence and synergy. One of the biggest mistakes we all make in practice is to consider a project or programme in isolation from what happens around it. This, in spite of the fact that the notion of Policy Coherence for Development (PCD) has for some time been seen as an important focus for evaluation. But many evaluators, even experienced ones, fail to pay sufficient attention to this, and in many cases the notion of “coherence” is also interpreted too narrowly. For me, what stands out is that we should interrogate, for example:

one, whether the policy environment is truly supportive of the intervention, or runs the risk that lack of alignment causes (or might cause) obstacles, bottlenecks or delays;

two, whether other ongoing interventions in the same field and geographic area provide the means or opportunity to strengthen (or lower) the impact of the intervention under evaluation – the so-called leveraging or synergistic effect (as a chemist I love this concept, which is also crucial to recognise when dealing with complex adaptive systems behaviours); and

three, whether the intervention is timely, in the sense that enough is in place around it at that time – or can co-evolve with it – to make it work: whether capacities, resources, incentives or motivations, community structures and so on.

Lots of effort goes to waste when insufficient attention is paid to these issues – a waste unfortunately perpetuated by lack of coordination between aid agencies, or government departments, or government initiatives and civil society, and so on. We all know of ludicrous examples where communities were encouraged to use or develop products without the accessible, viable and sustainable markets or other systems that they need to succeed (Do Bill Gates and chickens come to mind? If not, please read this brilliant post by Joseph Hanlon and Teresa Smart); where intellectual property and innovation policies literally oppose one another; or where different agencies all work in a region on, say, migration, without building on the efforts of the others to make the whole more than the sum of the parts (I had an excellent example of this in a region in Asia last year).

If we do not attend at least to some extent to such issues, we will not get the maximum out of opportunities presented by interventions – and this is an increasing problem when we consider the urgency with which change, and in particular transformational change, should happen.

Top Tip 10. Care. Very much. About the people, cultures, countries, ecosystems and planet in and among which you work. About the quality, credibility and legitimacy of your work. About whether you make a difference – and whether the difference is a good one. Evaluation is one of the few practices and professions explicitly aimed at doing good, at making the world a better place, and that can actually do so. See evaluation as something that can be very worthwhile and necessary in our world today – and work accordingly.

What do you think of these Top Tips of mine? What are yours? Let us know!


Zenda Ofir

Zenda Ofir is an independent South African evaluator. Based near Geneva, she works across Africa and Asia. A former AfrEA President, IOCE Vice-President and AEA Board member, she is at present IDEAS Vice-President, Lead Steward of the SDG Transformations Forum A&E Working Group and Honorary Professor at Stellenbosch University.
