Given my 17 years of experience in evaluation, what are the top tips I would give to Young and Emerging Evaluators – the YEEs who will in future be responsible for commissioning and conducting evaluations? We need change in our evaluation systems, so here are the first five priorities among my “Top Ten Tips”.
I hope to have 20 experienced evaluators from around the world sharing their “TTTs” in the coming months. And also the not-so-experienced, given that we can all learn from one another.
Top Tip 1. Open your mind. Read. Evaluation specialists have to understand matters beyond the obvious. Make time to read, to broaden your horizons. It will help you to be innovative, to be at the forefront of new thinking, and to stay ahead. Read about the world, about the context in which you work and live, and about cutting-edge developments and trends in both development and evaluation that interest you.
For development and evaluation, read books, blogs and working papers; they help you to stay up to date with current developments. And do stay up to date while also striving to contribute new ideas to evaluation theory and practice.
To understand more about our world, read in-depth analyses across many different perspectives. If you are interested in news, study the work of investigative journalists instead. Avoid newspapers and social media news streams. They are either propaganda (also in those democracies where most outlets are owned by a few massive corporations), manipulated to reinforce our own narrow interests or beliefs, or too superficial to be useful. The truth is far more layered and nuanced. We have to continuously develop our ability to see and understand beyond the obvious.
Top Tip 2. Be mindful and explicit about what frames and shapes your evaluative judgments. Our judgments tend to be determined by
One, our own values, beliefs and worldviews about what is “good”, “success”, “progress”, “development” and so on;
Two, convention about what is “good”, “success”, etc. in a particular context or across contexts;
Three, the perspectives of those we consider and treat as “stakeholders” or “shareholders”, frequently privileging evaluation commissioners (because we have to); and/or
Four, external standards or performance demands such as those articulated in international conventions (for example the Paris Agreement or ILO Conventions), national development plans and global development indexes.
Yet we are seldom thoughtful and explicit about what underlies our judgments. We should be, or our judgments will mislead action.
This becomes even more essential when we deal with different value systems and cultures within or across societies, especially around sensitive issues such as human rights, power relations between groups, or notions of what is, and what makes for “success” and “development”. This has been well covered in the evaluation literature (see for example here, here and here) but has not been well applied and, astoundingly, is seldom a focus for evaluation commissioners. Being more aware of what shapes our evaluative judgments is also becoming increasingly important as new ways of conceptualising and doing development emerge in countries that have been developing successfully, yet do not adhere to dominant narratives and convention about political or development models, such as China, Singapore, Viet Nam and Rwanda.
One of the ways we can make the basis for our judgment clearer is through the use of rubrics, but we need to do more, and more deliberately.
Top Tip 3. Be open to what constitutes “credible evidence”. As a natural scientist who moved into evaluation, I was very surprised at the time by the paradigm wars that have troubled evaluation for so long in its apparent search for ‘credible evidence’. Fortunately we now generally recognise the merit of mixed methods designs. But between 2007 and 2009 I was also involved in the efforts of NONIE, AfrEA and others who were trying to minimise the damage done in the Global South through the vigorous propagation of the notion that there is a “hierarchy of evidence” – a notion that reared its head when a group of development economists very successfully sold randomised control trials (RCTs) as the “gold standard” of “rigorous evaluation” (ironic, given the history of the gold standard).
Many, many articles have been written and presentations made over the past decade about the inappropriateness of this type of thinking about evidence - usually by renowned evaluation specialists such as Michael Quinn Patton, Michael Scriven, Elliot Stern and Robert Picciotto, and later by equally renowned economists such as Lant Pritchett, Michael Woolcock and Nobel Prize winner Angus Deaton (here and here). The most significant recent development around this issue is the article a week ago in the Guardian endorsed by 15 well-known economists, including three Nobel Prize winners, expressing their grave concern about the privileging of this one type of methodology over others in the international aid system.
Much energy and resources have been wasted by parochial standpoints, yet good has also come out of these debates. Let us now agree that credible and useful evidence can be generated in many different ways depending on circumstance and capability, and on the quality standards in that field.
Top Tip 4. Focus a good part of your evaluative activities on “understanding”. Evaluation is not about “what works”. It is about trying to understand – to the extent possible within a systems view of development or humanitarian aid - what works, why, how, when and over what period, for whom, for what purpose, under what circumstances and value systems, to what scope, at what cost (tangible and intangible), under what trade-offs, and more. Then judgments about merit, worth and so on can be made, and useful learning can take place.
More than that: “success” as conventionally defined in a development context can be very misleading. It is essential that we are more mindful about what it actually is and could be in a specific context. It is not necessarily about whether (i) objectives have been met (the objectives could have been inappropriate in the first place); (ii) stakeholder expectations were met (they might not reflect a sustainable long-term vision of development); (iii) a work plan was complied with (compliance does not guarantee good outcomes); (iv) some (DAC) criteria received a satisfactory rating (the sum of the performance might not add up to “development”); (v) outcomes were positive (they might unknowingly be neutralised by negative outcomes, or disappear within a few months without further positive ripples); or (vi) a key indicator improved by a small percentage (which might not justify the effort or cost involved, and other interventions might be more effective).
The point is that understanding situations, systems and change (and its ramifications) - to the extent possible under uncertain and changing circumstances - has to be a focus if evaluation is to be truly useful. This is why I tend to focus a major part of every one of my evaluations on the following:
One, testing the main assumptions that underlie how and why designers thought change would happen in a certain way, compared to what appears to be taking place;
Two, working out the value proposition of a certain effort based on the perspectives of a variety of stake- or shareholders;
Three, identifying the influences that led or appear to be leading to success, slow progress or failure in some aspect, or in contributions to development; and
Four, consolidating the apparent success factors – the combination of factors that have proven to be necessary and/or sufficient for success - and their relationship with one another and with a particular context.
Get these aspects into TORs. They are meaningful and useful, even though we acknowledge that as we engage with “complexity” the use of such insights becomes somewhat more complicated (a topic for another post).
Top Tip 5. Be or become a systems thinker who can also deal with some complexity concepts. Develop your capacities in this regard and use them in your work – whether you are dealing at local level with a small community or are working in the context of large systems change. It is a fallacy that one can be a good evaluator without a systems view of life and without at least sometimes dealing with complexity. The interconnectedness of things leads to all sorts of interesting behaviours. This is a basic fact of life as we understand it at present – and well highlighted by the 2030 Agenda and its SDGs.
How then can we ignore this simple insight when we evaluate, or commission evaluations? How stupid! And we should treat this as foundational in evaluation courses. Yes, it demands more from us because we now have to get serious about context, co-evolution, negative impacts, trade-offs, synergies, accelerators (catalysts), leverage points, attractors, tipping points, adaptation, and more. We don’t yet quite know how to deal with all of this from an evaluation perspective, and the politics of it in the international development environment are still a major challenge. But sound evaluation practice that will serve the challenges and opportunities we face today has to be rooted in this way of working. Much has already been written for evaluation from this perspective. Get good sources. Read, read, read and apply, even if only basic ideas in small steps.
My next five priority messages for YEEs will follow in my next post. But what lessons would you like to share with our next generation of evaluation specialists?
Zenda Ofir is an independent South African evaluator at present based near Geneva. She works primarily in Africa and Asia, and advises organisations around the world. She is a former AfrEA President, IOCE and IDEAS Vice-President, AEA Board member, Honorary Professor at Stellenbosch University, Richard von Weizsäcker Fellow, and at present Interim Council Chair of the new International Evaluation Academy.