These days we talk a lot about “sustainable development”, and about “evaluating impact”. But I continue to maintain that it is useless to evaluate the impacts of an intervention without considering whether those impacts have a good chance of being sustained.
We need to take very seriously not only evaluating whether impacts (which can include ideas, models, outcomes, etc.) have been sustained, but also how to evaluate whether interventions have been designed and implemented in a manner that enhances the chance that their impacts will be sustained.
Haiti after Hurricane Matthew
Every development evaluator who has worked on the ground knows that this is a critically neglected aspect of evaluation. We can all tell many stories about the lack of interest, or the state of disrepair, or of despair among people once projects have been terminated, incentives have come to an end, the so-called ‘experts’ have left, or maintenance has proven to be impossible.
I was starkly reminded of this last week when listening to the interior minister of Haiti, who had been overseeing the recovery operation after Hurricane Matthew: “Maybe we will be a prosperous country one day, and we can use tents to send our children to summer camp. … But we will never be a nation of tent cities again.” His response was emotional and unrealistic, the result of years of dependence on hastily crafted, unsustainable, uncoordinated, wasteful relief efforts that considered neither short-term impacts nor long-term solutions.
The interim president of Haiti echoed: “…What is done this time must be sustainable … if not, Haitians will be left once more with empty water bottles … slums in lieu of repopulated neighbourhoods … food donations in place of better farming.”
Superficial engagement with “sustainability” is not enough
Of course, sustainability is one of the influential OECD DAC evaluation criteria and is therefore widely used in terms of reference, but most of the reports I have read barely engage with the concept. At community level, “participatory methods” to establish “ownership” among intended beneficiaries are used as a panacea. For government-level interventions, we look for signs that programs continue, receive funding, or are absorbed into policies.
All of these are fine, but far from thoughtful enough. And few funders of development track their interventions, or go back after some years to see whether the impacts are sustained over the longer term; where they do, the methods they use are unlikely to tell the full story. See for example the BetterEvaluation blog post written by Jindra Cekan and Laurie Zivetz of ValuingVoices, who are staunch advocates of the concept of sustained and emerging impact evaluation (SEIE).
Key points for a call to action
Over the past year I have repeatedly made the following points in meetings, most recently at the European Evaluation Society (EES) Conference held on 28-30 September 2016 in Maastricht, where I was part of a panel on the topic with Sanjeev Sridharan, Jindra Cekan and Ian Davies. I repeat the points here, and will address them in follow-up posts over the coming days and weeks.
First, this is not only about sustaining benefits and impacts of an intervention. It should be about how to sustain development impact at national or regional level.
Second, if we want to contribute to sustaining impacts, and to positive development trajectories at national level, we need to rethink what we evaluate, and adjust our evaluation questions and criteria accordingly (including the OECD DAC criteria).
Third, in addition to adaptive management and learning from ex-post evaluations, we need to include in our evaluations the extent to which interventions were designed and implemented for sustaining impact.
Fourth, when evaluating, we need to use systems and complexity thinking, coupled with our experience and understanding derived from adaptive management, to help us understand how to identify and assess realised and emergent benefits, outcomes and impacts, and the transformation of one into the other.
Systems and complexity thinking can help us evaluate for sustained impact
i. We can, to a certain extent, identify preconditions for change and success.
ii. We can work with concepts such as transformative (versus incremental) change, tipping points and reinforcing feedback loops.
iii. Thanks to a very good paper facilitated by ICSU, the International Council for Science (which I had the pleasure of seeing in draft form), we can work more systematically with the interactions between the SDGs, and between interventions.
iv. We can study synergistic effects within an intervention, and between different interventions.
v. We can study societal patterns that result from the co-evolution of context and culture.
Fifth, we need to include the following elements in our evaluation designs to ensure that we sufficiently attend to sustainability.
Evaluative questions can help determine whether the intervention design and implementation have been sufficiently cognizant of the need to sustain impact
i. Is there an indication that the “energy of society” has been unleashed, and thus the potential for more positive outcomes and impacts over time (than even the change logic or theory of change might have predicted)?
ii. Have alignment, coherence and synergies between goals (and/or interventions) been sufficiently considered?
iii. Have the intervention(s) been designed and implemented for synergistic effects?
iv. Have negative or neutralizing influences or outcomes (‘side effects’) been accounted for?
v. Have preconditions for sustaining impact within this type of context been appropriately considered, based on existing knowledge and experience?
vi. Have important societal patterns that may (i) prevent or (ii) predispose the society towards sustainable change been sufficiently considered?
vii. Is the intervention being adaptively managed, with sufficient emphasis on continuous learning, improvement and adjustment?
Of course, these questions reflect a certain ideology about the design and implementation of development interventions, and about sustaining impact. Some might be too difficult for evaluators to address at present. I will write more about this in follow-up posts.
Zenda Ofir is an independent South African evaluator at present based near Geneva. She works primarily in Africa and Asia, and advises organisations around the world. She is a former AfrEA President, IOCE and IDEAS Vice-President, AEA Board member, Honorary Professor at Stellenbosch University, Richard von Weizsäcker Fellow, and at present Interim Council Chair of the new International Evaluation Academy.