It is a pleasure to have Ahmed Aboubacrine as a guest blogger on the interesting and pertinent issue of how to think about ‘failure’ from different perspectives. Our paths have crossed many times at evaluation conferences and I have always appreciated both his input and his commitment to high-quality evaluation. He is an experienced evaluator who has worked for, among others, CARE International and the Islamic Development Bank. Here he shares some useful lessons from his own experience. They also resonate well with the engaging book Evaluation Failures, edited by Kylie Hutchinson.
The experience of failure is a double-edged sword. On the one hand, one can hide it, blame it on others or simply ignore it. On the other hand, it can be transformed into a valuable learning opportunity, provided that it is studied, its underlying causes are identified and, more importantly, open discussions are held and corrective actions are taken.
We can have an enabling environment where leadership encourages and rewards learning from failure. Yet the challenge remains how to identify failure in a timely manner, or detect its early signs, especially in development contexts that combine complexity with a lack of solid evidence for an evaluation.
To address this challenge and prevent such situations, I want to share four lessons, drawn from my own experience as well as from the literature, that can be useful for development and evaluation professionals.
Lesson 1: All that glitters is not gold: When assessing development outcomes, failure can look like success, and cognitive bias is often the most difficult to avoid. A number of water wells built and used by communities may mislead evaluators into concluding that access to potable water has increased substantially and, hence, that the project has contributed to reducing water-borne diseases. The reality, however, may differ depending on the quality of the water and its possible health and social consequences.
Lesson 2: Proper triangulation limits stakeholders’ influence and their opportunity to hide failures: During an evaluation, the available data upon which the analysis is built may be misleading. A famous case is Abraham Wald's work on aircraft survivability during World War II. The study aimed at improving the protection of military aircraft based on the damage observed on planes returning from missions. Wald recognised that this sample excluded the planes that had been shot down, so instead of studying only the returning planes, he estimated the whole distribution of damage, enabling more exhaustive and accurate decision-making and leading to the counter-intuitive recommendation to reinforce the areas where returning planes showed little damage.
Similarly, when evaluators collect data, they are not always able to meet all the intended beneficiaries or observe the actual changes, due to seasonal factors such as migration, the rainy season or shocks. Data on failure are not always available on the spot. That is why evaluators have to dig deeper and find out about the farmers who migrated or the pregnant girls who are no longer attending school. The quest for truth should guide them to keep searching for any relevant data, information or missing stakeholders.
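To make this selection-bias point concrete, here is a minimal, purely illustrative Python sketch. The figures (a 50% true benefit rate, the migration probabilities) are assumptions for illustration only, not data from the post. It simulates an evaluation team that can only interview households still present on site: because households that did not benefit are more likely to have migrated, the naive estimate looks much better than reality, just as the returning planes in Wald's case gave an incomplete picture of damage.

```python
import random

random.seed(42)

# Illustrative population of 1,000 farming households.
population = []
for _ in range(1000):
    benefited = random.random() < 0.5          # true benefit rate: 50%
    # Households that did not benefit are far more likely to have migrated
    # by the time the evaluation team arrives on site.
    migrated = random.random() < (0.6 if not benefited else 0.1)
    population.append({"benefited": benefited, "migrated": migrated})

# Naive estimate: only the households the evaluators can actually meet.
reachable = [p for p in population if not p["migrated"]]
naive_rate = sum(p["benefited"] for p in reachable) / len(reachable)

# "Wald-style" estimate: account for the whole population, including the
# households that are no longer there to be interviewed.
true_rate = sum(p["benefited"] for p in population) / len(population)

print(f"Benefit rate among reachable households: {naive_rate:.0%}")  # inflated
print(f"Benefit rate in the whole population:    {true_rate:.0%}")   # ~50%
```

The gap between the two printed rates is the failure that stays hidden when evaluators rely only on the beneficiaries they can reach, which is why triangulation and the search for missing stakeholders matter.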
Lesson 3: Consider carefully the timing of the evaluation: When undertaking an evaluation, choosing the right time matters. One cannot assess outcomes and impact immediately after the delivery of project or program outputs. Moreover, ascertaining sustainability requires time for the project results to ripen and for the maturity of the effects and impact to be tested.
This is extremely important for infrastructure projects and programs, in which results may differ depending on the time of the evaluation. One of two misleading scenarios can be observed: either the evaluation comes too early and there has not yet been enough time to mobilise resources for project operation, or the project appears to be working simply because the installations are brand new and have not yet faced any maintenance problems. Therefore, evaluations that gather data points at multiple times during the lifetime of the project and afterwards may be more accurate than a grand one-time evaluation endeavor.
Lesson 4: Creating a learning culture goes beyond gathering knowledge, and requires balancing M&E efforts for analysis: Most organisations produce huge amounts of data and information, generating basic explicit knowledge that can inform an evaluation. But due to resource constraints (time, staff and budget), development evaluators often do not have the luxury of going deeper to uncover the more tacit knowledge, intelligence and wisdom needed to substantiate the available evidence. Hence, for better decision-making, resources should be allocated to create a learning culture by investing enough in the analysis steps of the process, rather than rushing and limiting the findings to description only.
Based on the above, evaluators should ideally always consider the following four dimensions to ensure that they are able to identify and learn from failures in order to enhance the quality of future interventions: (i) cognitive bias, (ii) selection/design bias, (iii) the timing of the evaluation, and (iv) stakeholders’ influence.
Failure is an opportunity for future success - provided that we identify it and understand it.
Henry Ford wrote that failure is the best opportunity to begin again more intelligently.
And Paolo Gallo said “We learn by making mistakes. If I'm wrong, I do not forget. I learn and I can explain it to others”.
Ahmed Ag Aboubacrine is a development specialist who specialises in design and MEAL systems in both the public and private sectors. He has worked at CARE International and the Islamic Development Bank (IsDB), where he played critical roles in evaluation capacity building, country and sectoral programs’ quality, and impact measurement. He has a Bachelor’s degree in Statistics and a Master’s degree in Decision Making. He regularly advises senior management and country teams on corporate strategy, quality assurance and performance management.
Excellent piece. Thank you for reinforcing my beliefs
Thanks Maiwada. I'm glad to see that we converge towards the same key learning.
This is fantastic and I have learned a lot about failure.
Thank you
Thanks Abubakar for your comment. I'm glad that you find this useful.
Thanks for sharing Ahmed. Very pertinent indeed.
Thanks Mustapha for your comment
Thanks a lot, especially on the four dimensions
Hi Pantaleon,
I'm glad you liked the four dimensions. Of these, the most critical and challenging one is stakeholders’ influence, which is not easy to detect. Cognitive bias and selection/design bias may be detected and addressed through a thorough review or peer review of the evaluation design (including the methodology and tools) and of the preliminary analysis in the draft report. To detect the extent of stakeholders’ influence, we need to exercise proper comparison and triangulation, in addition to investing in a solid understanding of the context of the intervention.
Great, Ahmed Aboubacrine, thanks.
Thanks Isha for your comment. I'm very humbled to see that this is useful.
This got me emotional because it resonates so much with what I just pointed out at the AEA conference in Minneapolis: ‘ALL THAT GLITTERS IS NOT GOLD’ is powerful. My argument was that the success of projects and programs is often measured by how many people have been reached and, in some instances, by how much money the government has saved through funding from public-private partnerships.
Hi Tiroyaone. I attended the AEA conference virtually this year and enjoyed it; it was so inspiring!
I'm glad to see that this article resonated well with your contribution there. Indeed, the reach of a given project/program is not a good enough measure of effectiveness. The latter is determined not only by reach but also by the quality of the services rendered by the intervention. And that is before we even question whether the intervention's results are sustainable.
In terms of savings from PPPs, governments ought to measure their success beyond money. The most effective ones focus on measuring and reporting the outcomes financed by the resources freed up through PPPs. For instance, a government may measure and communicate the results achieved in the education and employment sectors thanks to the resources gained from using a PPP in its energy sector.