I first met Jo when she was still at 3ie. We had an interesting ‘difference of opinion’ exchange over lunch! Even then, she impressed me with her sharp mind and passion for good evaluation. Now that she is Head of the Independent Evaluation Unit at the Green Climate Fund, our paths cross frequently, and it is always a delight. Her joie de vivre is contagious, chatting with her at events is always fun, and her commitment to advancing evaluation in a way that serves her important organisation (and the field of evaluation in general) is exemplary. No doubt she and her team are set to make significant contributions to our practice.
I head the Independent Evaluation Unit (IEU) of the Green Climate Fund, an office that was founded only two years ago. Since then, the IEU has grown from a one-person operation into an organisation with many young, passionate professionals who take on tasks vital to our work. Many of them have asked me what new evaluators should think about when starting their careers. Interestingly, I am also often asked this question when lecturing at the School of International and Public Affairs at Columbia University, where I discuss impact evaluation in practice.
There are three areas where young evaluators can contribute deeply to the field and drive meaningful change:
Top Tip 1. It’s the times, stupid!
What does that mean? We live in a fast-paced world with ready access to technology and data. For evaluators, long gone are the days when we could take a year to complete an evaluation. We have to do evaluations quickly so that they are useful. This does NOT mean sacrificing rigorous methods in the drive for speed. Evaluation offices and evaluators need to build data and information systems that are ready to support quick, relevant and useful evaluations. This may seem a significant undertaking, but it is not impossible: a highly policy-relevant impact evaluation examining the effectiveness of treated bed nets was done in Zambia in six months, and the IEU did two thematic evaluations in five months, both of which were used by the GCF Board to inform its decisions going forward.
Top Tip 2. Think data and measurement.
For too long, evaluations have made little use of population-level, disaggregated or GIS data. Most evaluators think of data as ‘conversations with key informants’. These are, of course, critical. But in most evaluations, document review and interviews are used to the exclusion of systematic quantitative data sets. Both sources of data are important. Qualitative data adds to our understanding of the nuances of questions and contexts, as well as the complexities of change pathways. Quantitative data, on the other hand, lets you speak about the population at a portfolio level through indicators such as medians and modes. While such indicators can be coarse, they can help craft an overall message.
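As a minimal sketch of what such portfolio-level indicators look like in practice (the numbers below are made up purely for illustration), the median and mode of a set of project grant sizes can be computed in a few lines:

```python
from statistics import median, mode

# Hypothetical portfolio data (illustrative only): approved grant sizes
# in USD millions for a set of climate projects.
grant_sizes = [12.5, 8.0, 8.0, 45.0, 8.0, 22.0, 17.5, 8.0, 30.0]

# Portfolio-level summary indicators. The median is robust to the few
# very large projects that would pull a mean upward; the mode shows the
# most common grant size in the portfolio.
print("Median grant size:", median(grant_sizes))       # 12.5
print("Most common grant size:", mode(grant_sizes))    # 8.0
```

Coarse as they are, even two numbers like these can anchor an overall message about a portfolio, for example that most projects are small while a handful of large ones dominate total commitments.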
Many evaluations have started to use median and mode indicators. See, for example, the Global Environment Facility (GEF) and also this collection of studies on Reducing Emissions from Deforestation and forest Degradation (REDD+). Indeed, several studies in this collection used disaggregated data to understand causal attribution and the effect of forestry programs on poverty and deforestation.
While thinking about measurement, it is important to consider whether results hold within our context (internal validity), whether they are useful outside of it (external validity), and whether they are replicable. For further context, the New Yorker published a great piece that covers the topic in detail. It is also important to think about the cost-effectiveness of interventions and programmes; comparing different delivery modalities can be very useful for policy.
Top Tip 3. Think biases, behavioural insights and implementation research.
One of the key insights from integrating psychology into economics has been understanding how cognitive biases may or may not play a part in how we interpret evidence (see this World Bank paper on biases amongst policy professionals), including in climate change, as this BBC article shows. I also gave a keynote on how we often fail to systematically address ‘last mile’ problems in our programmes, and commented on it in other fora a while ago.
It is also important for us to acknowledge ‘path dependence’, and implementation research is critical to understanding it better. Process tracing has been used to understand what works in a variety of fields. While using these methods, it is also important to triangulate: too often we predicate our findings on an anecdote or a single trend in the data. It is therefore critical to take a deep dive into findings to explain the potential pathways of change. There is a growing body of literature and tools in this area that young (and old) evaluators can benefit from.
It’s a brave new world out there!