Evaluation is entering an exciting period – one that will challenge us all to think in new ways and to focus on new things. This blog post is the first in a series aimed at highlighting some of the most interesting considerations for evaluation in an era where the Sustainable Development Goals will be one of the main forces shaping “development” around the world.
In September 2015, all 193 Member States of the United Nations adopted the 2030 Agenda for Sustainable Development. This important global framework for action, with its set of 17 Sustainable Development Goals (SDGs), 169 associated targets and (for now) 231 indicators, replaces the eight Millennium Development Goals (MDGs) which, with their 21 targets and 60 indicators, have directed or contributed to development in many countries in the Global South since 2000.
The 2030 Agenda emphasises a “robust, voluntary, effective, participatory, transparent and integrated follow-up and review framework”. It explicitly places monitoring and evaluation at the centre of efforts to ensure development success – the first time the evaluation profession has had such a high profile in a global-level agreement.
In a forthcoming publication for the Independent Evaluation Office (IEO) of the United Nations Development Programme (UNDP), based on work I did for the NEC/IDEAS Conference held in Bangkok in September 2015, I discuss some of the main issues for consideration by those who focus on the monitoring and evaluation of the SDGs. These issues are based on seven key lessons from the MDG era, synthesised from an analysis of more than 30 papers – mostly literature syntheses, expert task team studies and evaluations. Several of these lessons reinforce the importance of the evolution from the MDGs to the SDGs.
The Millennium Development Goals
The MDGs are today considered part success, part failure. Most importantly, they created momentum for development, galvanising political leaders and civil society to tackle multiple dimensions of poverty at the same time. Key strengths were that they were quite focused and had fairly simple targets, with indicators that were relatively easy to measure. It was easy to get people behind them, and they were widely used to advocate for development and to mobilise enthusiasm and funding. Monitoring at national, regional and global levels, and the publication of international indices and “league tables”, created competition and incentives to do better and meet targets.
The MDGs also had severe weaknesses. They were donor-centric and technocratic, focused on quantity over quality, and had a simplistic development narrative that perpetuated sector-based silos, and promoted welfare and aid dependence over growth and self-reliance.
Perhaps of most concern, aid and national budget allocations were frequently skewed away from long-term priorities – such as strengthening national systems (e.g. public health systems) and fostering equitable growth – towards short-term, reductionist efforts, for example eradicating a specific disease or getting more children into school without considering the quality of their education.
The Sustainable Development Goals
Fifteen years later, the 2030 Agenda signals a different approach to development.
The SDGs are aspirational, designed to address many of the most obvious MDG weaknesses. They recognise the complexity of “development”. They reflect the need for a more holistic and context-sensitive approach and highlight the interconnectedness among countries, sectors and development interventions. They make “development” a priority for all countries, not only those in the Global South. They admit that resource flows other than aid matter, acknowledge that growth has to be context-sensitive and inclusive, with “no-one left behind”, and confirm the controversial principle of common but differentiated responsibilities.
They also de-emphasise the role of the United Nations and call for citizen-driven demand for public accountability. Most pertinent for the global evaluation community, they have a much stronger emphasis on follow-up and review processes that call for monitoring and evaluation at all levels.
Unsurprisingly, critics lined up before and after the adoption of the 2030 Agenda to point out its unrealistic ambition, its massive resource requirements, and its reliance on global economic growth and on models of development perceived as likely to fail in an era of massive income inequality and unjustifiable notions of endless material growth.
Seven lessons for the attention of the evaluation profession in the SDG era
The following are seven of the most important lessons derived from the analysis described above. Each captures a number of issues that need to be the concern of the evaluation community. These issues will be detailed in a series of posts on this blog over the next few weeks.
Lesson 1. The value proposition of evaluation is not clear enough. The value of evaluation cannot be assumed or taken for granted. It must be demonstrated consistently and continuously.
Lesson 2. Macro influences emphasise the complex (adaptive) systems nature of development. Failure to understand and take into account the complex systems nature of development, including and especially macro-level influences on national development trajectories, leads to ill-formed and inadequate strategies, decisions and evaluations.
Lesson 3. A key implication of “integrated” development has been ignored. For successful development at national, regional and global levels – i.e., where development trajectories remain positive in the long term – interconnected goals and targets have to be achieved in a certain order, and in synergy with one another.
Lesson 4. Hidden influences can be debilitating for development. There are many very important yet often hidden influences on, or within, the processes and relationships that drive, slow down or block development.
Lesson 5. Adaptive management remains rhetoric. Development interventions based on “learning by doing” (or iterative experimentation), adaptive management and contextualised solutions remain largely in the realm of rhetoric.
Lesson 6. Definitions, data and interpretation lack nuance. The definition of concepts, as well as data collection, analysis and synthesis for development and evaluation often lack sufficient depth and nuance. Simplistic, un-nuanced approaches are not benign; they can distort and do harm.
Lesson 7. The nature and quality of evaluative evidence remain a serious challenge to credibility and utility. The nature, quality and utility of evaluative evidence depend on many factors, including the contexts in which it is generated and used. The current state of the art around “evidence” in development and evaluation can be called into question, including with respect to league tables used in international development.
Throwing down the gauntlet
By highlighting these seven lessons in the paper prepared for UNDP I hope to “throw down the gauntlet” to our evaluation community. I hope we will, as a collective, take on with acumen and gusto these and other significant priorities that are bound to challenge our profession.
I will discuss each lesson in posts over the next few weeks, and compare them to the priorities and strategies in documents that are intended to help shape evaluation in future, such as the Fifth Wave and the Global Evaluation Agenda 2016–2020.

Zenda Ofir is an independent South African evaluator at present based near Geneva. She works primarily in Africa and Asia, and advises organisations around the world. She is a former AfrEA President, IOCE and IDEAS Vice-President, AEA Board member, Honorary Professor at Stellenbosch University, Richard von Weizsäcker Fellow, and at present Interim Council Chair of the new International Evaluation Academy.
This is thought-provoking, Zenda, thank you!
This one in particular made me sit up and take note: “Lesson 5. Adaptive management remains rhetoric. Development interventions based on ‘learning by doing’ (or iterative experimentation), adaptive management and contextualised solutions remain largely in the realm of rhetoric.”
Adaptive management is such a central principle in everything I’ve read on M&E amid complexity, that your point is worrying and intriguing. Very much looking forward to your blog on this topic.
Hi Cara, in my experience, the more pressurised managers and implementers are, the more “learning” gets neglected – and there is a lot of pressure around these days. M&E is also not yet delivering well on being useful throughout the management cycle. Much data collection for M&E is done for compliance only, and most worryingly, many evaluations are too poor to be valued or valuable. Importantly, our management courses in business schools and elsewhere do not have a strong focus on the value that M&E can add, while the way evaluations are commissioned often does not promote a learning approach. These are tough challenges that we as a profession should recognise and focus on addressing. We do not yet take them head-on, and this relates to the fact that we struggle to project and prove the value proposition of evaluation effectively. As I said, I hope to write more about this in a later post.