COVID-19, Part 4. Accelerating


Do you agree with the following five observations? If so, the time is right for evaluation professionals to align.

By 'align' I do not mean coordination or doing away with healthy competition. I refer to sharing some broad goals as evaluation professionals who wish to contribute to the world. Sharing some common priorities, methods and resources. Working towards impactful communication about these. All with the intent to foster evaluative practice that can contribute meaningfully to a sustainable world at this time.

It means actively developing and sharing state-of-the-art knowledge on priority topics across evaluation professionals’ silos, while still maintaining diversity in purpose and action.

It means educating current and new generations with all this in mind – using some common themes and insights while still tailoring curricula and actions for each context.

Alignment aims to accelerate the contributions of evaluative practice to the dramatic changes of which we are part, or that we need to help foster.

There is so much potential in this field of work, so much to be excited about, so much to do and learn.

Through COVID-19, many old ways of working, old norms and old systems around the world will be challenged. This is the time to work with new concepts, experts and power brokers in fields with which we are not familiar, and to work creatively to align our efforts.

Yet we find it difficult to work across the silos in our field. Or past the silos in our heads.

Is this the time to commit to using systems thinking and complexity science more frequently to promote systems change and transformation towards sustainable development? And to be more ‘together’ while doing so?

At the end of this post I explain what a few of us think can be done to contribute to such alignment.

My observations

One – Our field of work as evaluation professionals is exciting, valuable, continuously evolving. I am grateful every day for the privilege of working in a field that has to respect and integrate multiple worldviews, knowledge and value systems, contexts and experiences. Evaluative practice is uniquely positioned to inform the design and implementation of efforts to influence systems. Wonderful people from many parts of the world advance evaluation theory and practice in many different ways.

We have the very important benefit of doing work that is intended to be immediately useful and used. This is an advantage that research, our sister-field-of-work, does not have.

We also have the explicit task to help make the world a better place.

What privilege, what luxury!

And yet …

Two – We have to do much more to display the full value of evaluation for a new era. As the COVID-19 pandemic races around the world, evaluation struggles for space. Research studies and data overwhelm, yet evaluation professionals and studies are not present at influential tables. We have fumbled in proving the value of evaluation for the challenges facing humankind. Let us do our best to show the value of evaluation once the immediate heat of the pandemic is over and we move into sense-making in a changed world.

Three – We have to move faster towards a systems- and complexity-informed view of a world in need of transformation towards sustainable development. ‘Integration’ and ‘transformation’ are at the heart of the 2030 Agenda with its SDGs. Systems thinking and complexity science have infused evaluation practice related to the environment and climate change, but this is still rare in other fields of evaluation. The implications for evaluation of a systems view of life, and of a world in need of transformation, are seldom reflected in our traditional questions, criteria or methods, or in our curricula and short courses. We do not yet quite know how to move beyond simplistic views of ‘evidence’ and ‘success’ based on breaking things into small parts – for example, evaluating project or programme interventions without considering the systems that are being, or have to be, influenced through intervention or grant portfolios that work in synergy.

Four – We have to get better at demonstrating, sharing and learning across many types of silos. Many evaluation professionals consider systems- and complexity-informed work too challenging. They point out that knowledge and capacities are too limited. That political or institutional masters only want simple numbers. That incentives are too poor to try to change practice significantly. That good illustrative cases are not readily available. That the timelines and resources allocated by commissioners of evaluation are too limited to do justice to this type of work.

We also have methodological, thematic, sectoral and disciplinary silos to contend with in this field. We seldom share and digest well across our silos and echo-chambers: North-South cooperation (‘aid’), South-South cooperation, individual governments, indigenous societies, impact investors, social entrepreneurs, philanthropic foundations. The researcher/evaluator divide. The distance between evaluation specialists and those working in adjacent fields like data science, cognitive and neurosciences, behavioural science, management science, artificial intelligence, complexity science and future sciences. All these silos slow down our ability to move forward.

Five – We have to work to see and move beyond dominant narratives, frameworks and practices. Evaluation has been overwhelmed by stale narratives and stereotypes about the nature of development and prosperity. About how change happens. About ‘what works’ and ‘why’. About how to get out of poverty traps. About what criteria to use and what questions to ask. About evaluation itself. We cling to what we were exposed to in our first or dominant schools of thought. We use outdated models of how development, institutions, economies, rights, or political and governance systems work.

Humanity has gained much over the past century, but we have lost much too – including the holistic and systems view of life characteristic of old, mostly indigenous societies and their rich cultures. We need to become better at integrating cutting-edge technologies and other advances with age-old philosophies, wisdoms and lived experiences. This may well lead to solutions that will allow humanity to live with sophistication, in harmony with nature.

Initiatives encouraging alignment in practice today

The SDG Transformations Forum helps individuals, institutions and networks to see the systems in which they work, connects them across many disciplinary and sector boundaries, and takes or supports radical action aimed at making a difference. Six (still voluntary) ‘steward teams’ support priorities that experts identified as crucial to unblocking severe constraints to successful transformation – Capacities, Governance, Finance, Innovations, Narratives, and Evaluations. A cross-cutting Systems Mapping and Analysis group is also emerging. What I have found most helpful is that the Forum provides a unique opportunity for evaluation professionals to connect to diverse types of expertise relevant for systems change, transformation and sustainable development.

Blue Marble Evaluation (BME) provides a home for evaluation professionals interested in global systems change and transformation in the era of the Anthropocene. It is an integral part of the evaluation component of the SDG Transformations Forum, at the forefront of state-of-the-art in thinking and working within a highly interconnected world. Its relevance for the future is obvious.

With EvalPartners and IOCE at its pinnacle, the global evaluation architecture provides a great system to reach and mobilise evaluation professionals across many worldviews, sectors, stakeholder groupings, countries, methodological ideologies, and more. We may need an ‘EvalTransform’ as part of its architecture. But this will likely create a silo; it might be better to ensure that a focus on complex systems and transformation cuts across all IOCE and EvalPartners’ components’ activities. This architecture also encompasses important education efforts, whether through context-tailored postgraduate degrees or short courses, or through international education initiatives such as IPDET, CLEAR, UNITAR, DETPA and BetterEvaluation.

Other coalitions and networks engaging with evaluative practices of different kinds also provide opportunities for some form of alignment. Networks in the impact investing, philanthropic and academic sectors are doing interesting work in different types of evaluative practice that largely fall outside the architecture of EvalPartners and IOCE. There are solid bridges that can enable us all to value and learn from one another, though often very significant differences in objectives, incentives, approaches and language have to be overcome.

We do not need a heavy structure or another big global effort. We need agile, context-specific efforts to make use of what exists, and that enable us to cross boundaries that normally constrain or blind us. Generating and effectively sharing state-of-the-art knowledge from all over the world to advance practice; including updated priorities and concepts in curricula; and applying such new insights in different contexts to feed back into theory and practice, are some of the main threads of action that can link us all.

This focus – embedding systems thinking and complexity concepts in evaluative practices to help accelerate systems change and transformation towards sustainable development – is an area of work we would like to see flourish as part of a new Global Evaluation Agenda and the Decade of Evaluation for Action.

One example as food for thought

A Transforming Evaluations Hub or ‘Accelerator Lab’ (a label also used by UNDP) is under development in Geneva. Recently initiated by a few of us to complement other efforts initiated (but not managed or coordinated) by the SDG Transformations Forum, it is one in a network of initiatives, all aiming at:

One, strengthening practice around evaluation in support of systems change and transformation towards sustainable development;

Two, braving the challenge of applying systems thinking and complexity science to evaluation practice in ways that are neither simplistic nor over-complicated;

Three, respectfully using knowledge, philosophies and knowledge systems from diverse experiences and parts of the world; and

Four, focusing on the interest in evaluative practice among some of the national and international bodies and initiatives located in Geneva.

As appropriate, it will engage with the organisations noted above. The initiative intends to focus on four interconnected priorities, starting small but with the potential to expand:

A catalyst for systems of transformation.  Connecting change-makers across disciplines, sectors and other boundaries in (sustainable) development, and supporting them in their systems-informed evaluative practice.

An integrator of state-of-the-art.  Rapid reviews and syntheses that connect and make accessible for practical use systems- and complexity-informed insights from theory as well as documented practices and case studies.

A focus on innovation in practice.  Encouraging and supporting organisations to experiment with the state-of-the-art in this field of work in their intervention portfolios and evaluation initiatives, which can in turn be used to learn and advance the field.

A place to learn.  Interactive knowledge-sharing and opportunities for formal and informal, possibly on-line education in state-of-the-art knowledge in this area.

We already have much to draw from

While it will be very important to continue to learn from other disciplines and fields of work, there is already much to use in systems- and complexity-informed state-of-the-art.

Systems Innovation (SI) is a small organisation that has produced a large volume of excellent, widely read and viewed open-source e-books and videos on many theoretical aspects of systems thinking and complexity. More than any other I have come across, they have been able to translate complicated concepts with sophistication into language and ideas that are accessible, and in formats that work. We need to tailor some of these SI materials into similarly ‘easy to digest’ resources, with examples of how they are and can be applied in practice – for different types of evaluation professionals, and for evaluative practices that can support efforts at transformation towards sustainable development.

Blue Marble Evaluation (BME) principles provide a sophisticated overall framing. We can use what we know from Developmental Evaluation, Realist Evaluation, Principles-Focused Evaluation or Dynamic Evaluation. Transformative Evaluation is also a relevant framework, albeit one focused on social justice and on changing societal systems rather than on applying complex systems concepts as part of the approach.

Examples of relevant recent guidance and frameworks include UN Women’s ISE4GEMS, GIZ’s Getting Ready for Transformative Projects, SDG Evaluation by EvalPartners/IOCE, CECAN/DEFRA’s Framework for Complexity Evaluation, and CIF’s Transformational Change Learning Partnership initiative, as well as many more contributions – too many to mention – by evaluation offices and professionals working in environment and climate change, resilience, and related fields. And let us not forget IDEAS’s Prague Declaration on Evaluation for Transformational Change of October 2019.

I conclude with this inspiring post by a fellow South African, Sonja Blignaut, who works with Dave Snowden at Cognitive Edge.

Note: I made a few minor edits on 21 April at 7am CET, including removing reference to a meeting on 22 April which was too specific for the intent of this post.
