As one of three facilitators at a recent 3-day workshop organised in Connecticut by Rockefeller Philanthropy Advisors (RPA) on ‘Assessing Systems Change’, I shared the following tips to inspire evaluation professionals to start integrating (complex) systems thinking into their practice. Many already do this without even realising that they are dealing with such concepts.
One: Be informed, and innovate. Strive to understand and use the basic concepts of complexity. Read about the state of the art. Make it a professional aim. But don’t wait for ‘tools’. Although many established methods are useful for systems-informed evaluation, there are no methodology recipes. Try to expand what exists. You can.
Two: Map systems. Use systems mapping to ‘see’ a system (or interconnected systems) and determine its boundaries – whether you are designing or evaluating interventions, or trying to understand and evaluate systems change.
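Even a rough map of influences can surface feedback loops and boundary questions. The sketch below is a minimal, purely illustrative example: the intervention, factors and links are invented assumptions, not findings from any evaluation.

```python
# Hypothetical causal-loop map for a school-feeding intervention;
# the factors and links are illustrative assumptions, not findings.
links = {
    "funding": ["meals served"],
    "meals served": ["attendance"],
    "attendance": ["learning outcomes"],
    "learning outcomes": ["community support"],
    "community support": ["funding"],   # closes a reinforcing feedback loop
    "food prices": ["meals served"],    # external influence: a boundary question
}

def reachable(start, graph):
    """All factors influenced, directly or indirectly, by `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# A factor inside a feedback loop eventually influences itself.
in_loop = sorted(f for f in links if f in reachable(f, links))
print("In a feedback loop:", in_loop)

# Factors that nothing in the map influences sit at the system boundary.
influenced = {t for targets in links.values() for t in targets}
boundary = sorted(f for f in links if f not in influenced)
print("Boundary candidates:", boundary)
```

The point is not the code but the questions it forces: which relationships close loops, and which factors sit outside anyone’s control – and therefore at or beyond the boundary of the system being evaluated.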
Three: Use theories of change carefully. Don’t focus on drawing boxes and arrows, or on clustering a bunch of content into logframes. Try to determine the main operating relationships, and always analyse the underlying assumptions – they are, in my view, the most important focus for analysis.
Four: Question dominant models and narratives. Work intensively with the implications of different societal worldviews, perspectives, values, norms and contexts. Do NOT automatically buy into dominant models and narratives about development, or about how change works in a particular society. They are very frequently too superficial, dominated by a simplistic view of the world.
Five: Shift attention back to design and implementation. When dealing with any form of intervention, shift the focus in evaluation from ‘impact’ back to design and implementation – but in ways different from what was done decades ago. While retaining some focus on outcomes and impact, focus more intensively on whether the design or implementation engaged with insights about complex systems. In other words, evaluate design approaches or principles, or do process evaluation that focuses on matters such as: whether alignment, synergies or synergistic effects were created or considered, internally and externally; whether leverage points (catalytic effects) were sought; or whether influences on progress or performance were anticipated or considered in order to adjust when needed.
Six: Use big, thick and warm data. Every evaluation will need a ‘bricolage’ – a combination – of methods. Use mixed methods, combining the strength of each, and work with ‘big’, ‘thick’, and ‘warm’ data wherever this makes sense.
Seven: ALWAYS systematically track unanticipated outcomes. Especially negative ones. For example, the Green Revolution increased cancer and farmer suicide rates, and some gender programmes disempower men enough to turn them violent.
Eight: Analyse power in systems. ALWAYS reflect on where power in the system lies, and what that means for the system, or for an intervention and its evaluation. Map power relations and use them in your analyses.
Nine: Work with trajectories. The world is not static. Things co-evolve – such as history and culture, or markets and institutions. Think of trajectories of change, not static snapshots, when considering ‘impact’; under some circumstances, poor achievement now might be only a dip caused by a particular set of circumstances, or even the start of a trend towards success. See what patterns arise.
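The snapshot-versus-trajectory distinction can be made concrete with a toy calculation. The scores below are made up for illustration: the most recent year dips, yet the overall trend still points upwards.

```python
# Illustrative (made-up) yearly scores for an outcome indicator:
# the latest value dips, but the overall trajectory still rises.
scores = [20, 28, 35, 44, 52, 47]

def trend_slope(values):
    """Ordinary least-squares slope over equally spaced time points."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

snapshot_change = scores[-1] - scores[-2]   # negative: looks like failure
slope = trend_slope(scores)                 # positive: an upward trajectory

print(f"Latest-year change: {snapshot_change}")
print(f"Trend slope per year: {slope:.1f}")
```

A snapshot evaluation at year six would report decline; the trajectory view reports sustained growth with a recent dip worth investigating – exactly the difference this tip is about.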
Ten: Start small. If implementing all these aspects sounds too complicated, take small steps:
- include the question “why” everywhere – this identifies the influences on the intervention, indicating interconnections
- include just a few complexity-aware evaluation questions in the next RFP
- use evaluation criteria that are complexity-aware, and/or
- draw a useful systems map that can inform the theory of change.