Guest post. Drawing from Complexity Science to do Evaluation: Knowledge to Drive Operational Decisions


**5 min read**

Jonny Morell is one of evaluation’s sharpest thinkers in the realm of systems and complexity sciences. He is always exploring new frontiers, experimenting with and advancing how we can – and need to – think about what we do. His well-known book, Evaluation in the Face of Uncertainty, is an excellent example, in which he describes and analyses the different kinds of ‘surprises’ we may meet when doing programme evaluations. His work deserves a high profile among those interested in moving evaluation theory and practice forward. Through this blog post Jonny asks us to help with his thinking by critiquing an article he is writing on the use of complexity science in evaluation – see the link at the end of the post and take on the challenge of furthering his thinking!

“I am going to analyse the data with statistics”.
“I am going to apply statistical thinking to my data”.
“I am going to use logistic regression to analyse my data”.

The first statement is true but useful only to point at a realm of knowledge. The second and third statements are more conducive to meaningful explanation. If someone asked me about statistical thinking, I would talk about true score and error, what it means to sample, that probability estimates can be attached to observations, and so on. When I finished, my audience would understand the logic and mindset that I am bringing to my work. If someone asked me about logistic regression, I would explain that it is a statistical method that works for classes of events, that it can be used for prediction, and so on. If they wanted to apply the method, I would get into the gory details.
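
To make the contrast concrete, here is what the “gory details” level might look like in practice. The sketch below is purely illustrative – made-up data, with the scikit-learn library assumed as the tool – but it shows the kind of operational specificity the third statement points to: fit a logistic regression to a hypothetical binary outcome, then read off a coefficient and a predicted probability.

```python
# Purely illustrative: a minimal logistic regression on made-up data,
# using scikit-learn as an assumed tool. Nothing here models a real programme.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hours_of_service = rng.uniform(0, 40, size=200)   # hypothetical predictor
# Hypothetical binary outcome whose odds rise with hours of service received.
p = 1 / (1 + np.exp(-(0.15 * hours_of_service - 3)))
improved = rng.binomial(1, p)

model = LogisticRegression().fit(hours_of_service.reshape(-1, 1), improved)
print("estimated coefficient:", round(model.coef_[0][0], 2))
print("predicted probability of improvement at 30 hours:",
      round(model.predict_proba([[30]])[0][1], 2))
```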

As with statistics, so it is with complexity. We in the evaluation community tend to invoke complexity in a way that is analogous to the first two statements. We recognise that the world behaves in complex ways, and that to do good evaluation we must appreciate complex behavior. (I like to think in terms of complex behavior because I do not know what a complex system is. But whatever the definition of a complex system, I do know how such systems behave.)

Using Complexity to Make Operational Decisions

What gives me pause is that too little evaluation invokes complexity in a way that approaches the third statement, i.e. in a way that points to a particular construct and applies that construct to make operational decisions about models, methodologies, data interpretation, or how we speak to our customers. I want to nudge our use of complexity in that direction. Here are some examples.

Example 1 – Causal linkages

I can design an evaluation based on the assumption that in addition to specifying an outcome, it is also possible to specify a set of linked intermediate outcomes that will tell me how much progress I am making. Or I can design an evaluation based on the assumption that it is impossible to specify intermediate outcomes such that knowing them will indicate progress toward achieving the desired goal. Or I could tell my customer that in retrospect I can tell him or her what the intermediate stages were, but that future planning cannot rely on a repetition of that causal chain. To which evaluation design should I commit your tax money and mine? The answer depends on what I believe about the consequences of sensitive dependence in a network of potentially causal factors.
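
Sensitive dependence can be made concrete with a toy simulation (illustrative only, and not drawn from the article – the logistic map is a standard teaching example from complexity science). Two runs of the same simple nonlinear model, started from almost identical conditions, soon bear no resemblance to one another, which is exactly why an observed chain of intermediate outcomes may not repeat.

```python
# Illustrative sketch of sensitive dependence using the logistic map,
# a standard toy model from complexity science (not taken from the article).
def logistic_map(x0, r=3.9, steps=30):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

run_a = logistic_map(0.500000)
run_b = logistic_map(0.500001)   # differs only in the sixth decimal place

for t in (0, 10, 20, 30):
    print(f"step {t}: {run_a[t]:.4f} vs {run_b[t]:.4f}")
# After a couple of dozen steps the two trajectories have diverged completely,
# even though the "model" generating them is identical.
```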

Example 2 – Innovation adoption

Imagine an evaluation that requires assessing the success of an innovation adoption effort. Obviously, a successful adoption pattern would follow the extensively demonstrated “S” curve: a point at which the adoption rate sharply increases, and a later point at which it flattens as the percentage of adopters in the population approaches saturation. Knowing that, I might design an evaluation that looked at factors such as the number of potential users who were contacted, the number of influencers who were involved, the number of demonstrations conducted, and so on. But I could also make the effort to trace the network of contacts among adopters, expecting to see a scale-free, fractal-like pattern if adoption were fueled by a preferential attachment process.
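
That second design can also be made concrete with a toy sketch (illustrative only; the numbers are invented). Grow a contact network by preferential attachment – each new adopter links to an existing adopter with probability proportional to that adopter’s current number of contacts – and the tell-tale skew appears: a handful of hubs hold a large share of all contacts, which is the pattern the network-tracing design would look for.

```python
# Illustrative sketch (not from the article): grow an adoption-contact network
# by preferential attachment and inspect how skewed the contact counts become.
import random
from collections import Counter

random.seed(1)
edges = [(0, 1)]            # start with two linked adopters
attachment_pool = [0, 1]    # each appearance of an adopter = one existing contact

for new_adopter in range(2, 2000):
    target = random.choice(attachment_pool)   # chance proportional to contacts
    edges.append((new_adopter, target))
    attachment_pool.extend([new_adopter, target])

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

top10 = sum(d for _, d in degree.most_common(10))
# Under equal sharing, 10 of 2,000 adopters would hold 0.5% of all contacts.
print("share of contacts held by the 10 best-connected adopters:",
      round(top10 / (2 * len(edges)), 2))
```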

Example 3 – Adaptation

Imagine evaluating an AIDS prevention program. The essence of the model might be: design program > implement program > deliver services > debilitating effects of AIDS decrease > various consequences of improved health in the AIDS population increase. That is a reasonable and defensible model, but another equally reasonable and defensible model might be conjured in the mind of an evaluator who thought in terms of programmes as organisms sharing an ecosystem. In that model the AIDS program would be cast as a new organism in an ecosystem composed of programmes delivering primary health care, maternal health, tertiary services, and so on. Whatever the available resources, those programmes will have adapted to be about as good as they can be. How might they evolve when the AIDS program draws money, skilled staff and policy-making attention away from existing services?
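
A deliberately crude sketch of that ecological framing (illustrative only; the staffing numbers and the 15% recruitment rule are invented): programmes share a fixed pool of skilled staff, a new AIDS programme recruits from the others until it reaches its target size, and the established services are left to adapt to a smaller resource base.

```python
# Illustrative toy model (not from the article): programmes as organisms
# competing for a fixed pool of skilled staff.
staff = {"primary care": 50.0, "maternal health": 30.0, "tertiary": 20.0, "AIDS": 0.0}

for year in range(1, 6):
    for name in ("primary care", "maternal health", "tertiary"):
        # Invented rule: the new programme recruits 15% of each existing
        # programme's staff per year, until it reaches a target of 30 staff.
        transfer = min(0.15 * staff[name], max(0.0, 30.0 - staff["AIDS"]))
        staff[name] -= transfer
        staff["AIDS"] += transfer
    print(f"year {year}:", {k: round(v, 1) for k, v in staff.items()},
          "| total staff:", round(sum(staff.values()), 1))
# The point is not the numbers but the framing: the other organisms in the
# ecosystem do not stand still when a new one arrives.
```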

In conceptual terms, the complex behaviors in each of these examples give us valuable insight into how programmes operate and what they accomplish. They all cast novel light on our understanding of pattern, predictability, and how change happens. In practical terms, they have implications for decisions about money, time, and political capital.

Critique Needed

I am producing a fuller explanation of how I think evaluators should draw from Complexity Science in an article I am writing, A Complexity-based Plan for Evaluating Transformation. The complex behaviors I use are attractors, emergence, and sensitive dependence. It’s in draft form and I need critique on any or all parts of it. Go here to see the abstract. If you want to point out the errors of my ways, send me an email and I’ll send you the article. Thanks in advance.

