**5 min read**
I hope you will appreciate this interesting and pertinent blog post. I was in Beijing in 2003 at the inaugural meeting of IDEAS. Still quite new to evaluation, I was told that the impressive person taking the floor was famous in development evaluation circles. Our subsequent interactions over the years confirmed not only Robert (Bob) Picciotto’s standing and expertise, but also his humanity. In 2007 we invited him to Niamey, Niger to the Fourth AfrEA Conference as a speaker in a Special Stream on “Making Evaluation our Own” in Africa. At the end of the week, he refused to accept our compensation for his expenses and asked that we use it to promote this work, which eventually became what is now known as “Made in Africa Evaluation”. His vision and commitment continue to shape evaluation in visible and often less visible ways. I am therefore very pleased that he agreed to write this thoughtful post. A more comprehensive article on moving towards a complexity framework for transformative evaluation can be found in JMDE.
Strategies for transformation
According to Steve Waddell, systemic transformation is achieved through a combination of destruction and creation on the one hand, and confrontation and collaboration on the other. Thus, different organisations contribute to social change in distinctive ways:
|  | Destruction | Creation |
| --- | --- | --- |
| Collaboration | Missionaries (Unilever) | Lovers (Gates Foundation) |
| Confrontation | Warriors (Greenpeace) | Entrepreneurs (Ashoka) |
The same conceptual scheme is highly relevant to social interventions, including evaluation. Evaluation, however, is highly adaptable: given the wide variety of evaluation models on offer, the discipline cannot be pigeon-holed into any single quadrant. Indeed, it can be structured to make a difference in the public interest, irrespective of the characteristics of the strategic context and the authorising environment.
Evaluation as a complex system
Complexity thinking can help design evaluation governance by conceiving of evaluation as an adaptive sub-system. Specifically, it is useful to view evaluation as a social function located within a broader social system where feedback takes place between different levels of a hierarchy. Order parameters lodged within the evaluand shape evaluand action. They are activated by user-directed evaluation. By contrast, evaluator-directed evaluation triggers higher-level control parameters to influence evaluand behaviour indirectly, through changes in the order parameters.
Evaluation is user-directed when it is embedded in the evaluand. It is evaluator-directed when it is independent and stands at arm’s length from it. Interactions between order parameters located inside the evaluand and control parameters located outside it generate negative feedback that promotes stability and strengthens hierarchy, and/or positive feedback that generates instability and challenges hierarchy.
User-directed vs evaluator-directed evaluation
The main value of evaluator-directed evaluation lies in its independence from powerful interests, which invariably seek to capture the evaluation process. Even highly principled internal evaluators are vulnerable to pressure from the hierarchy, while external evaluators are controlled through fee dependence. In both cases, user-directed evaluation cannot readily speak truth to power.
This said, user-directed evaluation has enormous advantages where all that is required is a gradual adjustment process, for example where only piecemeal social engineering is needed or minor course corrections are sufficient to re-align internal agents’ behaviour towards achievement of relevant goals. Thus, the ‘weak ties’ associated with user-directed evaluation are especially efficient and effective in stable operating environments.
So, where the strategic context only requires gradual adaptation, disruptive change is undesirable. Where the authorising environment is amenable to reform, confrontation is redundant. This felicitous combination is the sweet spot where user-directed evaluation is an effective approach that helps achieve decision makers’ goals through single loop learning. User-directed evaluation is also appropriate when goals only need adaptation to moderate changes in the strategic context (double loop learning).
On the other hand, user-directed evaluation is inadequate where disruptive innovation is called for, and where hierarchy needs to be confronted. This is when deep and rapid societal change in the operating environment (a primary phase transition) is needed and where restructuring of the mechanisms and rules that govern the evaluand must come into play (triple loop learning).
Selecting the right evaluation model
User-directed developmental evaluation is highly effective where the authorising environment and the organisation are both progressive. But what if vested interests have captured the enabling environment while the organisation is dedicated to social transformation? In such circumstances, user-directed evaluation involves advocacy.
Where on the other hand the enabling environment is democratic, but the organisation has been captured by vested interests, ethical evaluators should engage in adversary evaluation that implies resort to an evaluator-controlled evaluation model.
Finally, when the enabling environment is undemocratic and the decision-making organisation is controlled by unethical vested interests, only independent evaluation is appropriate – a subversive evaluation approach implemented in league with progressive local community organisations and civil society organisations.
|  | Destruction | Creation |
| --- | --- | --- |
| Collaboration | Advocacy Evaluation | Developmental Evaluation |
| Confrontation | Subversive Evaluation | Adversary Evaluation |
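Read as a decision rule, the model selection described above can be sketched in a few lines. This is purely an illustration of the logic in the preceding paragraphs; the function and parameter names are mine, not part of the framework:

```python
def select_evaluation_model(environment_progressive: bool,
                            organisation_progressive: bool) -> str:
    """Map the state of the authorising environment and of the organisation
    to the evaluation model suggested in the text (illustrative only)."""
    if environment_progressive and organisation_progressive:
        # Both progressive: collaborative, user-directed developmental evaluation.
        return "Developmental Evaluation"
    if organisation_progressive:
        # Captured environment, transformative organisation: advocacy.
        return "Advocacy Evaluation"
    if environment_progressive:
        # Democratic environment, captured organisation: evaluator-controlled adversary model.
        return "Adversary Evaluation"
    # Undemocratic environment and captured organisation: independent, subversive evaluation.
    return "Subversive Evaluation"
```

For example, `select_evaluation_model(False, False)` returns "Subversive Evaluation", matching the case where both the enabling environment and the organisation are captured by vested interests.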
User-directed evaluation holds a comparative advantage over evaluator-directed evaluation where the strategic context is relatively stable and collaboration is possible. On the other hand, evaluator-directed evaluation is the way to go where a drastic reorientation of policy directions is needed. Equally, where the authorising environment shows low receptivity to primary phase transitions, confrontation through evaluator-directed evaluation is required.
In sum, the most resilient configuration in an unstable and volatile strategic context lies in a judicious combination: evaluation governance should strike the right balance between user-directed and evaluator-directed evaluation taking account of the characteristics of the strategic context and the authorising environment.
Finally, evaluation governance matters enormously, but evaluator competency matters too.
Using Waddell’s terminology, advocacy evaluators are missionaries; adversary evaluators are entrepreneurs; subversive evaluators are warriors… and developmental evaluators are considerate lovers! Experienced evaluators must learn to shift from one role to another depending on the evaluand, the strategic context, and the authorising environment.
Given the major social transformations required by the contemporary social and environmental predicament, versatility has become an evaluator competency imperative.
Robert Picciotto is visiting professor at the University of Auckland, a former Director-General of the Independent Evaluation Group of the World Bank Group, and a former member of the UK Independent Advisory Committee on Development Impact. He continues to provide independent evaluation advice to many governments and multilateral organisations.