In this short series of posts about the NICE Framework I argue for more intensive and creative research and innovation in evaluation – especially research that is rooted in, and respectful of, the important nuances of different societies. We need to create a strong research agenda for evaluation with solid linkages to application in practice – one that can be supported by many individuals and organisations worldwide.
Then we can help accelerate the development of evaluation theory and practice suitable for this new era, which is being shaped by the Sustainable Development Goals and the Fourth Industrial Revolution as well as incredible disparities, severe geopolitical competition for power and resources, and extremely worrying climate change and demographic trends. We are over-populating and over-consuming the world, with dire consequences for its ecosystems.
Among many other implications, focusing more intensely on research and innovation means that evaluation specialists, and especially those in the Global South, have to be knowledgeable about the societal philosophies and cultures in which they work, and about the dynamics that influence these contexts. It also demands that we do much more to bring scientists and practitioners in other very relevant disciplines sensibly into our research and innovation efforts.
Elements of a research agenda for evaluation
The research agenda I present here is based on the fact that we can approach research and innovation in evaluation from two opposite directions when using the NICE Framework:
One, we can start with the NICE Framework. In other words, we can start from our own (country’s) contexts and cultures and their influence on society’s dispositions (or attitudes and ways of operating). We can then explore the implications for evaluation (or any of its components) as it is currently conceptualised and practiced. Where are changes needed so that evaluation can be sufficiently rooted in the culture(s) and context(s) of each country and its societies? Where are instances where we can and should develop new theories and/or practices in response?
Or two, we can start with one or more of our current evaluation theories (or evaluation designs, systems, methodologies, methods, principles, standards, underpinning values, and so on). We can then analyse the theory in the light of the elements of the NICE Framework, and see whether it can and should be adjusted, or whether it is not really appropriate – conceptually or in practice – because its underlying values and assumptions do not work in our specific context(s).
Then we can innovate – make adjustments, or develop new theories, practices, etc. – guided by such insights.
This is what I highlight in the Research Agenda diagram provided:
The section at the top of the diagram highlights the fact that we need renewal in our evaluation theories and practices in order to get renewal in our craft, including in our evaluation education, ‘capacity development’ efforts, and in how we perceive quality and competency in evaluation.
The middle section of the diagram presents the two strategies (1. and 2.) for setting up the foci for research that I described above.
The bottom section of the diagram confirms the two approaches, and highlights the need to link all of this to (i) multiple disciplines and fields of work, including many that are not normally considered in evaluation, and (ii) development that is appropriately defined for this era.
What does this mean in practice?
For example, we can examine theory-driven evaluation (TDE) by starting from its theoretical underpinnings and how it is practiced:
How will theory-driven evaluation be approached in a society that is largely deterministic (parts of the West), compared to one that does not readily believe in the predictability of life and hence of change (parts of the East or Africa)? Will it still be useful?
Or among societies with vastly different notions of the nature and role of spirituality; or of context; or of causality?
Or those who think we cannot understand the part without understanding the whole, and who focus more on situational than personal dispositions (parts of the West vs parts of the East)?
What role can and should systems thinking and complexity science play if we want to apply theory-driven evaluation in such cases?
On practical matters:
What will theories of change look like in societies that have a cyclical view of the world (many indigenous peoples)?
Or where contexts change very fast due to the fragility of the society and its environment (many low-income countries)?
How applicable are our theories and the examples in the literature about how change happens (say, how capacities are built, or how people are "empowered"), based as they usually are on Western values and experiences? Will the same theories hold in societies where, for example, the 'empowerment' of individuals as conceived in large parts of the West does not resonate well because harmony and the advancement of society trump individual advancement (parts of the East)?
What should our participatory processes of building a theory of change be like when we are working in societies where speaking up is not necessarily seen as a sign of being smart or on top of one’s work – and where public confrontation about differences or performance is not acceptable?
On the other hand, if we start with the NICE Framework as a point of departure, we will examine one, some or all of its ten elements from the perspective of a specific societal culture and context; determine what theories and/or practices could follow, and what these elements mean for our standards, guidelines, and the values and principles on which we base our assessments; and then establish to what extent this relates to the theories and practices in use today.
We face several constraints that challenge any effort to build a strong research agenda. These challenges are especially pertinent in the Global South, yet this is where much of the potential for research and innovation lies.
One, we need investors who are interested in (jointly) funding research on evaluation. Due to the professional status of evaluation in academia and elsewhere, the current systems do not focus on funding for research and innovation in evaluation – whether government research grants or support from the aid or philanthropic systems. Those with incentives and time to do research – evaluation think-tanks and universities – are still very few in the Global South; we do not have the necessary architecture to support thought leadership and research in evaluation. This has to change.
I wish we had a platform of funders interested in funding research and innovation in evaluation, who can support a well-coordinated common research agenda with priorities that will move us forward.
Two, we need more systematic experimentation in our usual practice. It would be great to take opportunities to test ideas in practice, supported by evaluation commissioners as well as evaluators. Both the usual terms of reference and our own incentives (and sometimes capacities) to build experimentation and new ideas into evaluations are sorely lacking. Moreover, it is not clear that those who teach evaluation, or those who commission it, have the interest or even the capability to escape the conventional boxes in which our 'training' has put us. We have to work to change that.
We need early adopters – and ideally a group of them who commit to doing that!
Three, we have to focus on making what we do visible in ways that are influential. Unless publishing papers counts a lot – as it does among academics and development economists – we have few if any incentives, and little time, to publish enough to put an official stamp on something novel we have done. This is something we should be able to fix quite easily. Initiatives such as the African Evaluation Journal (AEJ) and conferences help (a lot), but we need not only greater visibility but also more influence for what is being done. And we need to understand and communicate better what is being achieved.
Four, the global evaluation system has to become better at supporting thought leadership in evaluation, especially in the Global South. We keep running and taking courses in methodologies, methods and "tools" – almost always drawn from conventional evaluation practice, with examples contextualised for our societies. This is necessary, but not sufficient! In the absence of vigorous academic activity there is almost no effort by anyone to build or mobilise 'thought leadership' in the South – or even an understanding of the evaluation theories that underpin our field!
But we are also not sufficiently focused on creating new things. From what I can see, most of us are only too happy to follow recipes for 'methods' and 'tools'. This is part of the reason for the new South-to-South Evaluation (S2SE) Initiative. We need to do more, for example, to follow up on ideas proposed at the Bellagio meeting on thought leadership in evaluation in Africa, organised by CLEAR-AA and AfrEA in 2012.
Five, we can argue we should not make life more difficult for ourselves. We can just stick with business as usual. That is hard enough already! But then we might not support development well in an era when our societies, our ecosystems and our planet need it most.
It is up to each of us to decide where we want to spend our energy.
We have opportunities!
In spite of my lament, it is clear that this is also a good time for renewal around the world, including in evaluation.
There are many opportunities to be creative in our studies, in our teaching, in our design and implementation of evaluations and evaluation systems. We just have to put our mind to it, learn from those who have gone before, and resolutely mobilise whatever resources we have – time, money, facilities, expertise – to encourage our best creativity.
I trust that the ideas around the NICE Framework and a Research Agenda for Evaluation will move us a step forward on this path.