Many years ago, at the beginning of my evaluation career, I read about ‘dialogue evaluation’. It is a simple mechanism that we can use more frequently to mediate between an independent evaluation and self-evaluation – two distinct approaches that many see as incompatible. Dialogue evaluation (see below) can have real benefits in this context, yet it is seldom applied when independent evaluation teams descend on countries or societal groups under evaluation – especially when the evaluation is at a strategic rather than micro (‘community’) level, and a ‘parachute in, parachute out’ evaluation team is used.
I have not seen the term used to refer specifically to the systematic comparison of self-evaluation findings with the (emerging) findings of an independent evaluation, and to understanding the reasons for any differences. It relates to what is usually called dialogue in evaluation, or dialogic evaluation: engagement and interaction between evaluators and stakeholders, ideally but not necessarily to reach some type of consensus (see here and here). This is usually framed as ‘participatory evaluation’, often considered anathema in independent evaluations.
Jennifer Greene wrote eloquently about dialogue in evaluation in a special issue of the journal Evaluation in 2001. She defined dialogue as “a vision of learning about our differences and moving towards legitimizing and accepting them”. “The purpose of such evaluative dialogue is to enable stakeholders to more deeply understand and respect, though not necessarily agree with, one another’s perspectives. Such understandings, in turn, can engender more reciprocal, equitable and caring stakeholder relationships in that context, as embodied in an improved, transformed or even revolutionized evaluand. In these ways, dialogic evaluation constitutes an important democratic activity in society…. Dialogue …. directly engages the moral–ethical and politicized power relationships among stakeholders …. directly invokes the key interests in an evaluation, as it provides a space for the sharing and reciprocal understanding of these interests and for the possible and initial development of some common ways of seeing and acting, some collective perspectives on meaning.”
It has become good practice to ask management or executive teams to produce a self-evaluation report aimed specifically at informing an independent evaluation. Evaluation teams commissioned to do independent evaluations tend to use such a report as one of many inputs into their process. They seldom engage in a systematic analysis followed by one or more systematic conversations – a dialogue that is detailed, respectful and mutually beneficial – about emphases, issues and (emerging) findings that differ between the two evaluative exercises. Too often such engagement happens only at the end of a field visit, or at the end of the independent evaluation during a ‘stakeholder workshop’ or a request for a ‘management response’. By then it is usually too late to analyse each other’s sources of evidence, their different interpretations, and the root causes of such differences – or at least to develop mutual respect for the differences.
Having such a dialogue with those who did the self-evaluation will not compromise the independence of the evaluation, nor will continuing to shape data collection, final conclusions and/or recommendations with these conversations in mind.
When opportunities for mutual learning, respect for alternative evidence or interpretations of evidence, and in-depth understanding are lost, the quality and credibility of the evaluation suffer. A stronger focus on more systematic and analytical ‘dialogue evaluation’ in evaluation methodology guidelines can add significant value to the utility and credibility of independent evaluation processes.