Evaluation’s Journey towards the Future, Part 3. The tributaries that make up our field


Evaluation does not flow from a single source. Like a mighty river fed by countless tributaries, today's evaluation practice draws from streams that rise in every corner of our world—each shaped by distinct landscapes of purpose, power and perspective. Yet our field's official histories often recognise only one dominant current, obscuring the rich diversity that gives evaluation its true strength and wisdom.

If we want to chart new futures, we must first map the waterways that brought us here.

I invite you to wade into these currents with me—from the engineered canals of technocratic systems to the ancient springs of Indigenous practice—and discover how they might flow together towards more inspiring evaluation futures.

Engineered Canals: The Technocratic Stream of Government and Global North Dominance

Government and aid-driven evaluation practices have often resembled an engineered canal—rigid, controlled, linear and designed primarily for accountability. These approaches have long prioritised cost-efficiency and funder accountability, and they continue to lean on frameworks such as the OECD DAC criteria, heavily structured logical frameworks and still largely linear theories of change. More often than is desirable, methods such as randomised controlled trials dominate, reinforcing a narrow definition of “rigour”.

This stream is fed by a belief in evidence-based policy. Yet critics argue that its foundations and methods can easily lead to 'policy-based evidence', where evaluations are designed and interpreted to support predetermined policy decisions rather than genuinely informing them.

In the Global South, government evaluation has become more like a delta—diffuse, messier, adaptive, shaped by colonial legacies as well as local innovation. Rwanda's Imihigo system, for example, merges traditional performance contracts with modern metrics, while Colombia’s SINERGIA system integrates citizen monitoring with government performance tracking. Yet power imbalances persist: donor-driven frameworks still overshadow local innovations, and governments struggle to balance imported models with grassroots realities.

Swift Currents: Market-Driven Evaluation in the Private Sector

Here, evaluation flows fast and transactionally, like a swift tributary prioritising ROI, agility and consumer insights. Technology companies use real-time dashboards, A/B testing and predictive algorithms, while corporate social responsibility (CSR) teams often assess impact through ESG metrics, although this approach is now shifting under growing scrutiny.

This pragmatic stream tends to flatten social complexity into rather simplistic quantified indicators, often privileging metrics over meaning, although more sophisticated businesses increasingly employ mixed-methods approaches to capture the qualitative dimensions of impact. Meanwhile, social enterprises in the Global South attempt to fuse market logic with participatory methods to capture community-defined impact.

The private sector's approach remains distinct in its emphasis on speed and shareholder interests over more complex efforts at systemic change, although this is slowly changing as stakeholder capitalism gains some traction. Yet its tools, such as predictive analytics and real-time feedback loops, increasingly influence the philanthropy, development and humanitarian sectors.

Braided Waters: Justice-Seeking Evaluation in Development and Humanitarian Work

This stream resembles a braided river—multiple channels interlacing rights, equity and survival in shifting, often turbulent flows. Development evaluation emphasises participation, co-creation, empowerment and sustainability. In humanitarian evaluation, ethical dilemmas loom large: how to measure impact amid crisis, and how to uphold dignity in data collection. Feminist evaluation and the intersectional and DEI (diversity, equity and inclusion) movements continue to challenge established power structures, asking not just what works, but for whom, under what circumstances and at what cost to marginalised voices, and insisting that how we evaluate is as important as what we evaluate.

This stream faces a fundamental tension: it is torn between donor demands for neat metrics and attributable impacts, and the messy realities of addressing poverty, conflict and an intensifying polycrisis of interconnected global challenges that resist simple measurement or attribution.

Ancient Aquifers and Springs: Indigenous and Community-Led Evaluation Traditions

Indigenous evaluation traditions resemble aquifers and springs—deep, rooted in place, fed by ancestral knowledge, flowing at the pace of trust and guided by values of relationality and reciprocity. They are not mere methods, but worldviews in which knowledge is situated, ceremonial and lived.

Māori evaluators use whānau (family) approaches, drawing on communal dialogue, spiritual connection and land-based indicators. Canada's First Nations emphasise ‘Two-Eyed Seeing’, integrating Western data with Indigenous wisdom. Aboriginal Australian approaches centre dadirri—deep listening and patient observation. Central American Indigenous communities use vivir bien (living well) as an evaluative framework emphasising harmony with nature. These practices reject extractive methods and redefine ‘rigour’ as relational accountability, where the integrity of process and relationships determines the quality of the evaluation.

The clash with technocratic approaches is stark: Where aid-driven evaluation might measure forest conservation through satellite imagery, Indigenous evaluators might prioritise whether the forest feels healthy to those who live in it. Where fellowship programmes might emphasise individual achievement, Indigenous evaluators value relationships and contributions to community that strengthen the web of life.

Hidden Brooks: Evaluation in Professional Practice

Not all evaluators wear the label. They move unnoticed, small brooks weaving through daily life, rarely intersecting with ‘formal’ evaluation circles—teachers assessing student growth or doctors tracking patient outcomes.

Their practice is intuitive, responsive and rooted in lived relationships. It centres practical wisdom and context-sensitivity. A nurse evaluating community trust in vaccines is not citing OECD criteria; she reads faces, listens to emotions and concerns, and adapts accordingly.

These hidden evaluation practitioners remind us that evaluation is not only a profession or systematic practice. It is a deeply human impulse to learn, adapt and improve.

Digital Rapids: AI, Big Data and Technological Disruption

A fast, churning tributary is now reshaping the riverbed itself. Digital evaluation promises speed, scale and the seductive illusion of ‘objectivity’. From sentiment analysis to real-time dashboards, digital tools are rapidly transforming what we know and how we act, risking the reduction of human complexity to algorithmic outputs. This tributary also threatens nuance: algorithms can reproduce systemic biases, while machine learning models often obscure their assumptions behind the opacity of a black box.

Digital tools are reshaping streams differently. In the Global North, ethical debates around AI and data privacy dominate. In the Global South, practitioners also frequently repurpose technologies for local needs—such as favela activists in Brazil using social media to document police violence, creating grassroots assessment of state accountability, effectively turning platforms into real-time evaluative tools. In East Africa, farmers use mobile platforms to evaluate agricultural extension services, bypassing traditional donor-led practices.

Where Waters Meet: Navigating Confluence and Conflict

These tributaries do not flow in isolation. A single programme might draw from government metrics, participatory design, Indigenous worldviews and AI-powered analysis. But frictions persist, for example over whose knowledge counts most. When donors or governments or the private sector insist on inappropriate methodologies, they effectively dam or divert the ancestral and justice-seeking streams, privileging narrow definitions of 'rigour' over contextual relevance, and marginalising plural forms of knowledge.

Reflection Questions

  • Which evaluation traditions influence your own practice?
  • Where do you see power dynamics shaping which approaches are valued or dismissed?
  • How might different evaluation streams better learn from one another?

The River’s Call

As we navigate the future of evaluation amid polycrisis, decolonisation and technological disruption, we stand at a critical confluence. The river's vitality lies in its diversity—the interplay of traditions that challenge, complement, enrich and transform one another. Yet history has too often elevated one stream while damming or diverting others, privileging dominant voices as the only legitimate sources of evaluative knowledge.

Our challenge now is not merely technical, but deeply philosophical and political: Who defines what evaluation is and should be? Whose knowledge shapes its flow? And how might we create conditions where all tributaries—technocratic, Indigenous, market-driven, justice-seeking and more—can bring their wisdom to a more holistic and equitable evaluative practice?

The future of evaluation does not lie in privileging one stream over others, but in cultivating the wisdom to know which waters best nourish which landscapes of human effort—and having the courage to let the river flow where it must.



5 Comments

  1. Elma Scheepers on May 13, 2025 at 12:40 pm

    Dear Zenda, just love this: “These hidden evaluation practitioners remind us that evaluation is not only a profession or systematic practice. It is a deeply human impulse to learn, adapt and improve.” After 30 years in the field I have noticed the following. Getting consciously or unconsciously hitched to the latest headless new invention, or to the cadre or caste of the cream of the crop in evaluation, and then the madness of keeping up with the Joneses, so to speak, have created an emptiness where the “mensch” gets left behind in a lot of cases (here we can look at the latest AI developments), and that which was unique and really useful for forward development and learning becomes mundane and forgotten. We end up in a soup of technocracy and meritocracy… a beauty lost.

    • Zenda on May 13, 2025 at 1:18 pm

      A great comment Elma, and as you say so well, we end up in a soup of technocracy and meritocracy, a beauty lost. In my own evaluation work I focus on what I consider the art and beauty of evaluation. It has to inspire and empower, infused by systematic techniques but without them being at the centre of how we think. This is also why I dislike the use of the term “tools”. We are not technicians or mechanics! Yet our practice often reflects something akin to this.

  2. Kennedy Oulu on May 13, 2025 at 6:06 pm

    @Zenda you raise some important reflections that enrich our understanding and practice of evaluation. As an evaluation practitioner working from the lens of power, decolonization, localization and equity & inclusion, I submit that the future needs a new crop of evaluators.
    This week, I am facilitating an Indigenous initiative on biodiversity and climate solutions called #podong or #basket from the Bangladesh extraction. Its philosophy is founded on: Indigenous-led action, self-determination, traditional knowledge, power to Indigenous peoples, Indigenous peoples’ capacity to manage environment/climate finance, and context-sustainable solutions.
    Now, when I listen to the performance framework, it is designed to meet donor demands, and I ask this conclave:
    1. Who created the performance measurement framework? (Power issue)
    2. IPs in their presentations are interested in: How do our successful models of conflict resolution facilitate communal land tenure and rights? How do we leverage our communal relations to secure cultural heritage and sacred sites where legal frameworks do not allow for communal land tenure rights? How do we ensure that every member of an IP community understands, articulates and practises what traditionally works to nurture/conserve biodiversity, livelihoods and community and planetary health (traditional knowledge)? And how do we amplify that traditional knowledge to influence IPs’ inclusion, recognition and the practice of conservation, funding mechanisms and nature-based governance systems?

    What emerges are these self-reflective questions:
    1. Whose power shapes the conception?
    2. Whose impact is privileged?
    3. Who protects and conserves nature and biodiversity?
    4. Whose knowledge reigns?
    5. Whose accountability (to donors or to the IPs’ aspirations)?
    6. What philosophy defines the MEL framework for such an IP system?
    7. Why do we act like robots, when the voice and philosophy of IPs is so clear?
    8. If the IPs prioritize their learning questions in this programme, they are accountable for the outcomes; they will use their knowledge systems to drive actions that holistically align with their needs; they will sustain the system because the system and how it inter-relates is their life, livelihood and well-being. Indeed their ‘happiness’.

    We can work with IPs to co-craft and pilot such a framework from their lens, so that development partners, stakeholders etc. learn and change policies & practices to work for IPs and others. This is what transformation in mindsets, impact/scale, policies and practice actually looks like.

    This may not fall within your categories, but it is within how the river of ‘evaluation’ flows.

    • Zenda Ofir on May 13, 2025 at 8:22 pm

      It is excellent that you raise these issues in conversation with your Bangladeshi stakeholders, Kennedy. This is what we need. Your points relate best to the “braided waters” tributary of the river metaphor, and you give a good example. You also refer to the need to change not only our practices, but mindsets about how we approach our evaluation work – and how others see what we do, or what they can do when they engage with evaluative thinking. And for that we need, as you say, a new crop of evaluators and especially also evaluation commissioners – or at least persons who can adjust to new thinking in our space.

  3. Bob Williams on May 13, 2025 at 11:25 pm

    Kia ora Zenda. I connect with both the message and the metaphors. I’m in the final stages of writing a workbook or manual for using a systems approach in evaluation that specifically addresses issues of values and power. I always write introductions last, and this will help inspire me when I write it in the next week or so. Also, for some reason, I’m reminded of Kuhn’s Paradigm ideas and the experience of my brother, a marine microbiologist, who for nearly two decades was part of a group that argued for a radical and critical approach to change how we think about energy generation in the oceans. The older paradigm survived long after it had become obviously wrong. You and many others have been pointing out the issues raised in this note for many decades. The evaluation craft, on the whole, nodded politely but continued with the dominant methods and values. [I’ve experienced the same tendency in my attempts to get evaluation to adopt or adapt systemic practice … often desperate attempts to squeeze these ideas into the dominant evaluation framing] For all the wrong reasons, Trump has not only shown where the power really lies but also used it, exposing the contradiction that the evaluation craft has often acknowledged but collectively chosen to ignore. Its pants are genuinely now on fire. I doubt it could have been avoided given the nature of the dominant paradigm, but some did manage to work within the new paradigm, and it is now our job to build on what they achieved.
