Deborah Rugg’s Top Tips for YEEs: The 12 Secret P’s of Evaluation Success!

Deborah is a highly successful, dynamic and influential evaluation professional with boundless energy that inspires wherever she goes – including many young people, as I observed when I had the pleasure of teaching earlier this year in her Executive Leadership Programme for Evaluation and the SDGs at the Claremont Evaluation Centre in New York. She has been a lifelong advocate for systems change, as her impressive track record in the UN and elsewhere shows. These VERY useful 12 Top Tips reflect some of the vast experience captured in her forthcoming book: How to Create Change: Adventures, Lessons and Confessions of a Global Evaluator.

Top Tip 1. Pose questions in plain language that even your parents could understand. Start with four simple questions to get stakeholders engaged. A smart evaluator knows the first step is to genuinely pique the curiosity and engagement of the relevant policy-makers, decision-makers, and/or program managers and staff. They need to feel inspired, unafraid to ask questions, and genuinely curious about how to make their policies or programs work better. Try inspiring them to ask fundamental starter questions like:

  1. Are we doing the right things?
  2. Are we doing these things right?
  3. Are they working? If so, why; if not, why not? How can we improve?
  4. Are we doing things on a big enough scale to actually make a difference?

These four essential questions get at the crux of what needs to be understood to get better results. The first question is about understanding the nature of the problem, implicitly the BIG PICTURE, and whether programs are realistically designed to be effective. The second question is essential because interventions can be proven effective under research conditions but fail in the real world due to poor implementation. The third question focuses on understanding whether the program is actually making a difference; it is the question of outcomes and impact.

When programs are implemented with fidelity and quality in the real world, and are effective, the next question is whether they are being scaled up to reach all in need. We must always have the bigger picture in mind, even if the current evaluation is focused on just one piece. Inspiring a natural curiosity about these questions often secures early buy-in and is fundamental to inspiring use of the results when the evaluation is done.

Top Tip 2. Start with program impact pathways and a sound pre-evaluation process. As one of the well-known gurus of evaluation, Michael Quinn Patton, advised us decades ago, good evaluations should always start with proper prior planning and dialogue. Listening to and effectively engaging all stakeholders in the evaluation process right from the start, before the evaluation begins, is critical.

I have found this to be as true as ever today in evaluations at the UN. While scoping, we found it was important first to establish a Program Impact Pathway (PIP) related to the program’s Theory of Change (TOC): a program’s conceptual description and diagrammatic depiction of how the program operates and what it does to achieve its short- and medium-term objectives and longer-term goals.

Secondly, it is important to look for opportunities and possible future trigger points for increasing the eventual uptake of evaluation results. We learned to consider the timing of evaluations, and their optimal duration, from the stakeholders’ perspectives. Understanding the demand for the results, and when they would be most useful, dramatically increased the likelihood of use. This scoping, PIP development, and pre-evaluation process is when genuine stakeholder investment in the questions, and consequently in their answers, is cemented.

Top Tip 3. The prevailing perceptions of policy-makers, and the people they represent, really matter. To be influential and get used, the messages from evaluations must be simple, compelling, and salient. Messages like “half the kids in our school district are engaged in risky sex” get listened to. They have to be salient, i.e. delivered at the right time and place, when decision-makers can actually use the information. Salience is about timing, and timing is everything. If the information doesn’t arrive when decision-makers are making decisions, it will not be used.

And importantly, the evaluator should be aware of the prevailing attitudes and values of the policy-makers and their constituents. This is a hard one, but there is a way. It involves doing more homework to find like-minded leaders to start with, leaders who can actually use the results to bolster their policies with their constituents.

Top Tip 4. You will need perseverance and considerable patience. Policy and program design is iterative. Fostering significant change is hard and takes time, so don’t expect the impact of your impact evaluation to be seen right away! But never give up. Sometimes, after a long struggle, a breakthrough can come “literally overnight”.

Top Tip 5. Genuine passion is an asset when channeled, and should be nurtured. As it will take time, it really helps to care deeply about the issues you are evaluating, or at least to have a passion for evaluation. The passion for the higher goal will carry you through the hard times (or when the pay isn’t that great!) and help foster the energy needed to see the evaluation results through to the end, including supporting their use and application.

Top Tip 6. People skills and partnerships help navigate power imbalances but are often undervalued and not taught. While we are being patient and persevering for change, we need to: 1) learn fully from setbacks, 2) hone our people skills, 3) identify potential partners and allies, 4) establish trusting relationships, and 5) leverage our networks and partnerships. No one can do it alone. To be effective in achieving systems change especially, you will need trusted partners. We need to teach young and emerging professionals more about leveraging and strategy.

Top Tip 7. Being perceptive, having empathy and perfecting your listening skills are non-negotiable for being a great evaluator. Evaluators have to be perceptive of the needs of constituents and stakeholders and appreciate the context they are in. To do this, it helps to have excellent listening skills and genuine empathy. These skills are not taught in graduate school. They are not taught in research settings or in firms that are all about the bottom line. These skills are learned through experience, mentorship and a basic passion for seeing that truthful information really makes a difference in the world. Having empathy for the people we are trying to serve is what inspires us, motivates us, and drives us to perceive and appreciate the “real context”. It is by truly understanding the context that we will enhance the value of our evaluations and the utility of our recommendations.

Top Tip 8. Write a two-page persuasive position paper …and maybe a blog like this. Writing a very brief policy or position paper AFTER the evaluation report is done helps enormously in the uptake of your findings. No one has time to read long documents anymore, so even if they didn’t read your report, they will get the key points if it grabs their attention and is easy to digest. In the past these have been routine executive summaries. I suggest they become more modern and persuasive: a) synthesize the issue(s) concisely, b) describe current knowledge, c) state what this study in particular found, and d) recommend what the key stakeholder(s) should do about it, in clear, simple, and compelling language.

Top Tip 9. Peer-to-peer learning and peer pressure are potentially powerful in enhancing learning and uptake of evaluation results. I have observed an increasing appetite for peer-to-peer learning. This means not just hearing the results from evaluators and expecting change to occur after our eloquent presentations, but rather getting leaders and managers together with their peers to discuss case studies and success stories in which their peers overcame challenges they can relate to, and to learn how they did it.

The appetite is for hearing from “someone in your shoes” who has done something about the issue in their setting that might help. In my experience evaluating sexual abuse in UN peacekeeping missions, having mission commanders get together to discuss what they are doing and what is working for them has more problem-solving power than evaluators standing up and reading results, or sending them a report. Peers not only inspire ideas, they also provide the ongoing support for change.

Top Tip 10. We need a re-think on the nature, process and purpose of “evaluation use”. The utilization of evaluation results in follow-up actions can be truly powerful, but it is not a uniform phenomenon. It is not a homogeneous thing to be understood through the eyes of the evaluator, but rather a varied and dynamic construct whose power comes when it is understood through the eyes of the stakeholders. We need to rethink, or unpack, our notions of “use” a bit better if we want to enhance the impact of our evaluations. “Use” actually involves a long continuum of activities: listening, learning, debating, planning, taking initial actions (and backward steps), and eventually designing better policies and programs that, if we are lucky, might actually improve the lives of the people we seek to serve.

Top Tip 11. Producing evaluation policy is inherently political, takes patience, but helps improve programs. Policy development or change is often hard because it involves people with established, disparate views, experiences, and fears.

Establishing new policies, such as an agency’s first Evaluation Policy to guide how evaluation will be managed, will be hard because people are not sure what will best serve their interests. Yet the process will likely require a consensus. When getting agency-wide, high-level or national-level policy consensus is too difficult, drop to a lower level and take a more granular approach. When you face a lack of interest, or resistance from some decision-makers, do your homework and seek out “like-minded” leaders, managers or stakeholders. It may help to drop back and refocus on sections, divisions, departments or sub-national levels, such as the state or district, or on civil society or affected citizens’ groups, to initiate support for the new policy process.

Building from the grassroots up to the top, rather than from the top down, was what the EvalPartners network’s 2015 International Year of Evaluation initiative was all about. It was a good example of a grassroots catalytic movement that spread worldwide and finally affected policy at the top of the UN in New York. It also led to a global evaluation policy and agenda for 2016-2020. This emboldened evaluators, and champions of evaluation such as parliamentarians, everywhere to initiate the development of national evaluation policies.

Top Tip 12. Promote a “Quid Pro Quo” mindset and address the questions of the people and programs evaluated. This basically means that if you are getting something from someone, you should give something in return. In the evaluation arena, I feel it is a valuable philosophy and principle that can really help to get stakeholder buy-in right from the start. That is, you should always consider your stakeholders’ questions from the start, and expand your list of evaluation questions to include not only the information you want to collect, but also the supplemental information or questions stakeholders have (if your funders/donors allow). If you can answer what they really would like to see answered in the evaluation, you can enhance buy-in and cooperation throughout, and increase the likelihood of use at the end.

In conclusion, in addition to incorporating these applied lessons into practice and the curriculum of graduate evaluation education, I think there are two things that agencies, universities and funders of evaluations can do in particular to help further the utility of evaluation in the future…and thus help us all.

First, they should strengthen support for the pre-evaluation and scoping phase, as well as the post-evaluation translation, dissemination and uptake phase, of all evaluations to facilitate more effective and faster uptake and application of results to social policy and program change. Without this, evaluations are often severely handicapped in making any difference.

Second, universities and evaluation training programs should enhance their curricula, training and mentoring programs to focus on developing a new cohort of strategic evaluation leaders who are not only technically skilled, but also trained in the “softer skills” of advocacy, communications (through both traditional and innovative channels), facilitating the use and application of results, motivating change, strategic visioning, and local and global leadership.

Deborah Rugg

With over 35 years of experience, Deborah Rugg has become an award-winning international evaluation pioneer, founding and directing the Claremont Evaluation Center - New York and the UN Secretariat’s Inspection & Evaluation Division, leading UNEG, and serving as Strategic Adviser to the US Mission to the UN in 2015 to ensure inclusion of evaluation principles in the SDGs. As UNAIDS/Geneva evaluation leader & MERG chair, she led the Global AIDS Reporting System in 193 countries and launched the first M&E Field Adviser program. She started as a CDC EIS Officer in 1987 and currently serves on AEA’s Executive Board.
