**3 min read**
Have you ever worked with a Research Maturity Model? Or an Evaluation Maturity Model? A ‘maturity model’ with well-designed sets of evaluative rubrics can be very useful in framing the level of maturity of almost anything in an organisational context. Progress made from one level to the next can then be tracked as capacities and capabilities in the organisation are strengthened.
I developed a Knowledge Management Maturity Model (KMMM) back in 2005 – 14 years ago – and a Research Quality Maturity Model (RQMM) in 2016. But such models can just as readily be applied in evaluation, and can inform MERL (monitoring, evaluation, reflection, learning) when dealing with grants portfolios, with change management in organisations, or with efforts to strengthen evaluation functions and systems.
I came across the concept of a maturity model in 2005 when I was working as Senior Advisor on Knowledge Management to the IUCN Executive at their headquarters near Geneva in Switzerland. Maturity models, even if they are not called that, are now more in vogue – see an informative 2018 article about them here – but at that time they were fairly underdeveloped and not widely used in our field, although their origin is said to lie in the Capability Maturity Models of the 1980s. In the last decade or two they have proliferated in project management, process management, (organisational) change management, and so on. See examples here, here and here.
At that time – already in 2005 – I thought it was a great idea to apply in my knowledge management work in an organisation as complicated as IUCN. [IUCN is the International Union for Conservation of Nature, mother organisation of the World Wide Fund for Nature (WWF) and one of the most complicated organisations in the world, more so than almost any UN organisation. At the time it had a secretariat of more than 1,000 people around the world, a formal membership of more than 300 government and more than 700 non-government agencies, and 12,000 scientists organised in six thematic areas supporting its work on a voluntary basis, including maintaining the famous Red List of Threatened Species].
My IUCN KMMM (figure 1 and Annex 4) was based on a diagnostic of the state of knowledge management in IUCN, translated into a set of progressive evaluative rubrics that helped define how to measure progress in establishing a system and culture of knowledge management. I did this before I knew about evaluative rubrics as such; in essence, such maturity models are well-designed sets of collective rubrics grounded in theory and/or practice. They can be used well over time; see for example the detailed roadmap to implementation that I developed for IUCN across the five levels of knowledge management maturity (Annexes 1 and 2).
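To make the structure of such a model concrete, here is a minimal sketch in Python of a maturity model as a set of progressive rubrics, one per organisational dimension, with progress tracked between two assessments. The level names, dimensions, and descriptors are invented for illustration – they are not the actual IUCN KMMM content – and taking the weakest dimension as the overall level is just one possible aggregation convention.

```python
# Hypothetical five-level maturity model -- illustrative only,
# not the actual IUCN KMMM levels or rubric wording.
LEVELS = ["Ad hoc", "Aware", "Defined", "Managed", "Embedded"]

# One rubric per organisational dimension: a descriptor for each level,
# ordered from least to most mature.
RUBRICS = {
    "leadership": [
        "No visible support", "Sporadic interest", "Stated commitment",
        "Resourced strategy", "Championed at all levels",
    ],
    "processes": [
        "None documented", "Isolated pilots", "Core processes defined",
        "Processes monitored", "Continuous improvement",
    ],
}

def overall_level(scores: dict[str, int]) -> int:
    """One convention: the organisation is only as mature as its
    weakest dimension."""
    return min(scores.values())

def progress(before: dict[str, int], after: dict[str, int]) -> dict[str, int]:
    """Levels gained per dimension between two assessments."""
    return {dim: after[dim] - before[dim] for dim in before}

# A baseline diagnostic and a follow-up assessment some years later.
baseline = {"leadership": 1, "processes": 0}
follow_up = {"leadership": 2, "processes": 2}

print(LEVELS[overall_level(baseline)])  # weakest dimension at baseline
print(progress(baseline, follow_up))    # levels gained per dimension
```

The point of the sketch is that the rubric descriptors, not the numbers, carry the meaning: the scores only index into an agreed, theory-grounded description of what each level looks like in practice.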
Due to circumstances I never applied the IUCN KMMM in practice, but I was recently told by someone at IUCN that it has since been put to good use internally. So it remains useful even today.
If I were to develop such maturity models today, I would be more mindful of the multiple available theories and draw on a synthesis of experiences to better inform the descriptions of each stage. I designed another, better-framed maturity model in 2016, this time for research quality. It was more soundly rooted in the Research Quality Plus (RQ+) Assessment Framework that Tom Schwandt and I developed for IDRC in 2014-2015. You can find an extract from this Research Quality Maturity Model (RQMM) as an example here.
I assume that by now 'Evaluation' Maturity Models aimed at defining and tracking progress in evaluation systems and evaluation cultures, or in research or development grant portfolios, exist in various shapes and forms. Evaluative rubrics are of course applied by some for rating-based evaluations, but as far as I know seldom as a collective set for managing grants portfolios or for tracking evaluation capacity strengthening efforts.
For those of you who are not yet applying evaluative rubrics in general, see E Jane Davidson and colleagues’ very useful discussions on the topic.
Let me know if you have come across any truly successful applications of maturity models. They can be valuable for MERL efforts, on condition that they are very well developed and rooted in solid theory and/or practical experience; treated as a framing rather than a prescription to be followed rigidly to the letter; and allowed to evolve as times change and lessons are learned.
Zenda Ofir is an independent South African evaluator at present based near Geneva. She works primarily in Africa and Asia, and advises organisations around the world. She is a former AfrEA President, IOCE and IDEAS Vice-President, AEA Board member, Honorary Professor at Stellenbosch University, Richard von Weizsäcker Fellow, and at present Interim Council Chair of the new International Evaluation Academy.