7 takeaways: Will the future of evaluation be shaped by its creativity and innovation in facing emergent challenges?
In observance of World Creativity and Innovation Day, the ninth Future of Evaluation dialogue was convened on 21 April 2026 to explore how innovative approaches are reshaping evaluation in response to complexity, uncertainty, and rapid change.
The dialogue situated creativity and innovation within evaluation's long history. For eighty years, evaluators have adapted and improved their methods to answer new questions, and today is no different. The discussion argued that the main challenge now is not finding new evaluation tools, but asking better questions about a more complex world, making room for more people to decide which questions matter, and updating organizational rules so that evaluation processes and methodologies can stay flexible. In case you missed the conversation, watch the recording.
Seven quick takeaways from the dialogue
Creativity has shaped evaluation for decades, not just since AI. The discussion walked through eighty years of change in evaluation. In the 1960s and 1970s, evaluators learned to combine formative and summative methods and to link evaluation with policy. In the 1990s, the long debate between qualitative and quantitative methods was settled by mixing both. The 2000s brought systematic reviews and evaluation mapping. From 2010 onwards, developmental, systems and transformative approaches took shape, and transformative evaluation is now the defining mode of the present era, with rapid evaluation, big data and AI emerging as tools that operate within it. For young and emerging evaluators, the message is clear: learn this history before reaching for the newest tool. To think outside the box, you first have to know the box.
Start with the evaluation questions, not the toolkit. What matters is not how innovative the methods are, but how well they fit the questions. Familiar tools still work well for familiar questions, for example, whether a programme achieved its results or used its resources well. The problem comes when evaluators try to use the same old tools to answer new kinds of questions. Evaluation practitioners today face situations that did not exist ten years ago, such as people moving because of climate change, communities left behind by digital technology, overlapping crises, and programmes that must continue through elections, wars, and natural disasters. The first act of creativity is to choose the method that fits the question, rather than starting from a fixed toolkit and forcing every evaluation through it.
Agility should be built into the contractual framework of the evaluation, not only into its methodological design. Evaluation designs are already becoming more methodologically flexible, with more adaptive inquiry frameworks. Several United Nations agencies and the European Commission now include feedback loops, adjustable questions, and real-time learning in their evaluations. The bigger problem lies elsewhere. Contracts and administrative rules tend to be designed for traditional summative evaluations, with fixed deliverables, set payment schedules, and locked terms of reference. These rules do not fit evaluations that need to adapt as they go. Until the contracts change, evaluators trained in adaptive methods cannot put what they have learned into practice.
Treat foresight as a distinct evaluation skill. Foresight is not the same as adaptive or creative evaluation. Foresight uses data to project trends and to build possible future scenarios. For example, it can show what small-scale farming in Africa might look like if large corporations keep taking over, and how that picture changes if specific policies or conditions shift. With the large amount of data and the analysis tools now available, evaluators can do this kind of forward-looking work for the first time. But foresight has been developed in other disciplines, and evaluators should learn from those disciplines instead of trying to build it from scratch.
Use AI as an evaluation thought partner, with humans always in the loop. AI is useful for handling the large volume of open data that evaluators now have access to. But AI is not a neutral authority. It is trained on data that can be incomplete or biased, and it can argue either side of a question depending on how it is asked. What keeps evaluation trustworthy is the human contribution of values, context, and ethical judgment. Organizations need clear policies on AI use, and evaluators need the habit of questioning AI outputs rather than accepting them.
Ground evaluation innovation locally. Creativity and innovation are not shared equally. Most innovation happens in well-funded institutions and in the Global North. Left unchecked, this widens the gap with local practitioners, civil society organizations, and indigenous communities. Keeping innovation grounded means using local languages, avoiding technical jargon, and recognizing that oral histories, community traditions, and indigenous systems of accountability are already rigorous and innovative. It also means involving young and emerging evaluators in the design and analysis of evaluations, not only in data collection. Innovation that ignores these realities risks worsening existing inequalities.
Put power considerations at the centre of evaluation, and recognize evaluators as agents of change. The dialogue named power as the issue most often left out of conversations about innovation, even though it shapes them. Innovation is not only about tools. It is about who decides what counts as innovative, whose questions are asked, whose knowledge is taken seriously, and who gets to share the findings. Evaluators were encouraged to see themselves as part of the change they measure, not as outside observers. The call to action is practical: let go of old assumptions, stay open and humble in the face of different viewpoints, and treat evaluation as a practice that helps programmes improve, not only one that judges whether they worked.
The Eval4Action Future of Evaluation dialogues are a series of forward-looking discussions that explore innovative and adaptive approaches to evaluation. Designed to make evaluation more influential in a rapidly changing and complex world, these dialogues bring together a diverse range of voices, from experts to young evaluators, to share knowledge and highlight ways to future-proof the field of evaluation. Each monthly dialogue is aligned with an international action day, keeping the conversations timely and relevant to global discourse.
The ninth dialogue marks the end of the first season of Future of Evaluation dialogues. The second season will start with the tenth dialogue in July 2026. Meanwhile, registration is open for the Youth in Evaluation Forum 2026, to be held from 19 to 21 May 2026. Learn more
This article was written with AI support, with human authors in the lead.