
Search Results


  • Newsletter #63

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • 5 takeaways: Is evaluation fulfilling its potential to advance global social justice?

    In observance of the World Day of Social Justice, the seventh Future of Evaluation dialogue was convened on 19 February 2026 to explore how evaluation can serve as a mechanism for addressing systemic inequalities. The dialogue called for transforming evaluation from a "technocratic box-ticking" exercise into a profession that advances public accountability by actively centering marginalized voices. A fundamental shift was called for: moving from "power over" toward a "power with" model that considers local communities as equal partners in decision-making. This transformation is presented as essential for ensuring evaluation remains relevant in a world facing a polycrisis of climate change, extreme inequality, and the rapid integration of emerging technologies. In case you missed the conversation, watch the recording.
    Five quick takeaways from the dialogue
    1. Prioritize moral judgment and social relevance through downward accountability. Evaluation education must evolve beyond technical mechanics into a comprehensive framework that treats evaluation as a vital public service. The success of an evaluation is best measured by social impact and the reflection of local realities rather than by donor compliance or technical execution alone. Traditional "upward accountability" should shift toward a model of "downward accountability", where evaluation findings are shared back with communities in accessible ways. The ultimate goal of improving lives is at risk if evaluation findings ignore what the community actually needs, or if evaluators focus more on mastering technical tools than on making fair, ethical judgments. By centering the perspectives of communities, evaluation moves beyond a "tick-box" exercise to become a bridge between evidence and real-world change.
    2. Transition from "human machines" to values-driven evaluators. As generative AI becomes more prevalent in data processing, the unique value of a human evaluator lies in the ability to apply values, context, and ethical foresight. If evaluators act merely as "human machines" following rigid algorithms, they become replaceable. Instead, the profession must shift toward "responsible AI" use and human-centric judgment. Evaluators are not neutral observers; they are agents of change who must check their own biases and motives to ensure that data is not stripped of its social context.
    3. Institutionalize intergenerational "co-creation" spaces. The future of evaluation depends on bridging the gap between senior professionals and youth. Synergy is built in spaces where seniors provide methodological wisdom and "business smarts" to navigate the market, while young evaluators bring fresh perspectives that challenge established assumptions. Youth should not be relegated to data-collection roles; they must be engaged in designing evaluations and reporting to ensure the profession remains adaptive to current social realities and global trends.
    4. Decolonize curricula through diverse knowledge systems. Evaluation education must be decolonized to include indigenous, community-based, and local knowledge systems. Current evaluation frameworks are often biased toward North American and Western perspectives, leading to the misrepresentation of local cultures. Education becomes more inclusive by acknowledging "epistemic diversity": recognizing that oral histories, storytelling, and spiritual practices are rigorous forms of evidence. By incorporating indigenous methodologies and relational accountability, evaluation training becomes more globally relevant and respectful of the people it serves.
    5. Move from retrospective reporting to evaluative foresight. As global crises like climate migration and digital exclusion accelerate, evaluation must move from looking backward to looking forward. Evaluative foresight uses future-focused questions and systems thinking to prevent inequities before they become entrenched. By shifting from "what happened" to "what might happen next", organizations can adjust in real time. This creates a "learning loop" that allows course correction during implementation, ensuring that evaluation shapes a better future rather than merely explaining the past.
    The Eval4Action Future of Evaluation dialogues are a series of forward-looking discussions that explore innovative and adaptive approaches to evaluation. Designed to make evaluation more influential in a rapidly changing and complex world, these dialogues bring together a diverse range of voices—from experts to young evaluators—to share knowledge and highlight ways to future-proof the field of evaluation. Each monthly dialogue is aligned with an international action day, ensuring the conversations are timely and relevant to a global discourse. The next dialogue, “How can evaluation accelerate rights, justice, action, for all women and girls?” will take place on 10 March 2026. Register
    This article was written with AI support with human authors in the lead.

  • From control to learning: Institutionalizing evaluation in democratic Mongolia

    By Uugantsetseg Gonchigdorj, Former Co-leader, EvalYouth Asia
    Mongolia, often cited as a "poster child" for democracy among post-Communist societies, stands as a unique case of democratic transition. However, the 21-day "Easy to Resign" protests in 2025, led by youth, signaled that civil society is reclaiming constitutional mechanisms for a more participatory and responsive democratic system. This episode illustrates that democratic consolidation requires more than elections; it requires responsive institutions capable of learning and accountability. In this context, strengthening the institutional foundations that enable evidence-based learning and accountability becomes essential. This reflects growing attention to the institutionalization of evaluation, the process of embedding evaluation within the legal, social, and professional systems. Institutionalization means that evaluation becomes routine, resourced, credible, and publicly meaningful rather than episodic or donor-driven. To understand Mongolia’s current evaluation landscape, this blog draws on the Evaluation Globe framework by the Department of Sociology and the academic Center for Evaluation (CEval) at Saarland University, which analyzes the political, social, and professional systems, combined with the insights of the 2025 National Evaluation Capacities Index (INCE) pilot.
    The political system: Strong legal mandates, unclear oversight
    Following the transition to democracy and a market economy in the 1990s, Mongolia has undertaken multiple efforts to establish and govern monitoring and evaluation (M&E) within the government system. This journey began with the 1996 Parliament Resolution No. 38, which set the policy on government activities and structural reform, followed by the 1999 Government Resolution No. 4, regulating the monitoring and evaluation of administrative bodies, and continued with subsequent amendments and legislation. Today, Mongolia possesses a robust legal framework for M&E. The Law on Development Policy, Planning and its Management (2020) and the recent Government Resolution No. 43 (2025) mandate M&E across the public sector and formally regulate evaluation as a function distinct from monitoring. The results of the 2025 INCE pilot in Mongolia reflect this strength. The "Institutional Structure" dimension scored 4.4 out of 10, indicating that institutionalization of the evaluation ecosystem is at a moderate level. The country has established a regulatory system where the government machinery is active; ministries report against plans and maintain dedicated M&E departments, and executives and decision-makers consume performance data as required by law. However, the system remains heavily centralized. With the Authority for Government Supervision (AGS), the successor to the General Agency for State Inspection, leading these efforts in the absence of a specific National Evaluation Policy or a high-level decision-making body on evaluation, there is a risk that evaluation is perceived merely as a tool for internal administrative control rather than for broad democratic learning. Without safeguards for independence, evaluation can be conflated with supervision rather than learning.
    The professional system: The missing middle
    In the professional system, a sub-system in the Evaluation Globe concept, Mongolia faces a “missing middle”. The INCE pilot revealed a critical gap in the "Evaluation Offer" dimension, specifically a low score of 2.4 for "Training Programmes", indicating the absence of a formal professional education system to supply evaluators. This weakens the capacity to produce independent evaluations and hinders the development of local evaluation practices. It also creates reliance on external expertise and limits the emergence of a locally grounded evaluation profession.
    The social system: The democratic deficit
    In the context of evaluation as a mechanism for government accountability, Mongolia faces a significant challenge. The INCE score for "Multi-agent spaces" was the lowest of all dimensions at 3.25. Although Mongolia has two Voluntary Organizations for Professional Evaluation (VOPEs), the informal Mongolian Evaluation Network (MEN) and the formal Mongolian Evaluation Association (MEA), as well as other stakeholders such as civil society organizations and international organizations, their integration into state evaluation processes remains limited. This disconnect is illustrated by the protests mentioned above, highlighting unmet demand for participatory evaluation spaces.
    The way forward: Gaps to address
    While Mongolia has made progress along the path from "control" to "doing", and is now shifting toward "learning", the current landscape shows significant structural and practical gaps. It is insufficient to simply call for professional education or civil society engagement; the multi-level gaps below need to be addressed.
    1. The policy gap: A critical gap remains in the absence of a National Evaluation Policy (NEP). While the country possesses laws and resolutions, it lacks a cohesive policy defining the principles of evaluation. Current laws mandate that evaluation occurs, but an NEP is needed to ensure independence and that public interests are prioritized over bureaucratic box-checking. Such a policy should define principles of independence, transparency, ethical standards, stakeholder participation, systematic use of findings, and public disclosure, guiding how evaluation contributes to decision-making and how evidence is integrated into policy cycles. While parliamentarians must play a leading role in shaping it, this high-level strategic vision is currently missing in Mongolia.
    2. The governance gap: While the AGS is currently the government body responsible for implementing M&E across state organizations, relying solely on a single agency to govern the entire system presents a risk to accountability. To ensure true democratic accountability, Mongolia needs a high-level, multi-stakeholder governance mechanism, perhaps through a high-level body, committee, or working group.
    3. The utilization gap: Perhaps the most pressing challenge is the disconnect between evaluation findings and policy design at this critical transition moment. Even with the positive step of implementing Resolution No. 43, Regulation on Evaluation, it remains unclear whether a mechanism exists to ensure that these findings actually shape the next cycle of policy design. If the country begins conducting government programme evaluations but fails to use those findings to inform the design of future strategies, the system falls short of good evaluation practice. Institutionalization is incomplete if evaluation does not feed back into policy cycles.
    For Mongolia, institutionalizing evaluation is ultimately about deepening democratic governance: ensuring that public institutions not only deliver, but listen, learn, and adapt.
    Moving from control to learning requires more than mandates; it requires professional capacity, civic engagement, and credible systems that connect evidence to reform. In this sense, evaluation becomes not just a tool of government, but part of democracy’s infrastructure.
    Uugantsetseg Gonchigdorj is an independent evaluator and consultant with a background in sociology. She specializes in programme evaluation at the intersection of policy, systems, and institutional learning across diverse development sectors. She has contributed to evaluation networks including EvalYouth Asia, EvalYouth Mongolia, and the Mongolian Evaluation Association (2023–2025). Connect with Uugantsetseg on LinkedIn.
    AI Disclaimer: AI tools were used solely to bring the blog to the required length and to correct grammatical issues. The blog's content, ideas, and narrative were authored by the human writer, not generated by AI.
    Disclaimer: The content of the blog is the responsibility of the author(s) and does not necessarily reflect the views of Eval4Action co-leaders and partners.

  • Newsletter #62

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • 6 takeaways: Are we educating evaluators of every generation, for the future?

    In observance of the International Day of Education, the sixth Future of Evaluation dialogue was convened on 22 January 2026 to examine the evaluation workforce's preparedness for the future. The discussion called for a fundamental shift in the education of evaluators - moving away from teaching technical skills alone toward fostering evaluators with wise, ethical, and courageous judgment, measuring evaluation utilization through social impact and local relevance, adopting responsible AI, fostering intergenerational co-creation, decolonizing curricula, and protecting evaluation as a vital pillar of democracy - to ensure evaluation remains relevant for both public accountability and navigating a complex world.
    Six quick takeaways from the dialogue
    1. Prioritize "evaluative literacy" and moral judgment over technical mastery. Evaluation education should move beyond a focus on the technical mechanics of methods and statistics. A modern evaluation curriculum prioritizes a "big picture" approach: teaching evaluators to view their work as a transformative tool rather than just a technical job. Instead of simply learning how to use specific tools or software, evaluators should be trained to ask deeper questions: Why is this being measured? When is the right time to do an evaluation? And who will be most affected by the results? This mindset allows them to look past surface-level facts and combine hard evidence with human values to make fairer, more meaningful judgments.
    2. Measure evaluation utilization through social relevance. Evaluation education should focus on understanding stakeholder perspectives and reflecting local realities. This is a prerequisite for relevant evaluations. Evaluation risks losing its intended impact if the findings do not connect with the community's needs or if the reports do not lead to meaningful action. To maintain professional and social relevance, training must prioritize empathy and the "sociology of evaluation". This ensures that findings are accessible to local actors and serve the common good, moving beyond a "tick-box" exercise to become a bridge between evidence and real-world change.
    3. Transition from "AI tool-selling" to "responsible AI". With the explosion of generative AI, the focus of evaluator education must shift from learning specific software to understanding "responsible AI". This involves a "human in the loop" approach, where evaluators are trained to recognize the difference between appropriate and inappropriate AI use. Emerging competencies for the next generation include identifying algorithmic bias, ensuring data privacy for marginalized groups, and maintaining the critical thinking needed to oversee the ethical implications of digital tools.
    4. Institutionalize intergenerational "co-creation" spaces. Traditional top-down mentoring is evolving into intergenerational co-creation. These learning spaces bridge the gap between senior professionals, who provide methodological guardrails and wisdom and help young evaluators navigate the job market, and youth, who bring digital-native energy and fresh perspectives that help evaluation stay adaptive and reflective of current social realities. This synergy ensures the profession stays resilient.
    5. Decolonize curricula through diverse knowledge systems. Rethinking how evaluators are educated requires integrating indigenous, community-based, and experiential knowledge. Current evaluation frameworks are often based on Global North perspectives. Decolonizing evaluation education means acknowledging "epistemic diversity": recognizing that different cultures have different ways of understanding reality and hope. By incorporating relationality and local knowledge, evaluation education becomes more inclusive and globally relevant.
    6. Protect the connection between democracy and evaluation. Evaluation does not exist in a vacuum; it is an essential pillar of democracy. Evaluation education should prepare the workforce to operate within "evaluation marketplaces" while simultaneously defending evaluation’s role in public accountability. Especially as international laws and democratic norms weaken globally, evaluators must be trained to navigate political sensitivities and promote evidence-based policymaking as a tool for strengthening democratic values.
    In case you missed the conversation, catch up with the recording.
    The Eval4Action Future of Evaluation dialogues are a series of forward-looking discussions that explore innovative and adaptive approaches to evaluation. Designed to make evaluation more influential in a rapidly changing and complex world, these dialogues bring together a diverse range of voices—from experts to young evaluators—to share knowledge and highlight ways to future-proof the field of evaluation. Each monthly dialogue is aligned with an international action day, ensuring the conversations are timely and relevant to a global discourse. The next dialogue, “Is evaluation fulfilling its potential to advance global social justice?” will take place on 19 February 2026. Register
    This article was written with AI support with human authors in the lead.

  • Hybrid intelligence, human-first: AI-driven evaluation with empathy and inclusion

    By Arshee Rizvi, Co-leader, EvalYouth India
    As evaluators, our responsibility is not just to measure progress but to listen with sensitivity. In a world facing urgent challenges — from inequality to climate change — evaluators stand at the frontline of truth-seeking and accountability. Yet in today’s data-flooded world, they need more than methods; they need momentum. Evaluation is more than a dry, routine, technical task — it is an ethical commitment to notice, to listen, and to make visible what often remains hidden. But evaluators are often put under the pressure of vast expectations: analyzing complex data, capturing diverse voices, and responding quickly to decision-makers. Traditional approaches, while valuable, are no longer enough. Momentum is needed to move beyond describing progress, to illuminate meaning. We are swimming in oceans of data yet thirsting for insight. Gigabytes flow daily from surveys, platforms, and monitoring systems. In India, hundreds of government Management Information Systems (MIS) capture thousands of data points regularly for the whole country, but very few process this data and make it presentable through dashboards. Even then, data alone does not guarantee understanding. A spreadsheet can count, but it cannot care. A graph can summarize, but it cannot empathize. Evaluators must balance precision with dignity, numbers with narratives. This is a time to explore new ways of thinking and working.
    This is where artificial intelligence (AI) steps in — not as a substitute, but as a strength. AI brings speed, scale, and sharpness. It can process complex information, detect patterns, and connect dots that would otherwise be invisible. Yet its real power lies in how evaluators can use it to ensure empathy and nuance. With AI as an ally, evaluators can spend less time drowning in data and more time engaging with people and contexts. The prospect of AI in evaluation sparks both excitement and caution. Some fear it threatens human judgment; others hail it as a shortcut. The truth lies in balance: machines excel at tasks that overwhelm human bandwidth, while evaluators bring empathy and context. An evaluator may take weeks to find correlations that AI can flag in minutes. AI can shift the focus of evaluators from documenters and transcribers to facilitators. Thus, the evaluator’s role is elevated, not reduced. Freed from repetitive burdens, they can focus on what AI cannot do: interpreting meaning and asking questions rooted in humanity. At the same time, although AI can process larger volumes of data, without proper representation, consent, and contextualisation, this scale can also amplify the biases inherent in AI systems.
    Numbers can measure, but stories give voice,
    Patterns may signal, yet humans make the choice.
    Machines may reveal, but only hearts can feel,
    Together we shape truths that make justice real.
    This is the promise of hybrid intelligence — human judgment amplified by machine capability. It’s about making evaluation more insightful, inclusive, and human-first. Hybrid intelligence is not futuristic — it is a mindset. Machines offer speed and scale; humans bring empathy, ethics, and the ability to read between the lines. Together, they create evaluations that are rigorous yet relational. Rather than reducing evaluation to numbers or transactions, hybrid intelligence ensures that data is interpreted with meaning and care.
It bridges precision with perspective — enabling evaluators to work faster without losing depth, and to reach further without leaving people behind. In my work with governance and grassroots projects in India, I have seen how evaluation often misses what matters most — the resilience, struggles, and stories of people on the ground. Reports capture numbers but miss lived realities. Human-first AI can change this: not replacing judgment but strengthening it, making evaluation more empathetic and inclusive. Sitting with communities, I have learnt that progress is rarely linear. A graph may show improved access to a service, but women describe cultural barriers that still limit choices. A statistic may show rising incomes, but farmers speak of climate anxieties that numbers cannot capture. These realities are often sidelined because evaluators must meet deadlines and manage heavy reporting frameworks. Human-first AI offers a way forward: translating local languages in real time, clustering narratives across interviews, or detecting emerging themes. It creates space for evaluators to focus on depth. AI does not erase sensitivity; it amplifies it. I have seen this in practice while working with communities across rural India, where human-first AI helps bridge the distance between data and lived experience. Speech-to-text models trained in regional languages can process hours of community consultations, not to flatten them into statistics, but to identify patterns of exclusion or resilience in people’s own words. When evaluators feed these insights back into dialogues with the same communities, evaluation becomes a cycle of reflection rather than extraction. I have seen this approach shift conversations: women began naming invisible labor, farmers mapped rainfall memory onto digital dashboards, and local officials started responding to nuance, not just numbers. This, for me, is what human-first AI in evaluation truly means: technology that listens at scale yet honors individuality, that transforms data into empathy, and that gives evaluators the ability to see communities not as datasets, but as partners in meaning-making. Imagine evaluation that moves faster without losing compassion, that reaches further without leaving voices behind, that becomes sharper without being insensitive. This is not distant potential; it is a call to action for evaluation today. The future of evaluation is not an abstract possibility; it is a responsibility unfolding now. What if every evaluation process truly centered the most vulnerable? What if dashboards did not only track progress but also reflected dignity? With hybrid intelligence, we can begin to answer. Compassion and technology need not be opposites — together, they can shape evaluations that inspire trust, inform decisions, and ignite change. The challenge is to keep empathy at the center while embracing innovation. AI cannot feel, but it can free evaluators to feel more — to listen, to understand, and to reflect deeply on human realities. This is the choice before the global evaluation community. The technology is already here — what remains is to decide how to guide it. Will evaluation become another technocratic tool, or will it become a space where technology amplifies humanity? The answer depends on the harmony we create between human judgment and machine capability. If we succeed, evaluation will not only measure progress but embody it. It will remind us that inclusion, dignity, and justice are not afterthoughts to evidence — they are its essence. 
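    To make the "clustering narratives" idea above concrete, here is a minimal, hypothetical sketch rather than the author's actual workflow: it assumes transcripts already exist (for example, produced by a speech-to-text model), that the open-source sentence-transformers and scikit-learn Python libraries are available, and that the excerpts are invented for illustration.

```python
# Minimal sketch: grouping interview excerpts into candidate themes for human review.
# All excerpts are invented; real use would start from consented, anonymized transcripts.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

excerpts = [
    "The water point is far, and women walk two hours each day to reach it.",
    "Rainfall has become unpredictable, so we plant later than we used to.",
    "Girls stopped attending classes after the school fees increased.",
    "The road floods every season, so traders cannot reach our village market.",
]

# Turn each excerpt into a vector that captures its meaning.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed, publicly available model
embeddings = model.encode(excerpts)

# Group the excerpts into a small number of candidate themes.
n_themes = 2
labels = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit_predict(embeddings)

# Print the grouping so evaluators and communities can review, rename, or reject
# the themes, rather than treating the machine's clusters as final answers.
for theme in range(n_themes):
    print(f"Candidate theme {theme}:")
    for text, label in zip(excerpts, labels):
        if label == theme:
            print("  -", text)
```

    The point of the sketch is the division of labour the blog describes: the machine proposes groupings at scale, while evaluators and communities decide what those groupings mean.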
    Hybrid intelligence treats AI as an amplifier of human judgment, not a replacement. When evaluators pair machine-scale patterning with participatory sense-making and ethical guardrails, evaluation becomes faster, fairer, and more trustworthy. It is not just about tools; it is about values. And in that harmony lies the promise of evaluations that truly listen, truly learn, and truly lead toward a more just and sustainable future.
    Arshee Rizvi is an evaluation and AI practitioner working at the intersection of public policy, governance, and grassroots development. Her work explores human-first, inclusive evaluation systems that combine participatory methods with responsible AI to strengthen evidence use, equity, and ethical decision-making. Connect with Arshee on LinkedIn and on X.
    AI Disclaimer: AI tools were used solely to bring the blog to the required length and to correct grammatical issues. The blog's content, ideas, and narrative were authored by the human writer, not generated by AI.
    Disclaimer: The content of the blog is the responsibility of the author(s) and does not necessarily reflect the views of Eval4Action co-leaders and partners.

  • Youth in Evaluation standards: Self-reporting guidelines for 2026

    Eval4Action is calling on organizations to self-report their progress in meaningfully engaging young people in evaluation following the Youth in Evaluation standards. The self-reporting of Youth in Evaluation standards is managed by the EvalYouth Global Network, a co-leader of the Eval4Action campaign. Through the 2026 self-reporting process, your organization has the opportunity to build on this momentum, share your own successes, and potentially be recognized among the next cohort of Youth in Evaluation champions. Previously in 2024 and 2025, Youth in Evaluation champions set a high bar for global best practices, strengthening the entire movement to advance meaningful engagement of young people in evaluation. Follow the guidelines below to complete your self-assessment for the 2026 reporting cycle and contribute to the global effort to advance youth in evaluation. The self-assessment should cover activities and initiatives undertaken during 2025. The submission deadline for the 2026 self-assessments is 15 February 2026.
    Guidelines for organizations completing their first self-assessment in 2026
    If your organization is completing its self-assessment for the first time, follow these seven steps to ensure a thorough and meaningful review of your practices:
    1. Find the most relevant standards for your organization. Standards are available for academia, governments, the private sector, international organizations, Voluntary Organizations for Professional Evaluation (VOPEs)/EvalYouth chapters, and youth organizations. Each standard is accompanied by a self-assessment tool.
    2. Share and discuss the standards with the leadership/management of your organization. Achieve buy-in and endorsement from your organization’s leadership to ensure the self-assessment is supported and acted upon.
    3. Initiate a dialogue within the organization. Organize a pre-arranged meeting with representatives from each unit or section to discuss current practices in engaging youth in evaluation.
    4. Familiarize relevant staff with the Youth in Evaluation standards.
    5. Assign a team to undertake the self-assessment. Designate a dedicated team to conduct the review and formulate recommendations to improve organizational practices for advancing the meaningful engagement of youth in evaluation.
    6. Conduct your self-assessment using the provided tool. Choose the relevant customized assessment sheet for your organization, which is available for download in two different formats under each standard on this page.
    7. Share the self-assessment report. Submit the self-assessment report, including good practices, to contact@eval4action.org by 15 February 2026. The report can include the finalized assessment sheet (Excel/Google sheet) together with a slide deck that highlights good practices and progress. Sharing this information facilitates cross-fertilization of knowledge among other organizations. EvalYouth will get back to you if any further information is required for the reported performance.
    Guidelines for organizations completing their second or subsequent self-assessment in 2026
    These guidelines are for organizations that completed the Youth in Evaluation standards self-assessment in a previous cycle (e.g., in 2025) and are now submitting their subsequent report in 2026.
    1. Review your previous self-assessment report. Begin by reviewing your last report and identifying any updates or changes in your reporting.
    2. Focus on reporting on 2025 activities and initiatives. While reporting in 2026, focus primarily on activities and initiatives related to youth engagement in evaluation conducted throughout 2025. Remember to consider the long-term validity of certain policies, projects, and resources. For example, if a policy related to youth engagement was reported previously and remains valid for 2025, simply confirm its continued validity and score accordingly.
    3. Highlight new initiatives and significant progress. Use the comment section in the sheet to provide details and context about successes, challenges, and any new initiatives or significant progress made since your last assessment.
    4. Share the self-assessment report. Submit the self-assessment report, including good practices, to contact@eval4action.org by 15 February 2026.

  • Eval4Action in 2025: Year-end newsletter

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • 7 takeaways: Is evaluation key to realizing universal human rights?

    On 10 December 2025, Eval4Action marked Human Rights Day with a pivotal Future of Evaluation Dialogue, asking: "Is evaluation key to realizing universal human rights?" The dialogue firmly established that evaluation is not just helpful but essential, even existential, to realizing human rights, especially in times of global setbacks where conflicts and shrinking civic space are reversing decades of progress. The discussion provided concrete pathways and principles for elevating the human rights-based approach (HRBA) in evaluation by systematically addressing institutional biases and empowering rights-holders. The dialogue also emphasized the critical role of technology and youth in co-creating evaluative evidence, stressing that both must be leveraged responsibly to democratize data and challenge the status quo.
    Seven quick takeaways from the dialogue
    1. Evaluation must be guided by HRBA principles, not just compliance. Evaluation's primary purpose must be to assess whether people's rights are being respected, protected, and fulfilled, and whether duty-bearers (governments and institutions) are meeting their measurable obligations. This human rights-based approach shifts the focus from simply asking, "Did we deliver services?" to the more transformative question, "Did we advance people's rights?" The principle of transparency, such as making all evaluation findings publicly available, is essential for accountability.
    2. Equity requires moving beyond 'Leaving No One Behind' as a slogan. The imperative to leave no one behind must be deeply embedded in evaluation practice, moving past superficial rhetoric to genuine transformative change. This requires evaluators to address the systemic barriers and structural factors that cause and sustain inequity. It also necessitates shifting the unit of analysis beyond a single project to adopt a longitudinal, ecological view of how interventions collectively improve the lives of the most disadvantaged populations over time.
    3. Address power imbalances by making rights-holders the primary audience. The design and implementation of evaluations must actively disrupt the dynamic where evaluation agendas are set by donors or political interests. To transform evidence into a tool of community power, evaluation must be legible, useful, and primarily accountable to citizens and communities. This involves integrating their lived experiences from the planning phase through the formulation of recommendations and the dissemination of evaluation results.
    4. Embrace epistemic humility and diversity in methodology. A truly transformative evaluation requires a fundamental shift in worldview, demanding epistemic humility from evaluators and funders. This means recognizing that different people think differently about problems and valuing diverse knowledge systems. Evaluators must move beyond methodological debates toward building an ecology of evidence that integrates various tools and respects community context as a measure of evaluation rigor. To truly hear marginalized voices, evaluators need to be more creative and willing to experiment with participatory and inclusive methods to capture different perspectives.
    5. Technology must democratize evidence and be used responsibly. Technology serves as a powerful enabler for HRBA, significantly improving access to information, facilitating inclusive data gathering (e.g., geospatial mapping, mobile surveys), and allowing for earlier detection of human rights issues. However, its use comes with immense responsibility. Evaluators must actively safeguard against the digital divide perpetuating inequality and strictly adhere to data privacy and protection standards, ensuring technology is a tool for democratizing evidence, not just for collecting data faster.
    6. Youth must be co-creators, not just data sources. Youth are essential co-creators in HRBA, bringing passion, creativity, and a deep commitment to social change. Their participation is a matter of both justice and quality, as their perspectives deepen the understanding of complex change. Youth should be meaningfully engaged throughout the entire evaluation cycle—design, data analysis, and recommendation co-creation—not merely used as sources of data or as a "tick box" exercise.
    7. Institutionalize the human rights-based approach for systemic impact. For evaluation to have a transformative effect, HRBA must be formally institutionalized at the national and organizational levels. The South African example shows that embedding equity and human rights into the national evaluation policy framework ensures evaluation is not an optional technical exercise but a constitutional tool for fairness and accountability. This systemic adoption helps governments correct course and redirect resources towards the historically marginalized.
    In case you missed the conversation, catch up with the recording.
    The Eval4Action Future of Evaluation dialogues are a series of forward-looking discussions that explore innovative and adaptive approaches to evaluation. Designed to make evaluation more influential in a rapidly changing and complex world, these dialogues bring together a diverse range of voices—from experts to young evaluators—to share knowledge and highlight ways to future-proof the field of evaluation. Each monthly dialogue is aligned with an international action day, ensuring the conversations are timely and relevant to a global discourse. The next dialogue, “Are we educating evaluators of every generation, for the future?” will take place on 22 January 2026. Register
    This article was written with AI support with human authors in the lead.

  • Feminist evaluation methods in crisis contexts: Contributing to unearthing hidden realities

    By Rai Sengupta, Evaluation Consultant, UNICEF Evaluation Office
    This blog draws on findings from a synthesis of feminist evaluation innovations in crisis contexts, conducted as part of the ‘From Insights to Action: Advancing Feminist Evaluation (FE) Innovations in Crisis Contexts’ project, funded by the Feminist Innovations in Monitoring and Evaluation (FIME) Award of the Global Evaluation Initiative (GEI). The author, Rai Sengupta, is one of six global Young Evaluation Entrepreneurs (YEEs) to receive the FIME award. Rai is also an evaluation consultant with the UNICEF Evaluation Office, supporting UNICEF Headquarters in conducting global evaluations of UNICEF’s work in nutrition and health, child protection, climate change, and WASH.
    The value of feminist evaluations in crisis contexts
    In humanitarian emergencies, evaluations often focus on quantifiable outputs - meals distributed, shelters erected, or families receiving cash transfers. While useful for rapid reporting, such figures overlook critical issues of fairness and access. They rarely ask whose voices shaped interventions, whose knowledge was privileged, and whose urgent needs slipped through the cracks. These gaps reflect deeper design issues. Conventional evaluation models were built for stable contexts, assuming safety, access, and broad participation. In crises marked by conflict, displacement, epidemics, or disasters, those assumptions collapse. What remains is a partial evidence base shaped by the most visible informants - often men or community leaders with authority - while those most affected, especially women, girls, and disadvantaged groups, are systematically excluded from accounts of what “worked” or “failed.” Feminist evaluation contributes to filling this gap. It challenges the idea of neutrality by recognising that evaluation is inherently political, capable of either reinforcing or disrupting entrenched hierarchies. By centring equity, it values diverse ways of knowing, situates evidence in context, and amplifies silenced voices. In doing so, it redefines what counts as evidence, who generates it, and how it drives justice. In crisis settings - where access, power, and safety determine who speaks - feminist innovations are essential. They turn evaluation from a narrow measurement exercise into a tool for accountability, equity, and change. This blog highlights feminist evaluation methods - including equity-driven sampling, arts-based tools, participatory approaches, and community validation - which surface hidden realities and challenge structural inequities.
    Feminist sampling approaches: Reaching hidden voices
    In conflict or displacement contexts, conventional sampling often overlooks parts of the population, leaving some groups less visible. Feminist evaluation reframes sampling as an ethical act, deliberately including those most silenced - such as women heads of household, adolescent girls, or people with disabilities. By embedding equity into recruitment, feminist evaluators expand representation, challenge structural barriers, and make findings more credible and accountable. For example, in evaluating humanitarian programming for refugees in Nigeria, CARE International (2020) employed beneficiary-led snowball sampling to reach otherwise excluded groups. Similarly, during the Central Sahel displacement crisis, UNHCR (2023) used Respondent-Driven Sampling to access networks of women and adolescents who would have otherwise been unreachable.
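    As a rough, purely illustrative sketch of the snowball logic mentioned above (not drawn from the CARE or UNHCR evaluations), the following Python snippet assumes a small, invented referral network and shows how a sample grows outward from equity-driven seeds such as women heads of household.

```python
# Illustrative only: beneficiary-led snowball sampling over a hypothetical network.
import random

# Hypothetical referral network: who each participant could refer (invented data).
contacts = {
    "amina": ["fatima", "zainab"],
    "fatima": ["halima"],
    "zainab": ["amina", "halima", "grace"],
    "halima": ["grace"],
    "grace": [],
}

def snowball_sample(seeds, waves, referrals_per_person=2, random_seed=0):
    """Grow a sample outward from equity-driven seeds, wave by wave."""
    rng = random.Random(random_seed)
    sampled = list(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            # Each participant refers a few peers who are not yet in the sample.
            peers = [p for p in contacts.get(person, []) if p not in sampled]
            for referred in rng.sample(peers, min(referrals_per_person, len(peers))):
                sampled.append(referred)
                next_frontier.append(referred)
        frontier = next_frontier
    return sampled

# Seeding from an underrepresented group rather than from visible community leaders.
print(snowball_sample(seeds=["amina"], waves=2))
```

    Formal Respondent-Driven Sampling additionally weights results to correct for network structure; the sketch only illustrates the equity-driven recruitment idea.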
    Arts-based and visual methods: Centering self-representation
    Surveys and structured interviews may falter in emergencies, especially with participants facing trauma or language barriers. Visual and arts-based methods - such as participatory video, photo walks, or drawing exercises - offer safer, less extractive ways for participants to express experiences. Rooted in feminist principles, these techniques redistribute narrative authority, validating emotion and memory as knowledge. In volatile settings, they create culturally sensitive, women- and child-friendly spaces where hidden experiences can surface without fear. When using these methods, key considerations include safeguarding confidentiality, ensuring cultural relevance, and managing ethical concerns around emotional exposure. Notably, during the Zika epidemic in Honduras and Colombia, the International Federation of Red Cross and Red Crescent Societies (2019) used participatory video to enable communities to film and validate their own stories. In UNICEF’s 2024 evaluations of humanitarian programming during cholera and Cyclone Freddy in Mozambique and Malawi, child-centred simulated recall allowed children to act as primary narrators. Through drawings and guided recollections, children’s perspectives were foregrounded and legitimized as essential evaluation evidence.
    Participatory tools for defining change: Capturing complex realities
    Crisis contexts generate complex, non-linear changes that escape narrow quantitative metrics. Feminist evaluation employs participatory tools like Most Significant Change (MSC), Outcome Harvesting (OH), and Outcome Mapping (OM), which ask communities - particularly women and girls - to define what meaningful change looks like. Such methods resist top-down definitions of success and produce layered explanations grounded in lived experience. They recognize that change must be understood on multiple levels: personal, household, communal, and systemic. For instance, during Uganda’s refugee crisis, CARE International (2021) applied MSC to capture women’s stories of adaptation and leadership. During COVID-19 disruptions in the Middle East and Africa, Plan International (2023) drew on MSC to highlight adolescent girls’ own accounts of educational resilience. By enabling women and girls to define and narrate change themselves, these methods captured complex, gendered realities of resilience and adaptation that would have remained inaccessible through conventional evaluation methods. When applying participatory tools in crisis contexts, it is important to consider logistical constraints, security risks, and limited access to affected populations. Effective facilitation requires specially trained personnel capable of gathering data on and interpreting complex outcomes. Additionally, power dynamics, participant trust, and trauma necessitate sensitive, adaptive approaches and strong ethical safeguards to ensure meaningful and respectful engagement.
    Community validation and feedback loops: Returning knowledge
    Too often, crisis evaluations extract knowledge without returning it to those who shared it. Feminist evaluation actively embeds reciprocity, ensuring findings are validated collectively and used within communities, not just for external audiences. Feedback loops reinforce accountability, balance power dynamics, and build trust where it is fragile, making communities co-interpreters of evidence rather than passive informants. This requires deliberate efforts to share knowledge transparently and foster reciprocal relationships that promote accountability and equitable power sharing. A relevant example comes from an evaluation during Lebanon’s economic contraction and the COVID-19 pandemic, where Search for Common Ground (2022) ran separate validation workshops with beneficiaries and staff to reduce hierarchies in sense-making.
    Toward transformational change
    Each of these methods - innovative sampling, arts-based and visual tools, participatory approaches to defining change, and community validation - demonstrates how feminist principles can fundamentally reshape crisis evaluation. These are not incremental tweaks but deep transformations in how evidence is produced, validated, and used. Embedding feminist principles allows evaluators to confront entrenched inequities, elevate silenced voices, and redistribute power, making evaluation a part of the crisis response. In volatile environments where structural injustice is laid bare, feminist evaluation contributes to reframing evidence as more than a record of outputs - it becomes a vehicle for accountability, justice, and collective voice. It uncovers truths that conventional methods cannot, challenges whose knowledge counts, and insists that those most affected are central to defining impact. In this way, feminist evaluation transforms the evaluation process into an act of activism, positioning evidence as a driver of systemic change rather than a neutral by-product of humanitarian action.
    Rai Sengupta is an Evaluation Consultant with UNICEF’s Evaluation Office, supporting global evaluations in health, child protection, and climate-WASH. With 6+ years’ experience evaluating large-scale development programmes, she is a recipient of the Global Evaluation Initiative’s Feminist Innovations in M&E Award and leads work on feminist evaluation in crisis contexts.
    References
    Emenogu, A., et al. (2020). Integrated GBV prevention and response to the emergency needs of newly displaced women, men, girls, and boys in Borno State, North-East Nigeria: Mid-term evaluation report. CARE International.
    Harvard Humanitarian Initiative & Brigham and Women’s Physicians Organization. (2023). Evaluation of UNHCR’s response to multiple emergencies in the Central Sahel region: Burkina Faso, Niger, Mali. UNHCR.
    International Federation of Red Cross and Red Crescent Societies. (2019). Community action on Zika project in Honduras and Colombia: Participatory video evaluation report.
    Lwanga, M. M., et al. (2021). A lifesaving GBV, women’s leadership, and SRMH support for refugees in Uganda, Arua District, West Nile: Endline evaluation – Final report. CARE International.
    Plan International. (2023). MEESA report: An evaluation of adolescent girls and young women’s continued access to education during COVID-19 in the Middle East, East, and Southern Africa (March 2020–March 2021).
    Podems, D. R. (2010). Feminist evaluation and gender approaches: There’s a difference? Journal of MultiDisciplinary Evaluation, 6(14), 1–17.
    Seigart, D. (2005). Feminist theory and evaluation.
    Sielbeck-Bowen, S., Brisolara, S., Seigart, D., Tischler, C., & Whitmore, E. (2002). Exploring feminist evaluation.
    UNICEF. (2024a). Evaluation of UNICEF’s response to the Level 2 cholera and Cyclone Freddy emergencies in Mozambique. UNICEF.
    UNICEF. (2024b). Evaluation of UNICEF’s response to the Level 2 cholera and Cyclone Freddy emergencies in Malawi. UNICEF.
    Voluntas. (2022). Midterm evaluation: Partners for justice – Final report. Search for Common Ground.
    Women’s Peace and Humanitarian Fund (WPHF). (2024). Final evaluation report: Women’s Peace and Humanitarian Fund 2019–2023.
    Disclaimer: The content of the blog is the responsibility of the author(s) and does not necessarily reflect the views of Eval4Action co-leaders and partners.

  • Eval4Action Newsletter #60

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • 6 takeaways: Is evaluation our compass to a future free from gender-based violence?

    The fourth Future of Evaluation dialogue, held on the International Day for the Elimination of Violence Against Women, explored the critical question: "Is evaluation our compass to a future free from gender-based violence?" The dialogue emphasized that evaluation is a crucial tool for both challenging gender stereotypes and holding systems accountable for ending gender-based violence (GBV). The discussion established that while evaluation has often functioned as a 'rear-view mirror,' focusing on retrospective reports, its future role must be real-time and forward-looking. Panelists agreed that evaluation must inform and support strategic action, enabling real-time course correction and policy reform to end GBV. The discussion stressed the power of gender-responsive, intersectional, and ethical evaluation methodologies not only to expose harms and blind spots in evaluation but also to preempt them and shift power toward survivors of GBV. The conversation highlighted the imperative to adapt evaluation to emerging challenges, such as technology-facilitated GBV, and to ensure that evaluation findings translate directly into budgets, legislation, and no-harm, rights-affirming policies within institutions as much as in national and other contexts.
    Six quick takeaways from the dialogue
    1. Evaluation must shift from a retrospective 'rear-view mirror' to a proactive, forward-looking strategic compass. It must move beyond focusing solely on past achievements to serve as a continuous learning and steering process that actively searches for systemic solutions and charts new paths. By prioritizing formative and participatory approaches, evaluation can improve the design and implementation of interventions in real time, allowing for necessary course correction. This proactive stance ensures that evaluation effectively addresses accelerating risks, such as climate stress, migration, new technologies, and economic shocks, that can exacerbate GBV.
    2. Gender-responsive evidence is vital to drive policy reform. Decision-makers, including parliamentarians, rely on this evidence to influence national and institutional policy reforms and to create effective legislation and frameworks addressing or mitigating GBV. Evaluation results help ensure accountability, holding governments and service providers responsible, and justify the sustained allocation of budgets for survivor services.
    3. Accountability requires an 'evaluation use architecture'. The responsibility of the evaluator does not conclude with the report's submission. The commitment must be to make evaluation use inevitable by ensuring findings and recommendations travel to decision-making tables, translating into actionable strategies, budgets, and rights-affirming policies in all contexts. This process requires building a dedicated 'evaluation use architecture'.
    4. Adopt ethical, intersectional approaches and mixed methodologies. Thorough stakeholder mapping and evaluability assessments are key to identifying and including the voices of those most marginalized or discriminated against. Methodological approaches must use mixed methods, with a focus on ethical and sensitive approaches, such as feminist evaluation, to remove access barriers. Practical measures must be implemented to address confidentiality and ethical implications, especially when interviewing survivors.
    5. New metrics are needed for evaluating technology-facilitated GBV. The rise of online harms (cyber-stalking, doxxing, deepfakes) demands evaluation tools that can speedily measure prevalence using a 12-month recall period. Effectiveness must be measured by outcomes that matter to survivors, such as tracking platform take-down time and repeat victimization.
    6. Leverage existing guidance and engagement opportunities. Given limited resources, utilizing established guidance and tools should be prioritized over reinventing the wheel. Entities such as the UN Evaluation Group (UNEG), with UN Women, UNFPA, and UN Human Rights, provide resources for integrating gender and human rights into evaluation. Engaging with networks like EvalGender+ and sharing and learning in thematic Communities of Practice (EvalforEarth) helps ensure evaluation is context-specific.
    In case you missed the conversation, catch up with the recording.
    The Eval4Action Future of Evaluation dialogues are a series of forward-looking discussions that explore innovative and adaptive approaches to evaluation. Designed to make evaluation more influential in a rapidly changing and complex world, these dialogues bring together a diverse range of voices—from experts to young evaluators—to share knowledge and highlight ways to future-proof the field of evaluation. Each monthly dialogue is aligned with an international action day, ensuring the conversations are timely and relevant to a global discourse. The next dialogue, “Is evaluation key to realizing universal human rights?” will take place on 10 December 2025. Learn more
    This article was written with AI support with human authors in the lead.

  • Eval4Action Newsletter #59

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • 7 takeaways: How can evaluation shape a future-fit United Nations?

    The third Eval4Action Future of Evaluation dialogue  honoured the United Nations 80th anniversary by addressing the question: How can evaluation shape a future-fit United Nations? Experts from United Nations agencies and beyond convened to discuss the evaluation's pivotal role in enhancing the effectiveness, efficiency, and agility of the United Nations amidst a complex global landscape marked by polycrises, eroding public trust, and funding challenges. The session emphasised moving beyond evaluation as a mere "report card" to serving as an "uncomfortable mirror" for innovation and transformative organisational change. Panellists stressed the need for future-oriented evaluation methodologies, greater interagency collaboration, and a renewed focus on learning to ensure the United Nations remains relevant, effective and capable of delivering on its mandate.  Seven quick takeaways from the dialogue Embrace evaluation as an "uncomfortable mirror" for innovation:  The evaluation function must serve as an independent mirror to "speak truth to power," by delivering an unbiased perspective, which, while occasionally challenging, supports leadership in making informed and strategic adaptations. While structural independence and established norms of the evaluation function protect this role, evaluation findings must be effectively communicated and packaged to inspire confidence and motivate the United Nations to pursue the innovation needed to address global challenges. Strike a balance between accountability, learning, and foresight:  Evaluation's role requires a fundamental realignment, balancing its emphasis on demonstrating accountability to prioritizing its function as a catalyst for systemic learning and adaptation. Future-fit evaluation must look backward with hindsight, inward with insight, and forward with foresight to guide strategic thinking and adaptive programming in an unpredictable context. Future-proofing requires embracing new methodologies:  To assess the agility and adaptability of the United Nations, evaluation must adopt innovative methods that look forward, not just back. This includes incorporating futures methods into mixed-methods evaluations and leveraging AI, data science, and real-time monitoring data (e.g., geospatial analysis) to achieve efficiency gains and strategic foresight, while always ensuring human judgment and ethical oversight. Reposition evaluation to create space for transformative learning and adaptation in programmes: Evaluation should explicitly encourage and value innovation in programme work. Creating a safe space for candid dialogue on the evaluation findings to inform future programming builds a culture of learning. This helps staff see evaluation not as a mandatory compliance exercise, but as a tool for reflection and positive change, boosting innovation. Strengthen coherence through joint and system-wide evaluation:  Recognizing the interconnected nature of global challenges (e.g., health, gender, climate), interagency and joint evaluations are a vital investment for system-wide coherence and collective intelligence. These exercises strengthen the global evaluation ecosystem, enhance legitimacy, and provide a holistic view of the United Nations' contributions to complex cross-sectoral issues. Measure and make visible the "intangible" impacts:  The United Nations' capacity-strengthening work, which often focuses on intangible impacts like fostering trust or building government capacity, must be made visible to demonstrate value. 
Methodologies like Outcome Harvesting can be used intentionally, with dedicated time and well-facilitated processes, to capture and report on these non-traditional benefits, enriching the full story of the United Nations' independent contribution.

Institutionalize evaluation while mastering soft skills for utility: Securing a future-fit United Nations requires evaluation to operate on two essential, reinforcing pillars: institutional mandate and professional soft skills. While formal mandates (like a policy-mandated budget and governance mechanisms) provide the structural independence to speak truth to power, the utility and acceptance of findings hinge on skillful engagement. This involves mastering soft skills in stakeholder engagement and using continuous participatory approaches to ensure stakeholders own the evaluation recommendations and willingly internalize the feedback provided by the "uncomfortable mirror".

In case you missed the conversation, catch up with the recording.

The Eval4Action Future of Evaluation dialogues are a series of forward-looking discussions that explore innovative and adaptive approaches to evaluation. Designed to make evaluation more influential in a rapidly changing and complex world, these dialogues bring together a diverse range of voices—from experts to young evaluators—to share knowledge and highlight ways to future-proof the field of evaluation. Each monthly dialogue is aligned with an international action day, ensuring the conversations are timely and relevant to a global discourse. The next dialogue, "Is evaluation our compass to a future free from gender-based violence?" will take place on 25 November 2025. Learn more.

This article was written with AI support, with human authors in the lead.

  • Evaluation education in India: The need for institutional and instructional shifts to foster youth participation

    By Pooja Pandey, Manager, Capacity Building, Sambodhi Research and Communications

Across the world, evaluation has continued to make its mark, evolving from a critical assessment tool into a well-recognized academic discipline. Higher education institutes across the globe, particularly in geographies like North America, Europe and Australia, are actively offering dedicated programmes in evaluation studies. These courses are generally offered as a Diploma or a Master's Programme, demonstrating a diverse array of academic, pedagogical and curricular approaches.

In India, while we witness widespread use of Monitoring and Evaluation (M&E) as an external tool across the development sector, it is rarely conceptualized or treated as a distinct academic discipline. Currently in India, M&E is ordinarily offered through certificate courses, online courses, or training programmes and workshops. These are offered either by governmental institutions such as the Development Monitoring and Evaluation Office, NITI Aayog, and the National Labour Institute, or by non-governmental organizations such as Sambodhi and Azim Premji University, but they continue to be provisional and non-systematic. The absence of structured academic or capacity-building pathways often confines evaluation to being used and practiced by specialized researchers or consultants, who may or may not be academically trained in the domain. Crucially, what is missing is a conceptual and policy push to view evaluative thinking as a skill and a rewarding academic path that young people in India could pursue.

Teaching evaluation to harness the energy of youth

Evaluation builds the ability to ask critical questions, analyze complex information and make informed decisions. Adopting an academic and in-depth perspective on the field may further facilitate a nuanced understanding of evaluation theories, practices and methodologies. Currently, India is the world's youngest nation, with nearly 52 percent of its population below the age of 30 years. We are, however, also rapidly nearing the stage where this 'demographic dividend' may begin to decline. To adequately harness the energy of youth, we require not just jobs and work opportunities but also an educational system that promotes critical inquiry and reflection.

The higher education system in India is especially known to focus on grades, outcomes, placements and merit, often deprioritizing critical expected outcomes like life skills, social skills, problem solving and community connectedness. In such a context, evaluation promises to be a domain of knowledge that transcends technical know-how and provides skills that empower young people to think critically, operate independently, participate meaningfully and lead responsibly. When youth take an interest in evaluation, they learn to observe policies and programmes and engage closely with direct beneficiaries. While learning to evaluate, they are not just acquiring robust research techniques but are also becoming active citizens and contributors. They learn how to interpret evidence, question underlying assumptions, structure their thinking and offer solutions - traits which are also critical to the functioning of a vibrant democracy.
Need for an institutional shift: From periphery to the center

The National Education Policy 2020 reiterates the importance of critical thinking, interdisciplinary approaches and experience-driven learning. These principles directly resonate with evaluation education - yet neither the National Education Policy nor any other policy in India recognizes evaluation as a distinct subject or discipline of academic interest. A well-informed institutional push can therefore be critical to bring evaluation to the center. Several examples exist across the globe as useful references. Dedicated policies on evaluation often provide an institutional push for adoption: South Africa has pioneered the National Evaluation Policy Framework, and Sri Lanka boasts a standalone National Evaluation Policy - promoting a systemic uptake of evaluation. Apart from policies and academic programmes, many countries have also developed instructive models on evaluation. For example, in the United States, the American Evaluation Association (AEA) works closely with universities to develop academic curricula. Countries like Canada have embedded evaluation courses across other programmes, such as development and education, supported by professional certifications.

India could draw useful reference from these policies and systems and promote the creation of institutions and programmes that foreground the study of evaluation. In its existing form, and with the larger objective of cultivating evaluative thinking and an evaluative outlook, the Indian education system stands to benefit immensely from institutionalizing evaluation as both an academic discipline and a professional practice.

Need for an instructional shift: Teaching evaluation differently

The way evaluation is introduced and taught to students matters greatly. Owing to the nature of the domain, an excessive focus on theories and frameworks alone would de-prioritize the spirit of evaluative critical thinking and practice. The pedagogy of evaluation should embody participation, reflection and adaptation.

Some practical ways to embed this could be encouraging students to work on active evaluation projects with governments or non-governmental organizations, partake in case competitions (such as the World Evaluation Case Competition) or get associated with youth-based networks such as EvalYouth - where they are given the opportunity to apply evaluative frameworks in real time. Even within classrooms, focusing on approaches like case-based learning, peer learning and collaborative inquiry may nudge students to design their own evaluation studies, gather holistic feedback and find solutions to improve programmes and interventions. This may also be complemented by integrating the use of Artificial Intelligence (AI) and digital technologies, further nurturing 21st century skills and best practices. Notably, such youth-led evaluation projects and participation may build confidence, increase youth agency and ensure that learning is rooted in real-world impact.

Moving towards a culture of evaluation

In conclusion, teaching evaluation in India is not about producing a cadre of specialists in the field. Instead, it is about fostering a culture of critical thinking, reflection and accountability amongst the youth. With a strong institutional and instructional push and meaningful youth participation, India may soon be an active contributor to the global momentum in evaluation education.
To reiterate, if evaluation becomes a part of how India's young people think and learn, it could also strengthen the spirit of citizenship and participation. In a world that is changing so rapidly, this no longer remains a choice but a necessity.

Pooja Pandey works with the Capacity Building vertical at Sambodhi and is a part-time PhD scholar at the National Institute of Educational Planning and Administration (NIEPA), New Delhi. Her work focuses on education, evaluation, state capacity, and policy engagement. She has previously contributed to national and international initiatives on capacity development, policy support, independent research, and evaluations. Connect with Pooja on LinkedIn and X.

Disclaimer: The content of the blog is the responsibility of the author(s) and does not necessarily reflect the views of Eval4Action co-leaders and partners.

  • Eval4Action Newsletter #58

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here . As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org .

  • Beyond indicators: Transitioning African M&E from donor compliance to people-centered systems

    By Baraka Mfilinge (Vice Chair, EvalYouth Global) & Prof. Deus Ngaruko (Professor of Economics, Open University of Tanzania)

Across Africa, monitoring and evaluation (M&E) often concludes with counting indicators for donor reports, viewing communities as data sources rather than decision-makers. This blog advocates a shift: from indicators to institutions, from projects to people, and from monitoring to mobilization. Using examples from Tanzania, Rwanda, and Ghana, we show how African-led, youth-driven, and culturally grounded M&E can foster public trust, empower communities, and drive transformation toward the Sustainable Development Goals (SDGs).

Why Africa needs to go beyond indicators

In many parts of Africa, M&E is often viewed as a technical task—encompassing the collection of data, tracking of indicators, and preparation of reports. While this supports accountability, it usually overlooks the human element. Focusing on participation and dialogue ensures that evaluation is not just about measuring progress but also about helping people shape their future. In Africa, M&E systems were primarily shaped by donor and colonial traditions that emphasized compliance over empowerment. Indicators were created mainly for external reports, leaving little room to assist local organizations in implementing changes that are most meaningful to them. This history has led M&E to be viewed as an outside imposition rather than a tool for supporting African-led development. To make meaningful progress on the SDGs, Africa needs an M&E approach that goes beyond just tracking numbers. M&E should be an essential part of a governance and empowerment cycle - enabling institutions to listen, communities to speak, and evidence to guide action.

Why current M&E falls short in the African context

Ownership is lost. Many M&E systems are still designed to meet donor reporting standards rather than local needs. In many countries, indicators are set externally, so data is collected, but decisions are made elsewhere. "Success" in many programmes still depends on reports submitted, workshops held, or surveys collected. Communities often share information without understanding how it benefits their lives. Local governments usually respond to donor demands rather than using M&E evidence to set their priorities.

Metrics are shallow. Too much emphasis is placed on outputs - such as the number of trainings conducted or the number of people reached. It isn't surprising that Tanzania's local health sector performance is judged by the revenue collected, rather than by improvements in mortality rates. This narrow focus risks creating a culture where meeting numerical targets is valued more than producing meaningful change.

Communities experience "data fatigue." People are surveyed multiple times by different projects, but rarely see results shared or acted upon. A farmer in Kilwa, Tanzania, once complained during an endline evaluation that he had answered the same questions about yields for years but was never told how the information was used. When communities feel like data providers rather than partners, trust in M&E deteriorates.

For practitioners, the lesson is clear: to address the data fatigue experienced by communities, indicators should support institutions instead of overriding them. Evidence must be owned, interpreted, and utilized by local actors if it is to retain its long-term value.
When communities are regarded as partners in M&E, rather than just data sources, trust is built, and results are more likely to lead to meaningful action.

Three shifts Africa needs

First, from indicators to institutions. M&E should not be treated as a donor box-ticking exercise; it needs to be part of government systems. When evidence informs planning, budgeting, and council work, it enhances governance and decision-making. Rwanda's Imihigo performance contracts demonstrate how M&E can inform service delivery in real time. Once institutions own their data, evidence moves from paper to practice.

Second, from projects to people. Development is more than numbers. Counting activities isn't enough; what matters is whether lives improve. Participatory methods—such as community scorecards, outcome harvesting, or the Most Significant Change (MSC) technique—enable communities to share their stories and influence decisions. For example, an evaluation of an HIV programme in East Africa revealed that service data alone missed critical barriers. During a focus group, a young woman explained to the evaluators that stigma had prevented her from returning to the clinic—a reality often overlooked in routine statistics but vital for designing more effective responses. This shows that people, not indicators, should define success.

Third, from monitoring to mobilization. M&E matters when findings spark dialogue and action. Too often, reports sit on shelves instead of shaping decisions. In Ghana, citizen-led scorecards on water access created an opportunity for communities, service providers, and local officials to review the results and set priorities together. That process pushed the government to reallocate funds to neglected areas. In Tanzania's Kilwa District, village meetings that shared results inspired residents to launch their own health campaigns. When M&E is rooted in community life, it stops being passive monitoring and becomes a force for mobilization.

Youth as system builders

With Africa's median age under 20, young people can shape M&E in powerful ways. Yet they are often limited to collecting data and excluded from decisions. In reality, young evaluators are innovators and system builders. Recognizing their role is essential if Africa's M&E systems are to stay relevant. Through platforms like AfrEA, the Open University of Tanzania's M&E Alumni Network, and the Global M&E Mentorship WhatsApp Platform, young and emerging evaluators design digital dashboards, lead community-based monitoring efforts, and influence policies in real time. Baraka is a product of this movement. By volunteering, sharing expertise, and supporting one another, young evaluators are developing their skills and growing as professionals. Many now showcase their talents and, more importantly, create real change.

For African M&E systems to truly catalyze transformation, the region must move beyond donor compliance and actively build people-centered systems. The primary call to action is clear: invest in intergenerational leadership and ensure local ownership, so that M&E becomes a practical tool for collective action and lasting change.

Baraka Leonard Mfilinge is an M&E Specialist from Tanzania and serves as Vice Chair of the EvalYouth Global Network and Africa Representative. He is the Director and Managing Partner at Ufanisi Knowledge Hub Consulting and the Founder of VOPME at the Open University of Tanzania. He advocates for practical, youth-led, and impactful evaluation practices across Africa.
Connect with Baraka on LinkedIn and X.

Prof. Deus D. Ngaruko is a Professor of Development Economics at the Open University of Tanzania and an expert in M&E. He serves as the Director of ACDE-TCC, the Chief Editor of HURIA Journal, and the Chairperson of the Professors Forum. He previously served as Deputy Vice Chancellor from 2016 to 2024. Connect with Prof. Ngaruko on LinkedIn.

Disclaimer: The content of the blog is the responsibility of the author(s) and does not necessarily reflect the views of Eval4Action co-leaders and partners.

  • 7 takeaways: How can evaluation be a force for peace and resilience building amidst global instability?

    The second Eval4Action dialogue, honouring the International Day of Peace 2025, explored how evaluation can foster peace and resilience amid global instability and humanitarian crises. Panelists Mohib Iqbal, Hur Hassnain, and Kai Brand-Jacobsen, along with moderator Silvia Salinas Mulder, shared their expertise on how to make evaluation a more equitable and effective tool in complex environments. In a powerful moment, Agnes Nyaga of UN Human Rights passed the EvalTorch to the moderator, symbolizing a shared vision and action for the future of evaluation. The session challenged the notion that evaluation's traditional focus on accountability and learning is sufficient, with an audience poll revealing overwhelming agreement that it is not. The conversation explored how to empower evaluation as a peacebuilding tool, not just a way to measure performance. Central to this is expanding adaptive and locally-led evaluation approaches, and putting ethical considerations at the forefront of evaluation practices.

Seven quick takeaways from the dialogue

Accountability and learning must be actively practiced: Evaluation's traditional roles of accountability and learning are more vital than ever, but they are often not implemented effectively. For evaluation to be a force for peace, we must rigorously hold actors accountable for their actions—or inactions—and create meaningful systems for learning. We need to ensure that evaluation findings are used to improve future peacebuilding efforts.

Move beyond micro-intervention level evaluations to systemic, transformative evaluations: Global issues like conflicts, climate change, and systemic fragility cannot be solved with micro-level, project-focused evaluations. Evaluation must adopt a transformational lens that connects local insights with global systems and vice versa. The unprecedented increase in violent conflicts and the systemic nature of crises demand that evaluators engage with the scale and urgency of the transformation needed, challenging rigid criteria that can miss the nuances of complex contexts.

Peace is more than just the absence of violence: True peace is a lifelong journey of dignity, trust, and inclusion - one that addresses underlying root causes and prevents or transforms conflicts effectively by peaceful means. Evaluation must go beyond simply counting outputs and activities to assess whether interventions foster these elements across generations. This requires a deep understanding of local and institutional values, norms and culture, and what matters most to communities. Evaluations should not be extractive but participatory, co-owned by communities who can define what peace means to them and use the findings to drive their own change.

Harness evidence from diverse sources: Effective evaluation in conflict-affected states requires a broader understanding of evidence. This includes not only cumulative evidence from past studies but also the vital knowledge of practitioners and the deeply held insights, culture and experiences of affected communities. By drawing from all these sources, evaluators can better understand what works, how, and why, and use this knowledge to inform national and international policies.
Strengthen authentic ownership, embedded capacities and locally-led approaches: The operational model for evaluation and learning in contexts of fragility, conflict, and violence must shift to locally and nationally-led approaches, with serious investment in and support for capacities and ownership within the communities and countries affected. Communities and countries impacted by violence and conflict should be supported to develop effective peacebuilding, conflict transformation and violence prevention capacities - including evaluation as part of this. More work should be done through local and national evaluators and through locally and nationally-led evaluation processes. The evaluator can also take on more of a facilitative role, supporting 'sense making' and learning with the communities and stakeholders involved. It is crucial to challenge the notion that "experts" from afar have the answers, and instead empower local and national capabilities to lead the difficult work of data collection and insight generation.

Prioritize ethical responsibility and safeguarding: When working in volatile contexts, no evaluation tool or accountability measure should ever supersede the safety and security of local people. This includes third-party monitors and enumerators, who often face the most significant risks. Evaluators must prioritize a duty of care to these individuals. Where relevant, tools like remote sensing can also be used to limit risk. Evaluations in contexts of conflict, fragility and violence should also ensure trauma-informed practice.

Go beyond outcomes and impact, towards value for money and use of resources: In a world where funding is increasingly scarce, evaluation must go beyond assessing outcomes and impact to assess value for money and investment, and identify which interventions and approaches can really achieve meaningful change - strengthening and supporting peace and overcoming instability and violence. Studies have shown that peacebuilding activities, when done well, can yield a significant return on investment—in some cases, as high as $16 for every $1 invested. By evaluating cost-effectiveness, the evaluation community can demonstrate the immense public value of peacebuilding efforts and justify continued investment in them.

For a deep dive into the discussion, watch the recording.

About the #Eval4Action Future of Evaluation dialogues

The Eval4Action Future of Evaluation dialogues are a series of forward-looking discussions that explore innovative and adaptive approaches to evaluation. Designed to make evaluation more influential in a rapidly changing and complex world, these dialogues bring together a diverse range of voices—from experts to young evaluators—to share knowledge and highlight ways to future-proof the field of evaluation. Each monthly dialogue is aligned with an international action day, ensuring the conversations are timely and relevant to a global discourse. The next dialogue, "How can evaluation shape a future-fit United Nations?" will take place on 23 October 2025. Learn more.

This article was written with AI support, with human authors in the lead.

  • Eval4Action Newsletter #57

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here . As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org .

  • 7 takeaways: Is the intersection of youth, innovation, and influence evaluation's new frontier?

    On 12 August 2025, International Youth Day, #Eval4Action hosted the inaugural Future of Evaluation dialogue, titled "Is the intersection of youth, innovation, and influence evaluation's new frontier?". The dialogue featured a powerful opening by Ms. Diene Keita, Acting Executive Director of UNFPA. Ms. Keita, along with Ana Erika Lareza, Chair of the EvalYouth Global Network, passed the #EvalTorch to moderator Agnes Nyaga of OHCHR, symbolically inaugurating the dialogue and lighting the way for a conversation on the future of evaluation. The dialogue brought together a diverse panel of leaders and experts from different generations and regions, including Natalia Nikitenko (Global Parliamentarians Forum for Evaluation), Brenda Bucheli (ReLAC), Rachael Okoronkwo (Cloneshouse Nigeria), and Cheng Wang (EvalYouth China).

The dialogue confirmed that the intersection of youth, innovation, and influence isn't a future possibility but a present reality for evaluation. The conversation highlighted how young people, as digital natives, are using technology and AI to make evaluation practices more efficient, accessible, and inclusive. The panel also emphasized the crucial role of intergenerational collaboration in blending the experience of established evaluators with the innovation of youth. While acknowledging existing challenges like limited resources for youth participation in evaluation and the need to move beyond tokenism, the dialogue affirmed that young people, especially in the Global South, are already using evaluation as a powerful tool for social change and activism. The dialogue concluded with a strong call to action, urging the creation of more opportunities, mentorship programmes, and open spaces for young people to contribute their voices and perspectives, ultimately future-proofing evaluation.

Seven quick takeaways from the dialogue

Evaluation is a tool for transformation, not just a technical exercise. The dialogue emphasized that when young people are meaningfully engaged, evaluation moves beyond a mere technical function. Evaluation was described as a "tool for transformation" that sharpens focus on accelerating progress towards the Sustainable Development Goals (SDGs). The discussion highlighted how youth are using evaluation to advance advocacy for institutional strengthening and stronger inclusion.

Youth energy is the catalyst for future-ready evaluation. The participants celebrated International Youth Day by recognizing the energy, creativity, and fresh perspectives that young people bring to the field of evaluation. The conversation stressed that youth energy is essential for evaluation to adapt to the "poly-crisis" and rapid global changes. The discussion framed young people not just as participants but as innovative leaders who are resilient, adaptive, and committed to the achievement of the SDGs.

Many young people are digitally fluent, including in AI, and are reshaping evaluation practices. The conversation frequently touched on how digital tools and artificial intelligence (AI) are driving the rapid evolution of evaluation practices. It was noted that young people, as "digital natives," bring technological skills that optimize work processes and give them more relevance in evaluation teams. New technologies were highlighted as a way to dramatically decrease the cost and timeframe of evaluations, making them more accessible and timely.

Intergenerational collaboration is key to bridging the experience gap in evaluation.
The dialogue emphasized the importance of collaboration between experienced and young evaluators. The EvalYouth Mentoring Programme was cited as a successful example of this, fostering connections and knowledge transfer between generations. The dialogue stressed the importance of combining the innovative approaches of new technologies with the seasoned interpretations of experienced evaluation professionals. This collaboration is seen as a way to ensure that evaluation grows into a profession where intergenerational dialogue and alliance are the norm.

A broader and interdisciplinary understanding of evaluation is needed to increase its influence. A more expansive and interdisciplinary definition of evaluation allows for greater utilization and influence of evaluation practices. Evaluative thinking can be applied in various phases of development initiatives, making it more integrated into professional development. This perspective suggests that evaluation is not an isolated exercise but a continuous process that empowers professionals, including youth, with skills applicable to multiple roles.

Youth in the Global South face unique challenges but are pioneering innovative strategies in evaluation. Young people in the Global South often face significant challenges in evaluation, such as limited access to reliable data, a lack of financial resources and technical skills, and complex political and social environments. Despite these limitations, there are several examples of young people in the Global South demonstrating resilience and innovation when conducting evaluations - for example, using mobile technology and social media for data collection and dissemination, and forming grassroots networks to drive accountability.

The future of evaluation requires a systemic approach to youth engagement. While the dialogue was optimistic, the conversation acknowledged that meaningful youth engagement is a work in progress. Young people often assume more operational roles in evaluation, such as data collection, rather than strategic roles in decision-making. To overcome this, systemic changes are needed. It is also important to advocate for policies that promote educational opportunities in evaluation for youth, and stronger partnerships between youth movements and local evaluation actors. These actions will help ensure that youth are not engaged through mere tokenism but are genuinely empowered to lead and shape the future of evaluation.

For a deep dive into the discussion, watch the recording.

About the #Eval4Action Future of Evaluation dialogues

The Eval4Action Future of Evaluation dialogues are a series of forward-looking discussions that explore innovative and adaptive approaches to evaluation. Designed to make evaluation more influential in a rapidly changing and complex world, these dialogues bring together a diverse range of voices—from experts to young evaluators—to share knowledge and highlight ways to future-proof the field of evaluation. Each monthly dialogue is aligned with an international action day, ensuring the conversations are timely and relevant to a global discourse. The next dialogue, "How can evaluation be a force for peace and resilience building amidst global instability?" will take place on 23 September 2025. Learn more.

This article was written with AI support, with human authors in the lead.


