
Search Results




  • Eval4Action Newsletter #41

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • Girl voice and meaningful participation: intersections and implications for evaluation

    By Sarah Dickins, Girlguiding UK

Girlguiding is the UK’s largest youth organisation dedicated completely to girls, with around 370,000 members. We help girls know they can do anything, whether they’re 4 or 18 or in between. We show them a world of possibilities, big and small. We help them think big and be bold in a space where they can be themselves, get creative, explore, and have fun. We’re a powerful collective voice – with girls, led by girls – changing the world for the better. At Girlguiding, our commitment to amplifying girls’ voices shapes how we work. Our advocacy work is led by the advocates, a panel of young members aged 14 to 25 who act as spokespeople for Girlguiding, talking to UK Members of Parliament and other changemakers on issues that affect girls. Our youth steering group, Amplify, feeds back on internal work and processes, making sure girls’ experiences and preferences are heard at the highest levels of our organisation. But what are the implications for monitoring and evaluation? How can we understand meaningful participation in the context of girl voice? And how can girl voice enhance and innovate evaluation processes?

What’s meaningful about participation?

‘Meaningful participation’ has become a widespread term since the popularisation of participatory evaluation approaches in the 1990s. Perhaps more than other types of evaluation, this term encompasses a range of understandings, experiences and techniques. Because meaningful participation is nuanced, context-sensitive and flexible by definition, there are almost as many definitions of it as there are participatory evaluations themselves.
UNICEF’s Methodological Brief on participatory approaches notes, for example, just a few of the areas in which meaningful participation can differ from context to context, including “a wide range of different types of participation, which differ in terms of what is understood by ‘participation’, whose participation is wanted, and what it is that those people are involved in and how”. For the purposes of this blog and Girlguiding’s work, however, it’s helpful to think of ‘meaningful participation’ as having two core parts: a commitment to stakeholder participation at one or more phases of an initiative or project, and a need for this participation to support the development, implementation and/or learning of that initiative or project in a genuine and purposeful way.

Meaningful participation and girl voice

Meaningful participation also has a particular history in the context of youth-centred projects. In his 1992 article, ‘Children’s Participation: From tokenism to citizenship’, Roger Hart applied Sherry Arnstein’s Ladder of Citizen Participation to children. The principle of this ladder is simple: if people have more opportunities to participate in processes that affect them, they are more empowered to make decisions and shape a more equal future for their communities. Hart takes this ladder metaphor further, suggesting there are increasing degrees of ‘true’ participation. As our programming and evaluation become more participatory, we see an increasing shift from adult to child leadership, direction and ownership in decision-making. So what does this mean in the context of girl and young women-centred initiatives? Whilst the principles of Hart’s framework are still relevant, there are additional considerations.
Intersectional feminism argues that we interact with global power structures differently based on our unique combined experiences of gender, age, ethnicity, disability, class, religion and other factors. Using this lens, we can see that girls and young women face the overlapping challenges of being female and being young – as well as their many and varied experiences of discrimination based on ethnicity, disability and socioeconomic deprivation. And Girlguiding’s research suggests that girls’ experiences are getting worse. Our 2023 Girls’ Attitudes Survey reveals girls’ happiness levels have significantly declined over the past 15 years, with only 17% of girls aged 7-21 stating they feel very happy, compared to 40% in 2009. At Girlguiding, we try to make our girl-centred evaluations sensitive to this context. This includes evaluation practices, such as cross-disaggregating data by gender, age, ethnicity and disability; and promoting ‘brave spaces’ in focus groups, workshops and interviews, where girls are encouraged to challenge, innovate and co-create evaluation processes in a psychologically safe and confidential environment. Importantly, too, meaningful girl-centred evaluation involves investing in girls’ leadership in strategic planning and evaluation processes. One example of this work is in the development of our 2020+ Strategy, which consulted with over 50,000 girls, young women, volunteers, parents and carers, and staff. As part of this process, Girlguiding developed and delivered participatory workshops with over 1250 girls aged 5-18. Outcome mapping activities were ‘gamified’ in age-appropriate ways, for example, girls in the Rainbows section (ages 5-7) and the Brownies section (ages 7-10) were asked to help a fictitious ‘Cecil the snake’ find her colourful stripes, by identifying the things that make Girlguiding unique and special. 
Girls aged 10-18 in the older sections, Guides and Rangers, explored outcomes and areas of improvement by workshopping what the values and principles of a fantasy future Girlguiding might be. In both cases, data and lessons learnt from these participatory workshops have been used to shape Girlguiding’s subsequent evaluation agenda and frameworks, including our organisational theory of change and flagship 2023 impact report – which highlighted that Girlguiding girls are up to 23% more confident than UK girls not in guiding. The importance of empowering girls to lead and shape programmes was also highlighted in this consultation, contributing to the creation of Amplify, our youth steering group, who not only provide youth leadership in our organisational governance, but also deliver their own monitoring and evaluation of youth-led governance through self- and group reflection.

Integrating girl voice into evaluation: task, timing and tone

So, finally, how can girl voice be successfully integrated into participatory evaluation? At the end of 2022, we asked Amplify what made girls’ participation in focus groups more participatory and engaging. They gave a range of ideas, which can be summarised as the ‘task’, ‘timing’ and ‘tone’ of evaluation. First, meaningful girls’ participation in evaluation needs a suitable task. The key to meaningful participation is that it’s purposeful: you may need to hear from girls to make the evaluation more accurate, to empower girls further through evaluation, to build long-term relationships or to fulfil strategic commitments. Whatever your reasoning, you need a clear idea of why you want to involve girls in your evaluations and, therefore, who it’s best to involve. This means being intentionally inclusive in inviting and enabling girls from a range of backgrounds, especially those who are most marginalised or most affected by the issues your evaluation addresses.
Second, meaningful girls’ participation needs to be well-timed. This principle is about working with girls to find appropriate moments for them to participate in evaluations. This means respecting that girls and young women often have many competing priorities for their time and energy, as well as thinking fully about the various and iterative stages of evaluation that girls can be meaningfully involved in, from design to data collection, analysis and socialisation of findings. Third, meaningful girls’ participation needs an empowering tone. Adult facilitators should use respectful, non-patronising language throughout, both minimising jargon and explaining relevant technical concepts in age- and context-appropriate ways. When reporting findings, relay the girls’ thoughts respectfully and, where appropriate, use the original terms and phrasing, as these may have been carefully chosen by the girls to convey their perspectives. And finally, it’s always important to give credit where credit is due, acknowledging and celebrating where girls have contributed to your evaluation design, process and findings. In this way, meaningful girl participation benefits both the evaluation and the girls themselves. It not only provides more accurate, creative and complete findings; it also builds long-term, respectful relationships and enables Girlguiding’s mission: to help girls know they can do anything.

Sarah Dickins is a monitoring, evaluation and learning specialist who has spent the last decade working with girls and other young people around the world. She’s passionate about how participatory evaluation can empower communities. At Girlguiding UK, she delivers the Insight team’s longitudinal quasi-experimental impact study. Connect with Sarah via LinkedIn.
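The cross-disaggregation practice described in the blog above can be sketched in a few lines of code. This is a minimal illustration only: the dataset, column names and happiness metric are hypothetical stand-ins, not Girlguiding's actual survey or pipeline, and it assumes pandas is available.

```python
import pandas as pd

# Hypothetical survey responses; 1 = respondent reports feeling very happy.
responses = pd.DataFrame({
    "age_group":  ["7-10", "7-10", "11-14", "11-14", "15-21", "15-21"],
    "disability": [False, True, False, False, True, False],
    "very_happy": [1, 0, 1, 0, 0, 1],
})

# Cross-disaggregate: compute the share reporting 'very happy' within each
# subgroup, rather than a single headline figure for all respondents.
disaggregated = (
    responses
    .groupby(["age_group", "disability"])["very_happy"]
    .mean()
    .rename("share_very_happy")
    .reset_index()
)
print(disaggregated)
```

Subgroup figures like these can reveal gaps that a single headline average would hide; in practice, small subgroup counts would also need suppression rules to protect respondents' confidentiality.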

  • Eval4Action Newsletter #40

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • Guidelines for self-assessment of Youth in Evaluation standards

    Are you ready to self-assess your organization’s alignment with the Youth in Evaluation standards?

1. Follow the self-assessment guidelines available below
2. Self-assess your organization by 31 March
3. Share the report with good practices
4. Get recognized as a champion at Youth in Evaluation week 2024!

Guidelines

Step 1: Find the most relevant standards for your organization. Standards are available here for academia, governments, the private sector, international organizations, Voluntary Organizations for Professional Evaluation and youth organizations. Each standard is accompanied by a self-assessment tool.

Step 2: Share and discuss the standards with the leadership/management of your organization for buy-in and endorsement.

Step 3: Raise awareness among colleagues in your organization on the Youth in Evaluation standards. If your organization needs assistance in identifying external resource persons for an awareness-raising event, reach out to the Eval4Action campaign.

Step 4: Initiate a dialogue within the organization on current practices in engaging youth in evaluation. A pre-arranged 2-3 hour meeting with representatives from each unit or section in the organization would be helpful.

Step 5: Assign a team to undertake the self-assessment and make recommendations to improve organizational practices to advance the meaningful engagement of youth in evaluation.

Step 6: Conduct your self-assessment using the provided tool. Customized self-assessment forms are available via a downloadable link next to each standard on the website. Choose the relevant assessment form for your organization.

Step 7: Share the self-assessment report, including good practices, with contact@eval4action.org by 31 March. The report can include the finalized assessment form (Excel sheet) together with a slide deck that highlights good practices and progress on various dimensions. Sharing this information will facilitate cross-fertilization of knowledge among other organizations.
Opportunity to be recognized as a champion at Youth in Evaluation week 2024

The champions from each stakeholder group will be selected based on the self-assessment reports submitted by 31 March. The selected champions will be announced at the Youth in Evaluation week to be held from 8-12 July 2024. For any further information or support in the self-assessment process, please reach out to contact@eval4action.org.

  • Eval4Action Newsletter #39

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • ‘I think after 30 years in development, I know what works’: How can evaluators better engage with development professionals?

    By Tom Ling, European Evaluation Society

Years ago, I was talking to an experienced development professional about integrated nutrition programmes. I observed that the approach taken by their organisation differed from the approach taken by another leading organisation. I was curious to understand why these approaches differed, and asked if evaluation could identify good (or even better) practice. The answer was polite enough but disturbing: ‘I think, Tom, after 30 years in international development, I know what works’. The logic was implicit but clear: ‘I don’t need evaluators to tell me how to do my job’. To be honest, I have sympathy for this view. I believe the evaluation community should be more helpful. Here I suggest two ways we might be more useful: first, by putting complex programmes much more clearly in their social context and, second, by co-producing responses alongside practitioners and other decision makers. Decision makers have not always been well served by what is produced by the evaluation ecosystem. [1] What help does this ecosystem offer the professional from the opening paragraph? In the case of integrated nutrition programmes, she might think it is not at all helpful. Undernutrition contributes globally to 45 per cent of preventable deaths in children under 5 [2], but the evaluation community is a long way from providing coherent evidence to deliver better care for children. Some years after the conversation mentioned above, a systematic review reinforced the concerns of our practitioner: There is substantial evidence of positive nutrition outcomes resulting from integrating nutrition-specific interventions into nutrition-sensitive programmes. However, there is paucity of knowledge on establishing and sustaining effective integration of nutrition intervention in fragile contexts.
[3] ‘What works in what context’ is a helpful starting point, and developing middle-range theories based on understanding the programme theory of change and its context is an important part of evaluation practice. However, the application of this mantra, and its relevance to highly complex programmes in ‘rugged’ operating environments [4], has multiple problems. These problems start, I believe, from designing the evaluation based on a narrow theory of change while asking: ‘based on this individual evaluation, what can we say about what works in what context?’ This immediately sets us off on the wrong foot. The importance of context was highlighted in the important work of Pawson and Tilley. [5] However, it is not clear that evaluators have taken ‘context’ sufficiently seriously. Most often, a complex programme is a very small event in a very large system. Where the programme bears fruit and delivers benefits, it is because of how it lands in, and works with, this system. The primary causal driver is often not the programme but the social systems it is part of and contributes to. In these complex circumstances, programmes will often have the following characteristics (see Woolcock, 2022 [6]):

  • Implementing practitioners have considerable discretion when delivering the programme;
  • ‘Success’ depends upon multiple transactions and negotiations across different individuals and organisations;
  • The aims, resources and imperatives of the programme are only part of what drives behaviours, including behaviours of the intended beneficiaries; and
  • Intended beneficiaries are not defined by the programme but have agency, which they use in ways which may have nothing to do with the programme (and individual-level behavioural theory may be especially unhelpful in this context).

Addressing this involves putting human agency more firmly at the centre of evaluation. It is people who make things happen, not programmes.
But, although it is people that drive change, people do not choose their social circumstances. [7] These circumstances include (among other things) the unequal distribution of resources and power. [8] These circumstances are not immutable, but they account for the observed patterning of social life. We need to draw more heavily on social science (among other sciences) to bring this patterning into evaluations. So where does this leave the development professional whom we met in the first paragraph? To be helpful, evaluations should relate to practitioner experience in three ways:

  • The theory of change should include a deep understanding of the social circumstances which led to the problems arising in the first place and thwarted previous efforts;
  • The analysis should use social science and resist over-individualising behavioural explanations, without ignoring the importance of human agency; and
  • We should understand programmes as small events in large systems and attribute causality to social processes, not to programme logics.

And would our practitioner be satisfied with this? Well, to some extent, but perhaps not entirely. The next piece of the ‘what works in what context’ jigsaw should involve paying much more attention to building scientific knowledge over time. We should move away from only asking ‘did the programme work?’ and towards also asking ‘what have we learned about how better to deliver the Sustainable Development Goals (SDGs) and the other pressing challenges of our age?’ Development professionals should be part of a learning system that uses evaluation to help answer questions that they think are important, using evidence that contributes to better informed judgements and decisions. Finally, even if we engage more fully with understanding the complex environments of international development, evaluators should, I believe, take more responsibility for collaborating with practitioners.
Over two decades ago, Ziman raised the question of how professionals from a scientific background could communicate better with decision makers in politics and law. I would extend this to include how evaluators (especially those seeking to draw more heavily on social science) should engage with practitioners in international development. Ziman [9] says: ‘Scientists who are only accustomed to the scientific mode of disputation are not well prepared for the debating rituals of transcientific controversies. They bring into the proceedings the scientific expertise and presentational skills which have stood them well professionally and find that these do not work as usual. That is to say, their accustomed rhetorical style, shaped and refined in purely scientific arenas, just does not succeed …’ Our aim as evaluators should be to respond positively to the ‘different rhetorical styles’ of practitioners (including, I should add, with a greater willingness to challenge some of those styles) so that practitioners (and policy makers) have more confidence that evaluations can help them do their jobs better. For this to succeed we need to redesign the evaluation ecosystem, including reconsidering how we frame problems, design evaluations, include young and emerging evaluators and engage their energy and creativity, conduct evaluations, and communicate and make sense of our findings. Contributing to this is where Eval4Action and other leading parts of our evaluation community add great value. We need to reach out to the funders and users of our evaluations as part of this redesign. In this, we need to be less mesmerised by the need to be independent, and more concerned with how we contribute to turning around our stalled SDGs and building a just transition to a better future. Editor's note: This blog was written during Tom Ling’s tenure as the President of the European Evaluation Society.
Tom Ling has over 30 years of experience in designing, managing, and delivering complex evaluations focused on innovation, impact and quality. His clients have included UK Government Departments and agencies, the European Commission, UNDP, OECD, the World Bank, and many others. He is a senior research leader and head of evaluation at RAND Europe. In addition to his current role, Tom has worked as head of evaluation at Save the Children and as a senior research fellow at the National Audit Office, and has held various academic posts, including Professor Emeritus at Anglia Ruskin University. He is the former President of the European Evaluation Society and an advisor to the World Bank’s Global Evaluation Initiative. Tom can be reached via LinkedIn and email at tling@randeurope.org.

_

[1] The term ‘evaluation ecosystem’ refers to the interlocking processes through which evaluation needs are identified, evaluations are commissioned, suitable evaluation providers identified, proposals submitted, evaluations conducted, and evaluation results published and used.

[2] Abdullahi, L.H., Rithaa, G.K., Muthomi, B. et al. ‘Best practices and opportunities for integrating nutrition specific into nutrition sensitive interventions in fragile contexts: a systematic review.’ BMC Nutr 7, 46 (2021). https://doi.org/10.1186/s40795-021-00443-1

[3] Ibid.

[4] A ‘rugged’ environment is one which is highly variable and unpredictable, meaning that we cannot easily transfer lessons about best practice from one context to another. See: Pritchett, L., Samji, S., and Hammer, J. (2012) ‘It’s all about MeE: Using structured experiential learning (‘e’) to crawl the design space.’ Helsinki: UNU-WIDER Working Paper No. 2012/104.

[5] Pawson, R. and Tilley, N. (1997) Realistic Evaluation. London: Sage.

[6] Woolcock, M. (2022). ‘Will It Work Here? Using Case Studies to Generate ‘Key Facts’ About Complex Development Programs.’ In J. Widner, M. Woolcock, & D.
Ortega Nieto (Eds.), The Case for Case Studies: Methods and Applications in International Development (Strategies for Social Inquiry, pp. 87-116). Cambridge: Cambridge University Press. doi:10.1017/9781108688253.006

[7] Or as Marx noted in The Eighteenth Brumaire of Louis Bonaparte: "Men make their own history, but they do not make it just as they please; they do not make it under circumstances chosen by themselves, but under circumstances directly encountered, given and transmitted from the past."

[8] Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. London: Routledge.

[9] Ziman, J. (2000) ‘Are debatable scientific questions debatable?’ Social Epistemology, vol. 14, nos. 2-3, 187-199.

  • Eval4Action in 2023: Year-End Newsletter

    The year-end newsletter showcases Eval4Action's progress and achievements in 2023. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • The Participatory Evaluation model: a sustainable approach for the achievement of the 2030 Agenda

    By Raquel Herrera, EvalYouth Costa Rica & ReLAC

This blog delves into the basic concepts and methodology of the Participatory Evaluation model, with the hope that it will lead to an understanding of why this model is a masterstroke for governments in achieving the Sustainable Development Goals (SDGs). This model comes from a theoretical background based on the Constructivist and Transformative Paradigms (Mertens, 2005), emphasizing how individuals construct their own understanding of reality through their experiences and specific contexts (i.e., geographical, cultural, social, political, and economic). Acknowledging these multiple perspectives and the nuances in the social construction of knowledge opens the possibility of reflexivity and collaboration to recognize social injustices and power imbalances. This reflexivity through communal conversations allows stakeholders to build more sophisticated and collective knowledge about specific and general subjects, in languages and spaces close to the people involved. The manual Sowing and Harvesting, by Tapella, Rodriguez, Sanz, Chaves, and Espinoza (2021), available on the EvalParticipativa website, is a fundamental text for those embarking on Participatory Evaluation. It notes that this model aims for the involvement of all stakeholders in all the different stages of the evaluation, from the methodological design of the evaluation through the dissemination of the results. The SDGs aim to address global challenges and achieve a sustainable and equitable future, “leaving no one behind”. The ideas mentioned above align with the 2030 Agenda for Sustainable Development, which explicitly states that “74. Follow-up and review processes at all levels will be guided by the following principles: d.
They will be open, inclusive, participatory, and transparent for all people and will support the reporting by all relevant stakeholders.” The best way to ensure community engagement in achieving the SDGs is to empower the people to have conversations about their issues and promote their involvement in all the stages of the evaluation process. These conversations among and within stakeholders will bring visibility to the main topics and other subtle issues around each SDG. The community ownership of the interventions, involving a participatory approach to decision-making, where community members have a say in how resources are used and in the planning of projects that benefit everyone, will be encouraged by the sense of belonging and involvement in developing strategies for solving local problems. After all, we all want to live better. In order to open participatory spaces for all stakeholders, the evaluation of SDGs must be coherent with equity criteria, local contexts, and local knowledge. Among the many benefits of this model is that stakeholders can tailor each intervention to meet their needs. Taking ownership of interventions in the communities makes them sustainable primarily through institutional support for initiatives based on community needs. This means that sustainability is not determined by the efforts of public management but rather by the interests, contexts, cultures, and territories of the people involved. Because of the local interpretation of the problems and the corresponding solutions, Participatory Evaluation can be used to address each SDG in a way that conforms to each community's point of view.
As a result of the community's involvement in the evaluation, the stakeholders develop new capabilities and strengthen the ones they already have, promoting the continuity of the processes because now they have the know-how and the confidence to run a process by themselves. These capabilities remain in the communities, and, as has been seen in practice, these participatory evaluations have an even greater reach by creating local networks with other communities or social groups. Now, the question is, how can this be done? A peer work process is an effective method for navigating the methodological framework of an evaluation. In participatory evaluations, the evaluator's role takes on a facilitator's nuance. This means that stakeholders decide together what, when, and how to evaluate, as well as how to analyze the data and communicate the results. To begin, we can mention seven principles of Participatory Evaluation (Tapella et al., 2021) that have been collectively constructed by a group of Latin American evaluators involved in evaluations of this type. The principles are:

1. The stakeholders relevant to the intervention or situation being evaluated are actively and consciously incorporated into the evaluation process as subjects of rights.
2. Local knowledge is recognized as valid and essential for evaluation.
3. Institutional representatives work in partnership with local stakeholders in the design, implementation, and interpretation of the evaluation findings.
4. The use of didactic techniques and materials facilitates dialogue by generating spaces and procedures for the collection, analysis, and use of information.
5. The participating stakeholders are accountable for both evaluation processes and results.
6. The evaluation process strengthens local planning and decision-making skills.
7. External evaluators act as facilitators of the evaluation process.

At this point, it is important to highlight two ideas: 1.
The role of the evaluator is more of a facilitator. As part of their most fundamental work, evaluators must understand the stakeholders’ context, as it determines the calls for participation, the use and application of research instruments, and their systematization. 2. A greater amount of responsibility and foresight is fundamental in the application of the research and evaluation instruments. While it's true that this model demands a longer development period and relies on the available resources for evaluation, it serves as a critical means to ensure the sustainability of actions aimed at achieving the Sustainable Development Goals (SDGs). I believe that Participatory Evaluation makes a significant contribution to restoring citizenship, which has been taken from people in many ways. In the present day, political discourse is primarily focused on representative democracy, which, in theory and in its purest form, should reflect the visions and voices of the communities it represents. However, in many cases, it provides little opportunity for active citizen participation. The participatory model, on the other hand, empowers people and promotes an environment for active listening between stakeholders. In addition, if the public makes its voice heard, legislators will be required to act on its behalf. For greater outcomes regarding the SDGs, the Participatory Evaluation model holds that the achievement of these goals is not so much a matter of political will or ideological positions but rather of a cultural change in the way life is understood. This means understanding life as an integral space of equity, participation and listening, in which the knowledge of all living and non-living beings, as well as the environment as a whole, is taken into consideration. The participatory approach is a historical debt that governments owe to the public and the environment, whether designing public policies or conducting evaluations.
Democratic societies must guarantee communal participatory spaces and secure the link between the institutional ecosystem and society.

References

Tapella, E., Rodríguez, P., Sanz, J., Chavez, J., Espinosa, J. (2021). Manual Siembra y Cosecha. Instituto Alemán de Evaluación de la Cooperación para el Desarrollo (DEval).

Mertens, D. M. (2005). Research and Evaluation in Education and Psychology. Sage.

Raquel Herrera is an emerging evaluator from Costa Rica, holding an undergraduate degree in Social Communication and a Master's degree in Evaluation of Developmental Programs and Projects from the University of Costa Rica. Currently, she is the Chair of the national EvalYouth Costa Rica chapter and a member of the current Executive Committee of ReLAC, the Network for Monitoring, Evaluation, and Systematization of Latin America and the Caribbean. Raquel's work primarily revolves around Participatory, Indigenous, and Decolonizing Evaluation, reflecting her keen interest and extensive involvement in these areas. Raquel can be reached at raquel.herrer@gmail.com.

  • Eval4Action Newsletter #38

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • The role of EvalYouth in shaping evaluation governance: overcoming challenges

By Miriam Ordoñez EvalYouth LAC Summary When faced with the complexity of evaluating progress towards the Sustainable Development Goals, governance is the best way to mobilize the participation, resources, and knowledge of multiple stakeholders. EvalYouth chapters bring together young and emerging evaluators (YEEs) to promote their own professional development. As such, through their voluntary action, they have a more relevant role than is commonly recognized in shaping the governance of the evaluation ecosystem. However, encouraging the participation and self-organization of YEEs is a goal that should not be underestimated, as it entails multiple challenges. Understanding the personal motivations of YEEs is key to this end, but so is a democratic context that promotes their participation. In 2017, I joined a Voluntary Organization for Professional Evaluation (VOPE) for the first time. I was just starting out in the evaluation field and hoping to make connections with the evaluation community in Mexico. Almost immediately after I became a member of the VOPE, the organization’s leaders decided to organize the volunteer work around various thematic initiatives, including EvalYouth. Together with other young evaluators who, like me, were interested in strengthening our capacities, we founded EvalYouthMx [1]. A couple of years later I found myself leading the EvalYouthLAC initiative, and now it is clear to me that VOPEs and initiatives are part of a far-reaching institutional and normative architecture. Since the designation of 2015 as the ‘Year of Evaluation’ and the setting, that same year, of the first Global Evaluation Agenda to raise the voices of evaluators, participation in evaluation and the use of evaluation results to identify key focus areas have grown in prominence within this architecture. 
The Global Evaluation Agenda promotes a positive enabling environment for evaluation and evidence-based policymaking advocacy. This means collective action among multiple governmental and non-governmental actors to promote a culture of collaboration; influence the creation of sound policies and the allocation of public resources; strengthen monitoring and evaluation systems; innovate new approaches and methodologies; and insist on the professionalization of evaluators. This is why evaluating progress towards the Sustainable Development Goals (SDGs) is, above all, about governance. Governance can be understood as both a means and an end (Monkelbaan, 2019). An end, because countries are primarily responsible for achieving the 2030 Agenda for Sustainable Development, and it is essential that they make progress in strengthening the capacities of their institutions to ensure accountability and transparency, and in forging participatory societies that demand accountability for development results through evaluation. Additionally, governance demands the revitalization of multi-stakeholder partnerships for sustainable development (SDG 17). According to the 2030 Agenda, these partnerships should share knowledge, resources, and innovations for the achievement of the SDGs. Faced with the complexity of achieving sustainable development, all forms of collaboration count, and the voluntary work carried out by EvalYouth chapters has a relevant role in increasing the involvement of youth in the promotion of evaluation as a means of promoting transparency and accountability in development policies. 
They also bring together and make positive use of the volunteer work of YEEs to further their own professional development and leadership in the evaluation ecosystem. In addition, the creation and sustainability of EvalYouth chapters is an exercise in network governance. However, encouraging the participation and self-organization of young evaluators is a goal that should not be underestimated, as it entails multiple challenges. Some reflections, in the context of the EvalYouthLAC experience, are outlined below. 1. YEE initiatives are spaces that contribute to reducing institutional disaffection (Torcal, 2006) among young people According to the University of Cambridge, young people in Latin America currently experience two phenomena: the first is democratic apathy; that is, scepticism about institutions and a low interest in getting involved in politics. The second is democratic antipathy, which is generated when there is a systematic exclusion of youth and a violation of their rights by the state. [2] This provokes a rejection of democracies, especially when young people are living in unstable political regimes whose institutions do not encourage youth development and participation. Considering the challenges currently faced by Latin American democracies, EvalYouth initiatives are spaces in which values of good governance, such as citizen participation and transparent, accountable institutions, are promoted. Thus, YEE networks can contribute directly or indirectly to diminishing youth apathy and democratic antipathy. 2. YEE chapters represent a network governance challenge in themselves Sustainable development issues are complex, and often beyond the capacity of individual states. The active involvement of non-governmental actors is fundamental to making progress in tackling global challenges. The ideal of network governance is horizontal government, which functions particularly through self-organization and inter-organization among diverse actors and citizens. 3. 
Coordination and involvement of young people Young people are not typically taught how to coordinate themselves to solve public problems, much less how to get involved in voluntary initiatives. EvalYouth chapters provide a platform for young individuals to exercise their rights to free association and citizen participation. 4. Challenges of self-organization YEE chapters face challenges related to self-organization and collaboration with other actors in the evaluation ecosystem, such as VOPEs, international organizations and even governmental actors. Endogenously, the capacities of members to coordinate, communicate, and achieve common goals vary significantly. 5. Importance of trust and incentives Trust-building and creating incentives for voluntary participation are crucial for successful self-organization within YEE chapters. It is important to note that work within the framework of EvalYouth chapters is voluntary, and although chapters are coordinated by elected leaders, this does not give leaders a position of authority over other members. A leader’s task is to coordinate and steer their chapter’s different aims, resources, knowledge, and efforts. To achieve this purpose, self-organization is crucial, but it requires weaving bonds of trust and creating incentives to encourage voluntary participation. Governance is commonly associated with institutions, but rarely with human interactions. Yet these human interactions are indispensable for cooperation. 6. Motivation and governance for SDGs Motivation is essential to sustain the evaluation ecosystem, while governance mechanisms enable its functioning. Effective and affective governance is seen as necessary for evaluating and achieving the SDGs. EvalYouthLAC seeks to contribute both to enabling the participation of YEEs and to encouraging their self-organization towards professionalization, while supporting innovation in our practice. References Monkelbaan, J. (2019). 
Governance for the Sustainable Development Goals. Singapore: Springer. Torcal, M. (2006). “Desafección institucional e historia democrática en las nuevas democracias”. Revista SAAP, 2(3), August, pp. 591-634. Miriam Ordoñez holds a PhD in Development Studies in Latin America from the Mora Institute. For 12 years she has worked as a public official, professor, researcher, external consultant, and volunteer in strategic planning, monitoring, and evaluation. Currently, she co-leads the 2023-2025 Executive Committee of EvalYouth LAC, which seeks the professional development of young and emerging evaluators. Reach out to her via miriam.orbal@gmail.com. _ [1] EvalYouth Mexico was initially formed in 2017 within the National Academy of Mexican Evaluators (ACEVAL) by Gerardo Sánchez, Daniela Dorantes, Evelyn Aguado, and me. [2] Luminate (2022). Youth and Democracy in Latin America. Retrieved from: https://luminategroup.com/storage/1459/EN_Youth_Democracy_Latin_America.pdf

  • Eval4Action Newsletter #37

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • Youth in Evaluation standards: Lessons & action for meaningful partnership with youth in evaluation

    On 7 November 2023, Eval4Action partners, youth organizations, international organizations, Voluntary Organizations for Professional Evaluation (VOPEs), academia, the private sector, and the evaluation community at large convened virtually for a dialogue on the use and uptake of the Youth in Evaluation standards. The event also marked the launch of lessons from a pioneering approach by UNFPA in engaging youth in the formative evaluation of UNFPA support to adolescents and youth. Through a groundbreaking Youth Steering Committee working alongside senior evaluation professionals, this experience has been a model of innovative youth engagement in evaluation. For the first time in the United Nations system, young people were involved in the evaluation as contributors, evaluators and key informants, as well as co-managers and co-decision makers. This engagement was also supported by the EvalYouth Global Network. A publication release and film premiere on this experience fostered a broader discussion on practical and actionable strategies to apply the Youth in Evaluation standards in various contexts. Watch the event recording Speaker line-up

  • Eval4Action Newsletter #36

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • A resolution for action: join the efforts to use evaluation to accelerate sustainable development

    By Sarah Farina Broadleaf Consulting On April 26, 2023, the United Nations General Assembly adopted Resolution A/RES/77/283, titled "Strengthening Voluntary National Reviews through Country-led Evaluation." This Resolution, led by Nigeria and co-sponsored by 24 Member States, marks a pivotal moment for the evaluation community. The resolution calls for countries to prioritize country-led evaluations of the Sustainable Development Goals (SDGs) and integrate evaluative evidence into decision-making processes. The resolution also calls on countries to include evaluation components in their Voluntary National Reviews (VNRs). VNRs are the primary platform at the United Nations for countries to share best practices, success stories, and common challenges in SDG implementation. This resolution builds on earlier work at the United Nations emphasizing the importance of evaluation, notably a 2014 resolution that committed to building evaluation capacity. Both the 2014 and 2023 resolutions were supported by EvalPartners, a partnership between the United Nations, Voluntary Organizations for Professional Evaluation (VOPEs), governments, parliamentarians, civil society organizations, development banks and other partners. It has been my privilege to chair the EvalPartners Task Force that supported this resolution with the help of diverse partners. The Task Force worked closely to support Nigeria and the other Member States that co-sponsored and supported the development of this resolution. The 2023 resolution represents a momentous shift in political will, with countries taking leadership of evaluation and making strong commitments to commissioning and using evaluation to support decision-making, policy formulation, development of national strategies and reporting on SDGs through VNRs. The resolution provides guidance for how countries can use evaluation to create effective and more equitable strategies, plans and policies. 
The resolution highlights the following key messages: Evaluation is instrumental in providing timely and credible evidence to regain and accelerate progress towards the SDGs. Countries are encouraged to present regular VNRs with a country-led evaluation component. Evaluative evidence can empower governments to improve decision-making for effective and more equitable strategies, plans and policies. Evaluation should include the full, equal and meaningful participation of all relevant stakeholders, including local governments, Indigenous Peoples, civil society organizations, academia, and the private sector. The adoption of resolution A/RES/77/283 is an important step in encouraging broad use of country-led evaluation to accelerate the SDGs, and we must all consider what we can each contribute to implementing the resolution. Bringing this resolution to life will rely on building new relationships and thinking creatively. There are some clear next steps: Raising awareness of the resolution among countries and stakeholders. Building an enabling environment for evaluation at the country level. Developing guidelines and tools for equity-focused and gender-responsive evaluation. Consolidating lessons learned and sharing experience and insights about how to integrate evaluation and evaluative evidence into VNRs. In order to focus effectively on the greatest contributions that can be made by each sector, there are some key roles that can make a difference: Leadership by national governments National governments hold the key to the successful implementation of evaluation. Governments will be most effective where they demonstrate the political will to prioritize establishing an enabling environment for evaluation, build evaluation capacities, foster partnerships with evaluation stakeholders, and embed evaluation mechanisms in their development plans and strategies. 
Governments have the power to focus evaluation on national priorities, use evaluation to inform evidence-based policymaking and improve and adapt policies and programmes, and direct their investments where they are most effective and impactful. All actors can play a role in advocating for governments to take leadership on evaluation. United Nations agencies as partners United Nations agencies could prioritize partnering with countries who wish to lead evaluations at the national level to support decision-making and use in their VNRs. This could include supporting countries to develop evaluation frameworks, building evaluation capacities throughout the government agencies with responsibilities related to evaluation, including planning, statistics, evaluation and reporting on VNRs, and fostering a culture of evidence-based decision-making within their government agencies and decision-making structures. Additionally, collaboration among United Nations agencies and evaluation partners is crucial for leveraging resources, knowledge sharing, and ensuring coordinated efforts towards sustainable development. Empowering evaluation commissioners Evaluation commissioners such as funders, government agencies and parliaments play a pivotal role in driving evaluation efforts. They have the opportunity to ensure that evaluation is structured to support national goals, and they can allocate sufficient resources and ensure that evaluation findings are integrated into decision-making processes and policy formulation. Additionally, evaluation commissioners can facilitate partnerships and collaborations to empower various stakeholders to strengthen evaluation systems and practices. 
Civil society at the core of evaluation VOPEs are essential stakeholders in promoting and supporting evaluation. They play a vital role in capacity building, knowledge sharing, and promoting evaluation practices at the national and sub-national levels. VOPEs will need to keep themselves aware of activities related to the resolution so that their leadership can ensure that VOPEs support and collaborate with others involved in evaluating the SDGs. Many other organizations, networks and initiatives are also joining the movement to advocate for or build the evaluation field, ensuring that evaluation is inclusive, seen through a lens of gender equity, and responsive to key social and environmental priorities. Besides fostering connections within and between these entities, it's crucial to empower and support young and emerging evaluators in leading the development of evaluation practices aligned with local and global values and priorities. In support of implementing the resolution, VOPEs, EvalPartners and its networks EvalYouth, EvalGender+, EvalIndigenous, the Global Parliamentarians Forum for Evaluation, EvalSDGs and other evaluation actors can convene evaluation actors to learn and share their experiences, and actively engage with national governments, United Nations agencies, and evaluation commissioners to advocate for and support the evaluation of SDGs and the inclusion of evaluation in the VNR process. Building the evaluation ecosystem EvalPartners launched a Global EvalAgenda in 2015 to provide a shared agenda for actors across sectors who wanted to contribute to building the evaluation ecosystem. 
There are many more actors in the ecosystem now than when EvalPartners first launched the Global EvalAgenda. We have the opportunity to learn more about the evolving ways that different organizations and networks are building capacity, developing practice and using evaluation. It is important at this time to acknowledge and value the contributions being made across the ecosystem toward our shared goals. I’m excited to see some of these big picture aspirations reflected in the renewed EvalAgenda currently in development and I hope that anyone who has an interest in evaluation is either part of the process or connects with the process to ensure that the coming EvalAgenda is a true reflection of the potential of our many roles and approaches. If those of us who have a stake in evaluation can align with this resolution and bolster the commitment of countries to evaluation through our advocacy and support to governments to incorporate evaluation into VNRs and use evaluation for decision-making, we have a significant opportunity to contribute to SDG acceleration. I am confident that, together, we can use this resolution to make a real difference in advancing towards the SDGs to improve the lives of people around the world. I urge you to be creative and consider what role you can play in implementing the resolution on “Strengthening Voluntary National Reviews through Country-led Evaluation." This blog was co-published on the EvalPartners website. Sarah Farina is Founder and Principal at Broadleaf Consulting. She chaired the EvalPartners Task Force supporting the recently adopted UN Resolution on evaluation co-sponsored by 24 Member States. She has served as Treasurer for the International Organization for Cooperation in Evaluation (IOCE) and EvalPartners and she is a past President of the Canadian Evaluation Society. Follow Sarah on Twitter and contact her via sfarina@broadleafconsulting.ca.

  • Eval4Action Newsletter #35

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • Evaluating the effectiveness of child protection systems: principles, approaches and methods

    By Rai Sengupta Ecorys UK Globally, governments and development institutions are gradually moving from ‘issue-based’ to ‘systems-based’ approaches in child protection. While the former focuses narrowly on specific child protection issues (often addressing the “low hanging fruit”), the latter looks to deliver child protection outcomes at scale, while involving a range of stakeholders at the national and sub-national levels. In essence, a child protection system is a structural framework consisting of “human resources, finance, laws and policies, governance, monitoring and data collection as well as protection and response services and care management” (UNICEF, 2012). This blog examines ways in which the effectiveness of child protection systems can be innovatively evaluated, by focusing on the principles and methods that can be employed to assess the extent to which child protection systems meet their desired objectives. Evaluation of child protection systems is crucial, for it generates vital evidence to inform policy and programming, in pursuit of SDG 16.2 (protect children from abuse, exploitation, trafficking, and violence). How can we evaluate the effectiveness of child protection systems? The OECD Development Assistance Committee (DAC) highlights six key criteria for evaluating interventions, one of which is effectiveness. Effectiveness is defined as the extent to which an intervention meets its intended objectives, the process through which this was accomplished, and the factors that influence intended and unintended consequences. Evaluating effectiveness is essential for improving programme performance through subsequent course correction (OECD, 2021). Evaluating whether an entire child protection system – comprising a range of individuals and institutions – is effective is a complex task. At one level, it requires clarity on intended objectives and measures of effectiveness across various sub-systemic components. 
Further, it requires analysis of outcomes achieved for all sub-groups of children, not just vulnerable children (Wessells, 2014). The following sections highlight evaluation principles and methods through which the effectiveness of child protection systems can be evaluated. Evaluation principles A child protection system consists of a range of services including ‘evidence-based programmes, practices, processes and workforce development’ (Molloy et al., 2017). Rigorously evaluating the effectiveness of such a complex, multidimensional system requires the perspectives of various stakeholders to be incorporated. This can be accomplished through a participatory evaluation. Participatory evaluation involves the relevant stakeholders in a policy/programme across various stages of the evaluation process - from developing the evaluation framework to data collection and validation of findings and recommendations to facilitating the use of the evaluation. In the context of evaluating child protection systems, it is imperative to promote participatory methodologies, wherein administrators, practitioners, and beneficiaries engage throughout the evaluation process to provide inputs, validate findings and co-develop policy recommendations (INTRAC, 2017). Adopting a participatory approach is likely to improve the quality of information gathered. Further, such an inclusive evaluation design is more likely to provide true estimates of systemic effectiveness. For instance, public records on child protection are likely to contain outcomes associated with the establishment of child protection infrastructure (for instance, child welfare committees). However, direct and indirect beneficiaries (for instance, the children and their families/communities) would be more reliable sources of information on child welfare outcomes (Joynes and West, 2018). 
Analyzing only the first kind of outcomes would provide a superficial impression of systemic effectiveness, while triangulating both forms of evidence through a participatory process would ensure more reliable estimates. The need to include children’s voices in evaluating child protection systems resonates across various child rights institutions. UNICEF (2021) asserts that, as service users, children’s feedback and complaints must be incorporated in any assessment of effectiveness across child protection systems. This is vital since pre-existing power dynamics may mask the perspectives of children, while underestimating the evidence gleaned from them. In a similar vein, Save the Children (2019) highlights the need for evaluations to reflect children’s voices, while disaggregating outcomes by gender, ethnicity, and race. This is important for assessing the distribution of intended and unintended outcomes across various socio-economic and demographic categories. Evaluation methods The methodology involved in evaluating the effectiveness of child protection systems is diverse and highly context specific. However, evaluating effectiveness in most cases begins with examining and/or reconstructing the theory of change – the theoretical pathways linking the various sub-systemic components to their intended outcomes. Doing so helps to develop consensus on ‘what good looks like’ (Molloy et al., 2017) - which sets the expectation on what results are to be measured in any effectiveness analysis. In the development of indicators and targets for each outcome, adherence to global conventions may be a useful starting point. 
For instance, the Convention on the Rights of the Child (CRC) provides a comprehensive list of principles underpinning effectiveness in a child protection system, which can help frame targets and measure systemic effectiveness across logical frameworks (Bruning and Doek, 2021). Further, measuring effectiveness for an entire child protection system requires understanding how effectiveness may best be captured for all sub-systemic components. For instance, ‘singling out’ the effect of a specific intervention (say, the establishment of a children’s helpline number) may involve the conduct of a Randomized Controlled Trial (RCT), wherein judgements of effectiveness are based on quantitative outcomes, measured across treatment and control groups. However, for components such as practices to enhance parent sensitivity, or laws enacted to promote child rights, effectiveness measurements may require the inclusion of qualitative insights as well. Building upon the participatory and consultative evaluation design discussed above, effectiveness at a systemic level must inevitably leverage evidence not only from various stakeholders, but also of different types. In essence, any evaluation at the systems level requires a mixed-methods approach, to account for the forms of data available and the formats they are available in. A mixed-methods approach combines qualitative and quantitative evidence in data collection and analysis, thereby building on the strength of each data type, while minimizing the weaknesses associated with any one form of evidence. Such an approach, which may be sequential, parallel or multilevel, also lends support to the practice of data triangulation, thereby generating more robust estimates of systemic effectiveness (UNICEF, 2017). 
Conclusion In sum, evaluating the effectiveness of a child protection system requires a multiplicity of stakeholder perspectives and methodologies. Adopting participatory principles and mixed-methods analysis is key, particularly for evaluating complex sub-systemic components within this sector. Going forward, it is imperative to develop a common understanding (and a common set of indicators) to evaluate child protection systems globally. Further, leveraging partnerships with development partners, and enhancing government capacity to manage evidence will be crucial for facilitating systemic evaluations. Bibliography Bruning, M. & Doek, J. (2021). Characteristics of an Effective Child Protection System in the European and International Contexts. International Journal on Child Maltreatment: Research, Policy and Practice. Child Protection Systems Task Group. (2019). Strengthening Child Protection Systems: Guidance for Country Offices. Save the Children. INTRAC. (2017). Participatory Evaluation. Author. Joynes, C., & West, H. (2018). Evidence of Effective Child Protection Systems in Practice. Molloy et al. (2017). Improving the Effectiveness of the Child Protection System. Local Government Association. OECD. (2021). Applying Evaluation Criteria Thoughtfully. Author. Wessells, M. (2014). The Case for Population Based Tracking of Outcomes for Children Toward a Public Health Approach in Child Protection System Strengthening. Columbia University. UNICEF East Asia & Pacific. (2012). Measuring and Monitoring Child Protection Systems. Author. UNICEF. (2017). Child Protection Resource Pack: How to Plan, Monitor and Evaluate Child Protection Programmes. Author. UNICEF. (2021). The UNICEF Child Protection Systems Strengthening Approach. Author. Rai Sengupta, a young and emerging evaluator, brings over five years of international experience in monitoring and evaluating large scale development programmes. 
She presently works as a Senior Monitoring and Evaluation Consultant at Ecorys UK, an international development consulting firm in London, where she is part of evaluation teams for large-scale development programmes across India, Viet Nam, Nigeria, and Zimbabwe. Rai holds an MSc in Evidence-Based Social Intervention and Policy Evaluation from the University of Oxford, where she was a fully funded Weidenfeld Hoffmann Scholar. Follow Rai on Twitter and LinkedIn.

  • Eval4Action Newsletter #34

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • Eval4Action Newsletter #33

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.

  • Call to contribute to the uptake of Youth in Evaluation standards

    Developed by six intergenerational and multi-stakeholder task forces and launched during Youth in Evaluation week in April 2023, the standards for enhancing meaningful youth engagement in evaluation target six stakeholder groups: academia, governments, international organizations, the private sector, VOPEs, and youth organizations. The task forces are advancing further uptake of the standards by mobilizing the wider evaluation community and raising awareness of the standards. They also provide guidance for the self-assessment of the standards and identify Youth in Evaluation champions. Quick overview of the standards Building momentum through partnerships is key to the wide adoption of the standards. The task forces and Eval4Action co-leaders invite interested stakeholders, including Eval4Action partners, regional evaluation leaders and young and emerging evaluators, to join a relevant stakeholder task force. If you are interested in guiding, contributing to, and supporting the roll-out of the standards, write to contact@eval4action.org. If you would like to express your organization’s interest in using the standards, please fill in this form.

  • Eval4Action Newsletter #32

    Read updates on the campaign activities and news from partners around the world. If you would like to receive the newsletter directly in your inbox, sign up to receive Eval4Action updates here. As an individual advocate or a partner network, if you have news or information to share with the Eval4Action community, please write to contact@eval4action.org.
