Professionalizing Monitoring and Evaluation - Experiences on Certification and Competencies: In practice, not just in theory!

Session Number: 1975
Track: Organizational Learning & Evaluation Capacity Building
Session Type: Panel
Tags: Capacity Development
Session Chair: Gabriela Perez-Yarahuan [Professor - CIDE (Centro de Investigación y Docencia Económicas)]
Discussant: Urmy Shukla
Presenter 1: Lisandro Martin [Senior Portfolio Manager - International Fund for Agricultural Development]
Presenter 2: Juan Pablo Gutierrez, PhD [Researcher - National Institute of Public Health]
Presenter 3: Candice Morkel [Senior Technical M&E Specialist - Centre for Learning on Evaluation and Results (CLEAR) - Anglophone Africa]
Presenter 4: Esteban Tapella [Professor - Universidad Nacional de San Juan & ReLAC]
Presentation 1 Additional Author: Raniya Khan
Presentation 2 Additional Author: Hemali Kulatilaka [Senior Technical Specialist - Capacity Building - MEASURE Evaluation]
Time: Nov 10, 2017 (01:45 PM - 03:15 PM)
Room: PARK TWR STE 8211

Abstract 1 Title: Learning about what works in Evaluation Capacity Development: Establishing a Global Training and Certification Framework on M&E in rural development


Presentation Abstract 1:

The International Fund for Agricultural Development (IFAD) and the Centres for Learning on Evaluation and Results (CLEAR) are partnering to establish a global M&E training and certification scheme with a sectoral focus on rural development, in order to build in-country capacities. Following our earlier AEA presentation on the design of this programme, and as implementation evolves, this presentation aims to inform participants about "what works" in developing a sector-specific M&E curriculum and certification framework. It will cover the curriculum development process and how to overcome the divide between 'monitoring' and 'evaluation' through training modules focused on evidence-based decision making in rural development. Sharing and teaching good evaluation practices aims to close data gaps that are fundamental to improving evidence-informed decision making in development practice. Learning from the IFAD-CLEAR experience could help others assess the added value of M&E training and certification and eventually develop similar schemes.


Abstract 2 Title: Harmonizing evaluation education across continents: GEMNet-Health

Presentation Abstract 2:

Using a participatory approach involving a set of institutions around the world (and thus a global perspective) together with the specific views of low- and middle-income countries, GEMNet-Health, with support from USAID's MEASURE Evaluation project, has developed a set of core competencies that are expected to be covered in postgraduate evaluation courses in public health faculties. Reaching consensus on those competencies, organized around five core themes (characteristics of evaluation, evaluation theory, evaluation design and methods, practical considerations and ethical implications, and communication of results), first required agreeing on a common approach to evaluation, which was developed among the partner institutions. Challenges remain in translating those competencies into course content and in filling gaps in content not currently offered by GEMNet-Health members.


Abstract 3 Title: Lessons learned in collaborative competency and curriculum development on the African Continent: a case of South Africa, Uganda and Benin
Presentation Abstract 3:

CLEAR-AA has partnered with the governments of Uganda, South Africa and Benin in an M&E peer-to-peer learning programme (Twende Mbele). Its Collaborative Curriculum Development project responds to the lack of consensus on the African continent around the competencies required of M&E practitioners. Very few training providers use a competency framework to design their training programmes, so Evaluation Capacity Building (ECB) is fragmented, contributing to inconsistent levels of capacity to do M&E. Lessons learned from the evolution of a collaborative competency- and curriculum-building process, built on consensus and peer review, will provide insights into the development of a more cohesive capacity-building agenda for the African continent. A heightened focus on rigour and quality may increase demand for higher standards in monitoring and evaluation, including the use of evaluators and government M&E staff with a recognized set of competencies.


Abstract 4 Title: Evaluation Standards for the LAC Region: from theory to practice
Presentation Abstract 4:

To contribute to the development of a common framework for evaluation, professional training and practice, to facilitate communication, and to promote an evaluation culture, ReLAC convened an open forum (2015-2016) to create its own set of Evaluation Standards, rooted in the Latin American and Caribbean region. ReLAC is now disseminating these standards and promoting their use. Since the Evaluation Standards were published in September 2016, members have been circulating the document within their national networks and stimulating discussion of its content in conferences, workshops, and virtual courses and forums.

This presentation will motivate discussion of what we have learnt from the process of using, adopting and adapting the standards in the LAC region: what works and what should be reformulated. Adoption requires deliberation and mutual agreement, considering the plurality of stakeholders involved in the field of evaluation: evaluators, policy makers, programme managers, participants in the programmes and projects evaluated, and others.


Theme: Learning to Enhance Evaluation
Audience Level: All Audiences

Session Abstract (150 words): 

Organizations across the world have taken systematic and formal steps to promote the professionalization of evaluation practice. There are many valid questions and debates around professionalization: its value, options for implementation, and more. Recently, organizations working in international development have been leading, testing, and learning from their professionalization efforts. The ultimate aim of these efforts is to build evaluation capacity that promotes more and better evidence for decision-making. A professional evaluation practice usually entails structuring expected behaviors, developing competency frameworks, providing useful and high-quality training, and implementing clear, structured and reliable assessments of learning. Panelists working in evaluation capacity development (ECD) projects across Africa, Asia and Latin America will share their experiences doing this work. The session's aim is to learn what others have done, to draw lessons from these experiences, and to discuss how to move ahead with 'action coming from this learning' (AEA's 2017 theme).