Making monitoring more evaluative: Tales from the US and Brazil

Session Number: 1791
Track: Collaborative, Participatory & Empowerment Evaluation
Session Type: Multipaper
Tags: Brazil, collaborative evaluation, Evaluative monitoring, monitoring & evaluation, participatory evaluation
Session Chair: Thomaz Kauark Chianca [Managing Partner - COMEA Relevant Evaluations]
Discussant: Michele Tarsilla [Independent Evaluator and Capacity Development Specialist - Independent Evaluation Consultant]
Presenter 1: Rita O'Sullivan [Univ of NC EvAP]
Presenter 2: Jenifer Corn [Director of Evaluation Programs - Friday Institute for Educational Innovation]
Presenter 3: Ana Lucia D'Imperio Lima [Executive Director - Instituto Paulo Montenegro]
Presentation 3 Additional Author: Mônica Pinto [Roberto Marinho Foundation]
Time: Oct 27, 2016 (11:00 AM - 11:45 AM)
Room: A705

Abstract 1 Title: Using Collaborative Evaluation Strategies to Transcend Monitoring with U.S. Federally Funded International Area Studies Centers.
Presentation Abstract 1:

Currently the U.S. Department of Education funds 269 grants, totaling $63,354,605, to institutions of higher education to strengthen the capacity and performance of American education in foreign languages, international and area studies, teacher preparation, and international business education. Required online monitoring of these four-year projects includes initial performance measure forms (PMFs) with annual updates, in addition to more narrative semi-annual progress reports. Six funded area studies centers at one university (Africa, Latin America, Asia, Middle East, Europe, and Global Initiatives) banded together to work collaboratively on their evaluation efforts. They engaged an external evaluation group on campus to assist them with their monitoring requirements, but also to help them go beyond those requirements to evaluate what they considered other essential outcomes. External evaluators used collaborative evaluation strategies to accomplish this. This paper will share the progress and process after two years.


Presentation 1 Other Authors: Fabiola Salas Villalobo, Univ. of North Carolina EvAP, fasavi@live.unc.edu, 919 843-7878
Abstract 2 Title: Building Organizational Capacity to Implement Innovations in Education through Collaborative Evaluator-Practitioner Partnerships.
Presentation Abstract 2:

Design-based implementation research (DBIR) is an emerging methodology that helps develop the capacity of entire systems to implement, scale, and sustain innovations in education. The Friday Institute Research and Evaluation (FIRE) team is using a DBIR approach to organize existing and new evaluations in ways that promote effective, equitable, and sustainable improvements in education. DBIR evaluators work closely with practitioners to understand innovation implementation and adaptation through rapid iterations of small changes. Through these rapid cycles of iteration, evaluation collaborations can “produce dramatic change through the accumulation of many small improvements.” This paper will highlight the application of this approach with several clients to facilitate thinking about key aspects of the quality, value, and importance of whatever is being monitored, and to create a system that not only gathers relevant evidence but also combines it to reach evaluative conclusions.


Abstract 3 Title: Using rubrics to monitor coverage, relevance and impact of a Brazilian educational TV channel.
Presentation Abstract 3:

How can a social communication project that aims at social transformation and disseminates educational content on TV, on the web, and through capacity-building seminars be monitored? This was the challenge faced by “Canal Futura”, an educational TV channel with national coverage in Brazil. After an extensive literature search, they realized that: a) it is not enough to limit monitoring to quantitative audience indicators, and b) the definition of impact and how to measure it must reflect the initiative’s Theory of Change. Instead of just gathering descriptive data on quantitative indicators, they defined key aspects related to the quality and importance of what they do that need to be monitored. They devised a system combining information from audience monitoring and from assessments of specific programs, organized into four dimensions. Evaluation rubrics were used to clarify the evidence to be gathered and to reach evaluative conclusions about the social changes produced by Canal Futura.


Presentation 3 Other Authors: Lúcia Araújo, General Manager, Futura Channel, lucia@futura.org.br
Audience Level: All Audiences

Session Abstract: 

 

A common problem with monitoring systems is the excessive collection of information on quantitative indicators, presented descriptively without any effort to synthesize conclusions. Furthermore, monitoring requirements are usually handed down by funders without the participation of program staff. Program managers are often frustrated with such systems: they spend too much time filling out detailed monitoring forms, and the information they receive back is too limited to help them make strategic decisions. This session will share three very different examples from the US and Brazil of how monitoring requirements can be managed to foster greater participation and capacity building of stakeholders around evaluation. It provides examples of efforts that made monitoring systems more evaluative and, therefore, more relevant and helpful to decision makers.