Evaluation 2017: From Learning to Action


What we’ve learned from putting Adaptive Learning and Experimentation into practice

Session Number: 2192
Track: Organizational Learning & Evaluation Capacity Building
Session Type: Panel
Tags: Accountability, adaptive evaluation, citizen engagement, Education, Evaluation Impact, Governance, Health, literacy, Mobile Apps for Evaluation, monitoring & evaluation, organizational learning, pneumonia, rapid cycle evaluation
Session Chair: Jean Arkedis [Director - Results for Development]
Presenter 1: Melissa Marie Chiappetta [Director of the Center for International Evaluation - Abt Associates]
Presenter 2: Cammie Lee [Program Director / Tanzania Country Director - Results for Development Institute (R4D)]
Presenter 3: Luke Heinkel [Senior Program Officer - Results for Development]
Presenter 4: Jessica Creighton [Assistant Director, Transparency for Development - Harvard Kennedy School]
Time: Nov 08, 2017 (04:30 PM - 06:00 PM)
Room: Madison A

Abstract 1 Title: Leveraging collaboration to scale and share adaptive learning
Presentation Abstract 1:

USAID launched the MERLIN initiative to develop innovative tools and improve existing processes for monitoring, evaluation, research, and learning in international development projects. The Rapid Feedback MERL Consortium (led by R4D, in partnership with Mathematica, Abt Associates, and the Notre Dame Initiative for Global Development) applies proven evaluation methods to test the effectiveness of specific components of an activity against alternative intervention options. This is done in rapid cycles to provide timely feedback and allow course adjustment earlier than is typically possible with standard methods. The first pilot of this approach is in partnership with Family Care First (FCF) in Cambodia, which is applying the Rapid Feedback MERL approach to test interventions intended to improve child protection. This presentation will share lessons learned from applying the approach with several implementers in Cambodia and from other pilots in South Asia and East Africa.


Abstract 2 Title: Saving lives from pneumonia: using monitoring, evaluation, and operational research to scale up cost-effective programming
Presentation Abstract 2:

R4D launched a new pneumonia treatment scale-up program in Tanzania, working with government partners to ensure nationwide distribution and stocking of amoxicillin dispersible tablets in public sector facilities. The main focus is on using robust program monitoring, evaluation, and operational research techniques to determine how to increase access to, and appropriate targeting of, amoxicillin dispersible tablets for all pneumonia-affected children in Tanzania. Using multiple rounds of facility surveys, R4D is helping the Ministry of Health not only monitor whether these upstream efforts lead to increased downstream availability in health facilities across the country, but also analyze the data to understand why. The program also anticipates that scaling up access to medicine alone is not sufficient to avert pneumonia deaths cost-effectively; running a cost-effective program requires appropriately diagnosing children with pneumonia so that those who need treatment receive it and drugs are not wasted on those who do not. Formative research will estimate the rates of misdiagnosis and their causes, and the team will attempt to design, pilot, assess, and scale up potential interventions to improve diagnosis rates and the quality of care.


Abstract 3 Title: How and when to get “lean” feedback on program implementation and effectiveness
Presentation Abstract 3:

The question of how and when to generate meaningful feedback through adaptive learning techniques is a key consideration in designing successful engagements. This presentation will focus on two projects that have undergone multiple rounds of feedback (what we call “Learning Checks”), and how these checks can help set a program up for successful scaling. Rising Academy Network (RAN), launched in 2014, is a high-quality, low-cost private network of junior secondary schools operating in Sierra Leone. RAN program managers and R4D have been jointly testing three interventions to improve literacy outcomes for the schools’ struggling readers. Second, R4D has been working closely with Worldreader and other partners over the last year to increase the practice of reading to children in India through the Read to Kids program. R4D will share lessons learned from adapting this program based on tracking both community outreach activities and the use of a mobile phone application.


Abstract 4 Title: Iterative piloting for smarter transparency and accountability program design
Presentation Abstract 4:

Led by the Harvard Kennedy School and the Results for Development Institute, the Transparency for Development (T4D) program investigates whether well-designed transparency and accountability (T/A) interventions improve health outcomes, and under what conditions. The overarching goal is to generate actionable and rigorous evidence for practitioners, researchers, and other stakeholders working to improve health, accountability, and citizen participation, using an intervention co-design process (with local CSO partners) and a mixed-method evaluation (a randomized controlled trial and a qualitative evaluation). This presentation will focus on the experience and early lessons from co-designing the intervention through iterative piloting across multiple countries and stages, a process that ultimately led the research and implementation team to a community-led scorecard that is currently being evaluated at scale in Tanzania and Indonesia.


Theme: Learning What Works and Why
Audience Level: All Audiences

Session Abstract (150 words): 

Over the past several years, the development community has increasingly appreciated the need for adaptive approaches to monitoring, evaluation, and learning. However, less is known about how to design and implement Adaptive Learning approaches that meaningfully contribute to program success. When should programs generate feedback, and when should they iterate? What methods can they use given short time frames and limited resources? When is a program “ready” for an impact evaluation rather than further formative research and iteration? How can lessons learned from programs be scaled and shared with others iterating on similar design and implementation challenges? This presentation will draw on several Adaptive Learning projects to discuss key considerations such as these for putting the approach into practice.


