Whither the Impact Evaluation? Experiences and Gaps Using Data Sources and Systems for Program Evaluation in Low- and Middle-Income Countries

Session Number: 2844
Track: Design and Analysis of Experiments
Session Type: Panel
Tags: Impact evaluation, Plausibility evaluation, RealWorld Evaluation, secondary data analysis
Session Chair: Heidi Reynolds [Director for Evaluation - MEASURE Evaluation]
Discussant: Heidi Reynolds [Director for Evaluation - MEASURE Evaluation]
Presenter 1: Gustavo Angeles [Research Assistant Professor - Carolina Population Center / University of North Carolina at Chapel Hill]
Presenter 2: Emily A Bobrow [Senior Technical Advisor for Evaluation and Learning - MEASURE Evaluation]
Presenter 3: Zulfiya Charyeva [Senior Research Analyst - MEASURE Evaluation, Palladium]
Presentation 1 Additional Author: Peter Michael Lance, Economist [Research Associate-Evaluation Specialist - Carolina Population Center]
Presentation 1 Additional Author: Jessica Fehringer [Senior Technical Specialist - Evaluation and Gender - MEASURE Evaluation]
Presentation 2 Additional Author: Valerie L Flax [Assistant Professor - University of North Carolina at Chapel Hill]
Presentation 3 Additional Author: Stephanie Mullen [Senior Evaluation Advisor - John Snow, Inc.]
Presentation 3 Additional Author: Sian Louise Curtis [Research Associate Professor - UNC]
Presentation 5 Additional Author: Heidi Reynolds [Director for Evaluation - MEASURE Evaluation]
Session Facilitator: Heidi Reynolds [Director for Evaluation - MEASURE Evaluation]
Time: Nov 08, 2017 (06:15 PM - 07:15 PM)
Room: PARK TWR STE 8209

Abstract 1 Title: Tanzania PS3 Impact Evaluation
Presentation Abstract 1:

PS3 is a USAID-funded project in Tanzania designed to strengthen the health system to provide good-quality services. The USAID-funded MEASURE Evaluation is evaluating the PS3 project and hypothesizes that PS3’s interventions will strengthen the financial system, improve the availability of human resources, and improve the delivery of health services. Evaluating the impact of systems-level interventions is a challenge, in part because the systemic nature of the intervention makes it difficult to find a control group against which to assess impact. To evaluate PS3’s impact, the evaluation’s innovation is to use routinely collected data to construct “synthetic” controls from pre-treatment information and to employ the synthetic control method of impact evaluation. This strategy uses a data-driven approach to identify a weighted combination of administrative units not exposed to the program that “synthetically” recreates the unit that does receive the system-level intervention.
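The weight-finding step described above can be sketched as a constrained least-squares problem: choose nonnegative donor weights summing to one that best reproduce the treated unit's pre-treatment trajectory. The sketch below uses made-up numbers and a deliberately simplified formulation (matching on pre-treatment outcomes only); it is illustrative, not the PS3 evaluation's actual specification.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical pre-treatment outcome series (e.g., a service-delivery
# indicator over 4 pre-intervention periods). All values are invented.
treated = np.array([10.0, 12.0, 11.0, 13.0])   # unit receiving the intervention
donors = np.array([                            # 3 untreated administrative units
    [8.0,  9.0,  9.5, 10.0],
    [12.0, 14.0, 13.0, 15.0],
    [11.0, 11.5, 11.0, 12.5],
])

def synthetic_control_weights(treated, donors):
    """Find nonnegative donor weights summing to 1 that best reproduce
    the treated unit's pre-treatment trajectory (least squares)."""
    k = donors.shape[0]
    loss = lambda w: np.sum((treated - w @ donors) ** 2)
    result = minimize(
        loss,
        x0=np.full(k, 1.0 / k),                     # start from equal weights
        bounds=[(0.0, 1.0)] * k,                    # weights in [0, 1]
        constraints={"type": "eq",                  # weights sum to 1
                     "fun": lambda w: w.sum() - 1.0},
    )
    return result.x

w = synthetic_control_weights(treated, donors)
synthetic = w @ donors  # the "synthetic" control's pre-treatment trajectory
```

Post-treatment, the program's impact is estimated as the gap between the treated unit's observed outcomes and the trajectory implied by these fixed weights applied to the donors.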


Presentation 1 Other Authors: Lisa Calhoun, MEASURE Evaluation
Abstract 2 Title: Use of Secondary Data Analysis to Assess the Contribution of Nutrition to the Global 90-90-90 HIV Treatment Targets
Presentation Abstract 2:

Nutrition is an integral part of the comprehensive global HIV response. Strategic investments have been made to integrate nutrition assessment, counseling, and support (NACS) activities in routine health service delivery and in the HIV treatment cascade at the facility and community levels in multiple countries. Recent systematic reviews reveal gaps in the evidence base. This presentation describes how the USAID- and PEPFAR-funded MEASURE Evaluation is identifying, combining, and analyzing secondary data from research studies in order to identify and fill gaps related to the role of nutritional status and nutrition interventions in the successful implementation of HIV services across the care continuum. We focus on outcomes related to the targets set by the Joint United Nations Programme on HIV/AIDS (UNAIDS) and PEPFAR: i.e., 90 percent of PLHIV will know their status, 90 percent of those diagnosed will receive ART, and 90 percent of those on ART will achieve viral suppression.


Presentation 2 Other Authors: Charlotte Lane, MEASURE Evaluation
Linda Adair, Gillings School of Global Public Health, University of North Carolina
Abstract 3 Title: Impact Evaluation of a Project in Ukraine to Strengthen Control of Tuberculosis
Presentation Abstract 3:

A study in Ukraine by the USAID- and PEPFAR-funded MEASURE Evaluation is using a mixed-methods approach with a quasi-experimental quantitative evaluation design complemented by qualitative descriptive work to inform the findings. The evaluation examines the impact of providing social support services to improve adherence to tuberculosis (TB) treatment and of improving the integration of TB and HIV services to reduce mortality among co-infected patients. Mixed data collection methods of medical facility surveys, provider interviews, small-group discussions, and patient chart extraction are used to answer the study's research questions. For the quantitative component, we relied on retrospective extraction of more than 9,500 patients’ medical records. This presentation will discuss our experiences using medical records data for the evaluation, including the advantages and limitations of this data source for evaluation.


Abstract 4 Title: Measuring the Impact of National-Scale Malaria Interventions Using the Plausibility Approach: What Story Can Be Told and What Cannot?
Presentation Abstract 4:

The USAID-funded MEASURE Evaluation—in collaboration with Roll Back Malaria’s Monitoring and Evaluation Reference Group and with support from the President’s Malaria Initiative—is using a plausibility approach that combines multiple data sources, including survey and routine data, to evaluate malaria interventions and child mortality. We have completed studies in 11 countries; three are ongoing. The approach measures trends in intervention scale-up, morbidity and mortality, and other factors to assess whether it is plausible to conclude that the scale-up of interventions resulted in a decline in malaria-related outcomes. This approach allows evaluation where interventions are at national scale and the possibilities for a counterfactual group are limited. However, the approach has challenges related to data availability, changes in malaria control policies and indicators, and limited data on direct measures of malaria mortality and on other health interventions contributing to the outcome. We highlight strengths and challenges and discuss ways to strengthen the approach.


Presentation 4 Other Authors: Yazoume Ye (Presenter), MEASURE Evaluation
Samantha Herrera, MEASURE Evaluation
Ismael Nana, MEASURE Evaluation
Tajrina Hai, MEASURE Evaluation
Abstract 5 Title: The Contribution of a Functional Health Information System to Program Evaluation
Presentation Abstract 5:

All of the data sources discussed in this session—routinely collected district-level data, client data, survey data, etc.—contribute to functioning national health information systems (HIS). In low- and middle-income countries, many impact evaluations are commissioned because countries’ HIS do not have the data sources to answer questions about program impact. While eliminating the need for primary evaluation data collection is not the goal, a strong HIS that produces good-quality, timely data across the 12 data sources can decrease the need for special evaluation studies and increase the efficiency of HIS investments. This presentation will discuss developments in strengthening HIS to supply quality and timely information about program effectiveness in target populations, describe what an HIS needs in order to help countries evaluate their programs, and describe the current limitations of HIS in meeting this information demand.


Presentation 5 Other Authors: Manish Kumar (Presenter), MEASURE Evaluation
Theme: Learning to Enhance Evaluation
Audience Level: All Audiences

Session Abstract (150 words): 

No single evaluation technique or design yields all the information needed for decision making or can overcome all the challenges encountered in program implementation. Impact evaluations (IEs), in particular, have been criticized for their high cost and the lack of timeliness of their results, despite their power to yield valid information about program effectiveness at the population level. These criticisms are not always fair, but if they lead evaluation funders to abandon support of IEs, the result will be gaps in methods and approaches that yield information about program effectiveness in real-life settings, hindering choices among programs and decisions about program scale-up. Innovations in electronic data systems are increasing in number and contributing timely data, but they encounter data quality challenges and questions about cost-effectiveness. Presentations in this session will share learning from the USAID- and PEPFAR-funded MEASURE Evaluation’s use of innovative evaluation approaches to provide rigorous results and create efficiencies by using existing data sources.