Evaluation 2019: Paths to the Future of Evaluation: Contribution, Leadership, and Renewal


Routine Data Wants to Rule the World (of Evaluation)

Session Number: 2639
Track: International and Cross Cultural Evaluation
Session Type: Panel
Tags: Africa, International Health Evaluation, Routine data
Session Chair: Jessica Fehringer [Associate Director - Data for Impact]
Discussant: Erin Luben [Acting Director - Data for Impact (D4I)]
Presenter 1: Elizabeth Gatlin Sutherland [Sr. Advisor - Health Areas, MEASURE Evaluation - Carolina Population Center]
Presenter 2: Emily Weaver
Presenter 3: David Hotchkiss [Professor - Tulane University]
Presentation 1 Additional Author: Stephanie Watson-Grant [Deputy Director, Field Operations - MEASURE Evaluation]
Presentation 1 Additional Author: Khou Xiong [Research Associate - MEASURE Evaluation - UNC Chapel Hill]
Presentation 2 Additional Author: Milissa Markiewicz
Time: Nov 13, 2019 (05:45 PM - 06:30 PM)
Room: CC M100 H

Abstract 1 Title: Using routine health information system data for evaluations: Opportunity and Lessons
Presentation Abstract 1:

With resources for evaluation often constrained, and with many countries’ national health management information systems (HMIS) still maturing, the data these systems produce are an untapped, invaluable resource for evaluation. National-level health data exist in most developing countries, and these data are increasingly available at the sub-national level as well, so routine data can readily be disaggregated by data element and geography. However, there are challenges with data access and data quality, as well as concerns about routine data’s internal validity and about in-country capacity to connect service delivery realities to evaluation methodologies and analysis. Nonetheless, opportunities exist for more sophisticated longitudinal analyses to complement the cross-sectional summaries that are currently the norm. This presentation summarizes our practical experience using routine data sources in developing country contexts and highlights evaluation designs that are well suited to routine data.


Abstract 2 Title: Using performance monitoring data in a retrospective evaluation: The good, the bad, and the ugly
Presentation Abstract 2:

The Tibu Homa Project, which focused on improving the quality of healthcare for children aged five and under in Tanzania, had collected a wealth of performance monitoring data that was considered for use in the retrospective evaluation. Specifically, the project had extracted thousands of data points from health registers and patient records during monthly visits to health facilities. As a result, the project database contained data that were reported regularly into the routine health information system, plus additional relevant data from facility registers and patient records. Unfortunately, due to problems with the data format, missing information, questions about data quality, and the needs of the evaluation, additional primary data collection was necessary. The challenges and benefits considered when designing the evaluation will be described during the presentation. Lessons learned will also provide insight into planning for the use of routine data in evaluation from the outset of project activities.


Abstract 3 Title: Using Routine Health Information Systems to Evaluate the Impact of Health Systems Strengthening Projects: An Example from the Democratic Republic of Congo
Presentation Abstract 3:

Despite routine health information systems (RHISs) being in place in nearly every country, RHIS data are regularly overlooked for evaluating the causal effects of health systems strengthening programs due to concerns about completeness, timeliness, representativeness, and accuracy. However, RHIS data have a number of advantages, including numerous repeated observations over extended periods of time and a wide range of real-time indicators of service coverage and utilization, which allow researchers to use research designs that are more powerful and less costly than those that rely on intermittent population-based household surveys. In this study, RHIS data from the Democratic Republic of the Congo (DRC) are used to evaluate the impact of the USAID-funded Integrated Health Program on the delivery of maternal and child health and family planning services. The specific research design used is the synthetic control method (SCM), an increasingly popular method for policy evaluation. The SCM constructs a synthetic control region that simulates what the outcome path of the treated region would have been had it not received the intervention. Preliminary results are presented, along with a discussion of the advantages and disadvantages of using RHIS data to evaluate health systems strengthening programs in the DRC.
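For orientation, a minimal sketch of the standard synthetic control estimator is given below. The notation (treated region indexed 1, donor regions j = 2, ..., J+1, intervention beginning after period T_0) is generic and is not taken from the presentation itself:

\[
\hat{\tau}_{1t} \;=\; Y_{1t} \;-\; \sum_{j=2}^{J+1} w_j^{*}\, Y_{jt}, \qquad t > T_0 ,
\]

where the weights satisfy \(w_j^{*} \ge 0\) and \(\sum_{j=2}^{J+1} w_j^{*} = 1\) and are chosen so that the weighted donor pool matches the treated region's pre-intervention outcomes and covariates as closely as possible. The weighted sum \(\sum_j w_j^{*} Y_{jt}\) is the "synthetic" control region referred to in the abstract, and \(\hat{\tau}_{1t}\) is the estimated program effect in post-intervention period t.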

 


Presentation 3 Other Authors: Janna Wisniewski, Paul-Samson Lusamba-Dikassa, Charles Stoecker, Eric Mafuta, Patrick Kayembe, Leslie Craig, Eva Silvestre, Tory Taylor, Tulane University, MEASURE Evaluation
Audience Level: Intermediate, Advanced

Session Abstract (150 words): 

Leading international health organizations are steering away from primary data collection in evaluation in favor of using routine and other existing data. But can routine data live up to the hype? What can it contribute to the future of evaluation? Can we really trust it? Can we access it? Can it answer stakeholders’ key evaluation questions? MEASURE Evaluation and Data for Impact, both USAID-funded projects, will address these questions and more in this thought-provoking session. Presenters will share experiences from specific program evaluations using routine national health management information system (HMIS) data in the Democratic Republic of the Congo and program data in Tanzania, as well as lessons drawn more generally from multiple MEASURE Evaluation activities that use routine data in evaluation. Participants will gain a greater understanding of the benefits and pitfalls of routine data use in evaluation so that they can make better-informed choices of evaluation designs moving forward.


