The Promise and Pitfalls of Using Administrative Data for Program Evaluation

Session Number: 2217
Track: Government Evaluation
Session Type: Panel
Tags: government evaluation
Session Chair: Stephanie Shipman [Assistant Director - U.S. Government Accountability Office]
Presenter 1: Stephanie Shipman [Assistant Director - U.S. Government Accountability Office]
Presenter 2: R F Boruch [University of Pennsylvania]
Presenter 3: Goldie MacDonald [Centers for Disease Control and Prevention (CDC)]
Presenter 4: Amy O'Hara
Presentation 1 Additional Author: Valerie Jean Caracelli [Senior Social Science Analyst - U.S. Government Accountability Office]
Presentation 3 Additional Author: Helen Perry [Team Lead, Integrated Disease Surveillance and Response - Centers for Disease Control and Prevention]
Presentation 4 Additional Author: Melissa Chiu [Data and Evaluation Outreach Manager - U.S. Census Bureau]
Time: Oct 28, 2016 (01:45 PM - 03:15 PM)
Room: International South 8

Abstract 1 Title: Challenges and Resources for Evaluating Federal Programs with Administrative Records
Presentation Abstract 1:

This presentation examines current literature on the use of administrative data in designing outcome evaluations of federal programs. It will frame the panel discussion by exploring the feasibility of expanding the use of these data for evaluating federal program effectiveness and the conditions that would facilitate doing so. Drawing from GAO reports and other studies, we will describe the typical uses of such study designs, as well as the challenges faced. Challenges include statutory limitations on access to some agency records, as well as both perceived and actual barriers to accessing others. Other challenges pertain to agencies' technical and analytic capacity to assure data quality and accurate record matches across datasets. However, guidance and resources exist to address such challenges. This paper will provide an annotated list of resources to assist agencies as they weigh design options in planning outcome evaluations.
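
As a rough illustration of the record-matching challenge noted above, the sketch below links one agency's program records to outcome records held elsewhere. The field names (id_hash, dob, last_name) and the exact-then-composite-key strategy are illustrative assumptions, not a method drawn from the GAO reports discussed.

```python
# Illustrative sketch only: matching client records across two agency datasets.
# Field names and matching rules are hypothetical, not drawn from any GAO report.
import pandas as pd

def link_records(program_df: pd.DataFrame, outcomes_df: pd.DataFrame) -> pd.DataFrame:
    """Link program participation records to outcome records.

    First pass: exact match on a shared hashed identifier.
    Second pass: fall back to date of birth plus surname for unmatched rows.
    """
    # Exact match on the shared (hashed) identifier.
    exact = program_df.merge(outcomes_df, on="id_hash", how="inner", suffixes=("", "_out"))

    # Records without an exact match try a weaker composite key.
    unmatched = program_df[~program_df["id_hash"].isin(exact["id_hash"])]
    fuzzy = unmatched.merge(
        outcomes_df, on=["dob", "last_name"], how="inner", suffixes=("", "_out")
    )

    linked = pd.concat([exact, fuzzy], ignore_index=True)
    # Report the match rate so analysts can judge data quality before estimating outcomes.
    print(f"Linked {len(linked)} of {len(program_df)} program records")
    return linked
```

Reporting the match rate alongside the linked file is one simple way to surface the data-quality concerns the presentation raises before any outcome estimates are produced.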


Abstract 2 Title: Public Administrative Records on Public School Teachers
Presentation Abstract 2:

High levels of instability in teacher employment have implications for designing evaluations of innovations in schools and districts. This presentation discusses an NSF-sponsored project that explores ambient positional instability, as indexed by retention and churn among public school teachers. We will discuss the challenges and successes involved in acquiring, understanding, editing, and analyzing administrative records on entire populations of teachers in five states. Each data system involves over 2 million records and drills down to the school level. Acquisition depends on the records’ level of “publicness.” Understanding, editing, and analysis depend on factors such as uniformity and variation in states’ data definitions, formats of public records, time periods, levels of education, and geopolitical boundaries. Learning how to capitalize on this biggish, arguably ‘big,’ data is important for understanding teacher churn in U.S. educational systems, and for forecasting and human resource management at the school district and state levels.
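
As a minimal sketch of the kind of analysis described, the code below computes school-level retention and churn from pooled state records. The column names (state, school_id, teacher_id, year) are hypothetical; actual state files differ in definitions and formats, which is part of the editing challenge the presentation addresses.

```python
# Illustrative sketch only: year-over-year teacher retention ("churn") from
# state administrative records. Column names are hypothetical; real state files
# vary in data definitions, formats, and time periods.
import pandas as pd

def retention_by_school(records: pd.DataFrame, year: int) -> pd.DataFrame:
    """Share of a school's teachers in `year` who reappear at the same school in `year + 1`."""
    this_year = records.loc[records["year"] == year, ["state", "school_id", "teacher_id"]]
    next_year = records.loc[records["year"] == year + 1, ["state", "school_id", "teacher_id"]]

    # Flag teachers who stayed in the same school between the two years.
    stayed = this_year.merge(
        next_year, on=["state", "school_id", "teacher_id"], how="left", indicator=True
    )
    stayed["retained"] = stayed["_merge"] == "both"

    # Aggregate to the school level; churn is simply one minus retention.
    out = (
        stayed.groupby(["state", "school_id"])["retained"]
        .mean()
        .rename("retention_rate")
        .reset_index()
    )
    out["churn_rate"] = 1.0 - out["retention_rate"]
    return out
```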


Abstract 3 Title: Developing Meaningful Indicators to Address Evaluative Questions Using Existing Data—Practice Wisdom from a Multisite Evaluation
Presentation Abstract 3:

In 1998, Integrated Disease Surveillance and Response (IDSR) was established as a comprehensive framework for public health surveillance and response in Africa. In 2006, countries recommended that IDSR guide implementation of the International Health Regulations in the region. Following the unprecedented spread of Ebola in West Africa, public health surveillance remains an essential function for Ministries of Health. Currently, 43 of 46 countries in the African region participate in IDSR. However, the degree to which sites adhere to its technical guidelines remains unclear. Multisite evaluation refers to the evaluation of a program (i.e., an intervention or set of activities) that operates in more than one location. Programs may be implemented in the same way across sites or slightly differently at each site. The presenters explain how key programmatic functions common to all sites were translated into meaningful, multi-component indicators that rely solely on the extraction of existing data from multiple sources.
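
The sketch below illustrates, in general terms, how a multi-component indicator might be assembled solely from existing data extracts. The two components shown (reporting completeness and timeliness) and the equal weighting are assumptions for illustration, not the indicators developed in this evaluation.

```python
# Illustrative sketch only: a site-level, multi-component indicator built from
# existing data extracts. Component definitions and weights are hypothetical.
import pandas as pd

def adherence_indicator(reports: pd.DataFrame, expected_per_site: pd.Series) -> pd.DataFrame:
    """Score each site on two components extracted from routine surveillance data.

    `reports` holds one row per submitted report (site_id, report_id, days_late);
    `expected_per_site` is indexed by site_id and gives the expected report count.
    """
    by_site = reports.groupby("site_id").agg(
        submitted=("report_id", "count"),
        on_time=("days_late", lambda d: (d <= 0).mean()),
    )
    # Component 1: completeness = reports submitted / reports expected (capped at 1).
    by_site["completeness"] = (by_site["submitted"] / expected_per_site).clip(upper=1.0)
    # Component 2: timeliness = share of reports submitted by the deadline.
    by_site["timeliness"] = by_site["on_time"]
    # Combine the components with equal weights into a single site-level score.
    by_site["adherence_score"] = 0.5 * by_site["completeness"] + 0.5 * by_site["timeliness"]
    return by_site[["completeness", "timeliness", "adherence_score"]]
```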


Abstract 4 Title: Administrative Records and Evaluation in the Federal Statistical Research Data Centers
Presentation Abstract 4:

This presentation discusses the U.S. Census Bureau's experience using administrative records and its vision for the future. Until recently, administrative records have been incorporated into, and available for researchers' use in, the Census Bureau's research data centers (RDCs) only in a limited way. Certain legal requirements and other parameters severely limit evaluators' ability to conduct evaluation studies using linked administrative records in the RDCs. Recently, however, the Census Bureau has forged a new vision that will greatly expand the administrative datasets available and substantially improve the availability and timing of these data for relevant evaluation studies. The presentation will discuss the Census Bureau's resources and their potential value for evaluating federal and other programs, as well as address issues of data quality, documentation, conditions for use, and agile access. Examples will be provided to highlight challenges and successes.


Audience Level: All Audiences

Session Abstract: 

There is growing momentum for evidence-based programming at all levels of government and in many non-governmental organizations. Given ongoing budget constraints nationwide, federal agencies must find appropriate and cost-effective ways to conduct more evaluations that rely on data already collected by the government or partner organizations. While some agencies routinely use administrative records to document and assess their operational performance, others face formidable legal and technical challenges in accessing client information located in another organization's records to assess their program outcomes. This panel explores the broad promise and common pitfalls of evaluation designs that use existing administrative data, with concrete examples from education and public health. The presenters will share a taxonomy of administrative data, resources to aid in accessing agency records, and a new plan from the U.S. Census Bureau intended to expand the availability of datasets for use in program evaluation.