Evaluation 2019: Paths to the Future of Evaluation: Contribution, Leadership, and Renewal


Using Qualitative Data to Measure Implementation Fidelity: Roles, Tools, and Analytic Methods

Session Number: 1371
Track: Qualitative Methods
Session Type: Multipaper
Tags: Implementation Evaluation, Implementation Fidelity, Implementation Science
Session Chair: Hannah Betesh [Senior Associate - Abt Associates]
Presenter 1: Meg Caven [Ph.D. Candidate, Sociology - Brown University]
Presenter 2: Hannah Betesh [Senior Associate - Abt Associates]
Presenter 3: Catherine Darrow [Education Development Center]
Presentation 2 Additional Author: Joscelyn Silsby [Senior Evaluation Advisor - AARP Foundation]
Presentation 2 Additional Author: Anne Paprocki [Associate - Social Policy Research Associates]
Presentation 3 Additional Author: Kathy Brennan [Research and Evaluation Advisor - AARP Foundation, Experience Corps]
Time: Nov 16, 2019 (11:15 AM - 12:00 PM)
Room: CC M100 D

Abstract 1 Title: Supplementing State Regulation with Qualitative Data: New Roles for Outsiders in Fostering Implementation Fidelity and Legal Compliance
Presentation Abstract 1:

Regulatory agencies are responsible for overseeing social institutions’ implementation of policy reforms. Yet ample research demonstrates that regulators particularly lack the capacity to monitor the processes that occur “on the ground” within organizations. Drawing on data from 18 months of qualitative site visit fieldwork spanning schools, districts, and state offices, this paper documents some of the struggles that regulators faced in understanding the implementation of school discipline reform in Massachusetts. These struggles, and the supportive interventions of a group of activist attorneys acting as “private regulators,” highlight the important contribution that qualitative data can make in evaluating implementation fidelity. This paper outlines the ways that evaluators’ qualitative data can support the regulatory capacities of state agencies, thus improving the potential for implementation fidelity.


Abstract 2 Title: Measuring Implementation Fidelity in Emerging Program Models
Presentation Abstract 2:

Measuring program fidelity is critical to accurate interpretation of impact evaluation findings. But how can fidelity be measured when the intervention under study is a new model that can be expected to evolve as lessons learned from initial implementation are incorporated? This presentation will share lessons from a quasi-experimental impact evaluation of a relatively new program model for helping older women reconnect to the workforce. Because this model includes multiple components with varying levels of prior evidence, the evaluation involved determining how both to measure fidelity through site visit observations and to improve the specification of key model elements in response to early fidelity assessment. The internal and external evaluators for this project will discuss the development and deployment of a structured fidelity measurement tool, the use of early results to shape technical assistance and guidance, and the process of communicating unexpected findings to program and evaluation stakeholders at multiple levels.


Abstract 3 Title: Compiling a Multi-Perspective Index to Inform Program Implementation and Effectiveness
Presentation Abstract 3:

Accounting for implementation reach and quality while assessing the effectiveness of a tested program requires input from a variety of key stakeholders. Evaluators must address not only how and by whom a program was implemented, but also the pathways by which different stakeholders and participants can experience program impact. In this paper and related presentation, the program developer sets the context by describing an established K-3 tutoring program in which older volunteers (50+) provide literacy support to struggling students. The evaluation team then offers an implementation fidelity measurement approach for capturing the voices of these tutors, as well as K-3 classroom teachers, school principals, and program administrators. The index merges distinct voices through qualitative and quantitative data from online surveys, site visits, tutoring observations, and tutor session logs. Presenters will share summary implementation results from the 2018-19 school year, representing 241 classrooms in 21 schools across four districts.


Audience Level: All Audiences

Session Abstract (150 words): 

Measuring implementation fidelity (i.e., the extent to which program implementation aligns with the intervention’s stated model and theory of change) requires making meaning of qualitative data; doing so supports programs’ continuous improvement and the interpretation of findings in any impact evaluation of a given program. This session will focus on three aspects of incorporating implementation fidelity measurement into evaluation: (1) understanding the roles evaluators take on when measuring implementation fidelity, (2) developing and using data collection tools to measure implementation fidelity, and (3) creating an analytic method that uses multiple sources of qualitative data to generate implementation fidelity ratings.


