Evaluation 2017: From Learning to Action


Three- and Four-Armed Randomized Trials as Tools for Examining Multiple Policy Interventions: Risks and Rewards

Session Number: 2375
Track: Design and Analysis of Experiments
Session Type: Panel
Session Chair: Diana Epstein [Senior Evidence Analyst]
Discussant: Stephen H. Bell [Senior Fellow & Principal Scientist - Abt Associates Inc.]
Presenter 1: Jacob Alex Klerman
Presenter 2: David C. Stapleton [Senior Fellow and Director, Center for Studying Disability Policy - Mathematica Policy Research]
Presenter 3: Anne Fletcher
Presenter 4: Laura Peck [Principal Scientist - Abt Associates]
Presenter 5: Scott Gibbons
Time: Nov 08, 2017 (04:30 PM - 06:00 PM)
Room: Washington 5

Abstract 1 Title: Multi-Armed Randomized Trials: Why and a Typology
Presentation Abstract 1:

Beyond knowing whether a program “works,” we often want to know which version of a program works best. Random assignment is widely viewed as the preferred approach to establishing whether a program “works” and how large its impacts are. Analogously, if we want to know which version of a program works best, we should randomize people to the program, a variant of the program, and an untreated control group. Such trials with three or more options are known as “multi-armed randomized trials.” This presentation provides a foundation for the proposed panel by motivating such multi-armed randomized trials and offering a typology of the multiple treatment groups tested (each with a no-intervention control group): (i) two different programs; (ii) the base program and an augmented version of the base program; and (iii) the base program and a narrower program. For each of these archetypes, the presentation considers what can be learned and the challenges of implementation.


Abstract 2 Title: Origins of and Payoff from the BOND Three-Way Experiment
Presentation Abstract 2:

The Social Security Administration’s Benefit Offset National Demonstration (BOND) is a very large, multi-armed randomized trial designed to test the impacts of experimental earnings rules for Social Security Disability Insurance on beneficiary earnings, benefits, and other outcomes. In a first stage, beneficiaries (non-volunteers) were randomly assigned to a treatment arm (T1—almost 80,000 subjects), a control arm (C1—approximately 900,000 subjects), or a second-stage recruitment pool. In the second stage, volunteers from the recruitment pool were assigned to one of two treatment arms (T21 or T22), distinguished by the counseling services provided, or to a second control arm (C2), each arm with about 4,000 subjects. We will describe the information demands that underlie this complex structure and consider the extent to which the structure is actually delivering the information it was designed to provide.
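For concreteness, the two-stage structure can be sketched in a few lines of Python. This is a hypothetical illustration: the arm sizes follow the abstract, but the assignment mechanics (a simple shuffle-and-slice, equal-probability draws) are assumptions, not BOND’s actual procedure.

```python
import random

def stage_one(beneficiaries, n_t1=80_000, n_c1=900_000):
    """Split non-volunteer beneficiaries into T1, C1, and a recruitment
    pool from which second-stage volunteers are later drawn."""
    shuffled = random.sample(beneficiaries, len(beneficiaries))
    t1 = shuffled[:n_t1]                 # treatment arm T1
    c1 = shuffled[n_t1:n_t1 + n_c1]      # control arm C1
    pool = shuffled[n_t1 + n_c1:]        # second-stage recruitment pool
    return t1, c1, pool

def stage_two(volunteers):
    """Assign volunteers to T21, T22, or C2 with equal probability
    (each arm held about 4,000 subjects in the actual study)."""
    arms = {"T21": [], "T22": [], "C2": []}
    for person in volunteers:
        arms[random.choice(["T21", "T22", "C2"])].append(person)
    return arms
```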


Abstract 3 Title: Origins of and Payoff from the Family Options Four-Way Experiment
Presentation Abstract 3:

The Family Options Study is a randomized controlled trial that tests which types of housing and services interventions work best for homeless families. The U.S. Department of Housing and Urban Development sponsored the study to compare three active interventions—long-term housing subsidies (SUB), community-based rapid re-housing (CBRR), and project-based transitional housing (PBTH)—to one another and to the usual care (UC) available to homeless families in their communities. The study’s impact analysis examines the differential effects of the three active interventions and the net effect of each intervention compared to usual care in six two-way experiments. Implementing the random assignment design presented several challenges. For an intervention option to be available to a family undergoing random assignment, at least one slot had to be open at an intervention provider whose provider-specific eligibility requirements the family met. As a result, most study families did not have all four assignment options available to them at the time of random assignment. Families were randomly assigned among the interventions available to them, and the impact analyses were conducted pairwise, comparing families who were eligible for both interventions in a pair and randomized to one of them.
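A minimal Python sketch may help make the “assign among available options, analyze pairwise” logic concrete. The record layout, function names, and the always-available usual-care arm are illustrative assumptions, not the study’s actual implementation.

```python
import random

ARMS = ["SUB", "CBRR", "PBTH", "UC"]

def assign(available):
    """Randomly assign a family among the arms available to it at the
    moment of random assignment (UC assumed always available)."""
    options = sorted(set(available) | {"UC"})
    return random.choice(options)

def pairwise_sample(records, arm_a, arm_b):
    """Analysis sample for one of the six two-way experiments: families
    for whom both arms were available and who were assigned to either."""
    return [r for r in records
            if {arm_a, arm_b} <= set(r["available"])
            and r["assigned"] in (arm_a, arm_b)]

# Example: the SUB-vs-UC experiment draws only on families that could
# have received either option at randomization.
records = [{"available": ["SUB", "UC"]} for _ in range(10)]
for r in records:
    r["assigned"] = assign(r["available"])
print(len(pairwise_sample(records, "SUB", "UC")))  # prints 10
```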


Abstract 4 Title: Origins of and Payoff from the HPOG Three-Way Experiment
Presentation Abstract 4:

The Health Profession Opportunity Grants (HPOG) Program, authorized in 2010 and funded through the U.S. Department of Health and Human Services, offers career pathways-based education and training in the healthcare sector to low-income individuals. The impact evaluation of the first round of HPOG grants comprises 49 distinct programs. To evaluate the effect of three specific program components, the evaluation incentivized grantees to add a third experimental arm in places where the given component was not already implemented. As a result, in 19 programs, applicants were randomized to a control group, to the standard HPOG group, or to an enhanced HPOG group, which gave them access to one of three program components—emergency assistance, non-cash incentives, or facilitated peer support. This presentation will examine the process for establishing the third experimental arm, highlighting the motivation for doing so as well as the risks and potential payoffs, and how lessons from the HPOG case can apply to future research. The presentation may also include discussion of impact results if they are available.


Abstract 5 Title: Origins of and Payoff from the REA Four-Way Experiment
Presentation Abstract 5:

There is a long history of programs to help Unemployment Insurance (UI) claimants find jobs faster. Evaluations consistently show that these programs moderately reduce participants’ time on UI. What drives this decrease is controversial—with important implications for program design. Program officials and local workforce staff emphasize the role that reemployment services play in speeding job finding. In contrast, formal evaluation evidence can be read as implying that impacts are due to the requirement to participate in reemployment programs and the programs’ enforcement of active job search requirements. To better understand the relative importance of these two causal pathways, the U.S. Department of Labor’s Chief Evaluation Office and Abt Associates designed and implemented a four-armed random assignment evaluation. In particular, one of the treatment arms included the requirement to attend reemployment assistance sessions but provided only minimal services. This presentation outlines the motivation for the study, discusses challenges in implementing the design, briefly presents results, and considers the results’ implications for policy and research design.


Theme: Learning to Enhance Evaluation
Audience Level: All Audiences

Session Abstract (150 words): 

Randomized trials to measure policy effects, with a treatment group compared against a control group, are now commonplace. Less common, but beginning to emerge, are social experiments that employ three or more “arms,” using random assignment to divide the sample among multiple intervention models plus a control group. The proposed panel will examine the challenges, costs, and rewards of this design. An opening presentation will provide a typology of multi-arm experiments, their utility, and their challenges. Four case studies will follow, in which evaluators of employment and housing interventions explore the risks and rewards of designing and carrying out their particular three- and four-arm randomized trials. Drawing on the other speakers’ remarks, the discussant will consider how often—and under what circumstances—multi-arm experiments should be undertaken in social policy research. Audience interaction with the speakers will conclude the panel.


