We Evaluate Our Programs…But what do we learn?
Session Number: GE4
Track: Government Evaluation
Session Type: TIG Multipaper
Session Chair: Krystal Tomlin [Health Policy Analyst - National Institutes of Health/NIAID]
Presenter 1: Amia Downes [Public Health Analyst - Centers for Disease Control and Prevention]
Presenter 2: Mary Peters
Presenter 3: Julia Rollison [Senior Principal, Research and Evaluation - Atlas Research]
Presenter 4: Heather Menne
Presenter 5: Saloni Sapru [Westat]
Presentation 1 Additional Author: Emily Novicki [NIOSH Program Portfolio and NORA Coordinator - Centers for Disease Control and Prevention]
Presentation 3 Additional Author: Abby Friedman [Senior Manager - Atlas Research]
Presentation 3 Additional Author: Meaghan Williams [Analyst - Atlas Research]
Presentation 3 Additional Author: Clarke Erickson [Manager - Atlas Research]
Presentation 5 Additional Author: Melanie Chansky
Presentation 5 Additional Author: Patricia Green [Epidemiologist - Centers for Disease Control and Prevention]
Presentation 5 Additional Author: Mary Odell Butler [Senior Evaluator - Westat]
Time: Nov 10, 2017 (08:00 AM - 09:30 AM)
Room: Marriott Balcony B
Abstract 1 Title: Using the Contribution Analysis Approach to Evaluate Science Impact: A case study of the National Institute for Occupational Safety and Health
Presentation Abstract 1:
The movement toward greater accountability in government continues to gain momentum, driven by government-wide efforts such as the Government Performance and Results Modernization Act, the Commission on Evidence-Based Policymaking, and Office of Management and Budget circulars. However, measuring the impact of research programs has proven particularly difficult: cause-and-effect linkages between research findings and changes in morbidity and mortality are nearly impossible to prove. To address this challenge, National Institute for Occupational Safety and Health (NIOSH) research program evaluators have refined the process by which NIOSH's research programs are externally peer reviewed to incorporate the evaluation approach of contribution analysis. Although early in implementation, these efforts demonstrate NIOSH's commitment to good stewardship of public funds, to its mission of protecting worker health and safety, and to the advancement of evaluation science in research organizations.
Abstract 2 Title: What’s the deal with language training anyway?
Presentation Abstract 2:
Our evaluation of the Treasury Board of Canada Secretariat's Centralized Language Training (CLT) Program examined the human and financial costs of language training. While the program's objective is to support learners in acquiring and improving their second language, there is often pressure to focus training on what is needed to pass required language tests rather than on actual language use. With respect to adult learners, the evaluation revealed some of the barriers to learning. With respect to training, it found that group classes are in most cases as effective as individual training; in particular, program data indicated that group classes produce faster results at lower cost. This presentation will discuss the evaluation findings and some of the contextual factors that affected program outcomes, and will provide more detail on the costing analysis completed, particularly the key role of program staff.
Abstract 3 Title: Strengths and Limitations of a Mid-Stage Process and Outcome Evaluation of a Complex Partnered Research Initiative
Presentation Abstract 3:
Evaluating complex, multi-level initiatives to drive decision-making presents inherent study design challenges. This presentation will detail a qualitative approach to a process and outcome evaluation of a Veterans Affairs partnered research initiative and discuss specific challenges associated with the initiative's complexities and the timing of the evaluation. Each program in the initiative comprised several coordinated, related projects focused on a single common critical research area. Programs often differed in topical focus, geography, and structure. Individual projects within the programs varied in length (two to five years), start date (January 2012 to April 2016), number of sites (1 to 14), and number of partners (1 to 13). Almost all of the projects were still in progress at the time of the evaluation. The impact of complexity and timing will be discussed in the context of lessons learned for future similar evaluations.
Abstract 4 Title: Evaluating the National Family Caregiver Support Program: Challenges in Evaluation Design and Data Collection
Presentation Abstract 4:
The Administration for Community Living/Administration on Aging administers the National Family Caregiver Support Program (NFCSP) of the Older Americans Act. Established in 2000, the NFCSP provides grants to States and Territories for a range of supports that help family and informal caregivers care for their loved ones at home for as long as possible. The first NFCSP Process Study was released in March 2016; to complete a full evaluation of the NFCSP, the NFCSP Client Outcomes Study began data collection in November 2016. This paper will outline the Client Outcomes Study methods and the data collection approach used to examine the program's impact on caregivers receiving services. A novel sampling approach was used to enable comparisons between program participants and non-participants: for comparison to 'NFCSP caregivers,' 'non-NFCSP caregivers' were identified from among older adults who participated in the National Survey of Older Americans Act Participants.
Abstract 5 Title: Evaluating Interprofessional Collaborations for the Prevention of Fetal Alcohol Spectrum Disorders
Presentation Abstract 5:
The Centers for Disease Control and Prevention is supporting collaborations between academic Practice and Implementation Centers (PICs) and national professional organizations (Partners) to prevent fetal alcohol spectrum disorders by improving healthcare practice, education, and awareness among healthcare professionals. We present evaluation findings from this ongoing, four-year project, in which PICs and Partners collaborate in discipline-specific workgroups (DSWs) representing family medicine, medical assisting, nursing, obstetrics/gynecology, pediatrics, and social work. We developed process and outcome indicators to assess DSW collaborations and collect data through DSW quarterly progress reports; annual site visits or telephone interviews with PICs and Partners; and an online, semi-annual interprofessional collaboration survey completed by each DSW member. We triangulate data from these methods, capturing lessons learned and promising practices. Based on the emerging evidence on shared goals, negotiation of roles and responsibilities, and communication methods, we discuss indicators of effective collaboration in interprofessional health partnerships.
Theme: Learning What Works and Why
Audience Level: All Audiences
Session Abstract (150 words):
We Evaluate Our Programs…But what do we learn?