Evaluation 2015: Exemplary Evaluations in a Multicultural World

Innovation in Evaluation Methods for Government Evaluation

Session Number: GE3
Track: Government Evaluation
Session Type: Multipaper
Session Chair: Ted Kniker [Enlighteneering, Inc.]
Discussant: Ted Kniker [Enlighteneering, Inc.]
Presenter 1: Emily Novicki [Centers for Disease Control and Prevention]
Presenter 3: Ann M. Martin [Science Systems and Applications, Inc.]
Presenter 4: Julia Rollison [Ripple Effect Communications, Inc.]
Presentation 3 Additional Author: Joeletta Patrick [National Aeronautics and Space Administration (NASA)]
Presentation 3 Additional Author: Margaret Pippin [National Aeronautics and Space Administration (NASA) Langley]
Presentation 3 Additional Author: Monica Barnes [National Aeronautics and Space Administration (NASA) Langley]
Presentation 4 Additional Author: Jennifer Reineke Pohlhaus, Ph.D. [Ripple Effect Communications, Inc.]
Presentation 4 Additional Author: Amy Bielski [Ripple Effect Communications, Inc.]
Presentation 4 Additional Author: Anthony Dickherber, PhD [Program Director - National Cancer Institute, National Institutes of Health]
Time: Nov 12, 2015 (08:00 AM - 09:30 AM)
Room: Columbian

Abstract 1 Title: Logic Modeling as a Science, Not an Art: Growing Evaluation Capacity within the National Institute for Occupational Safety and Health
Presentation Abstract 1: In response to heightened accountability requirements for the performance and review of federally funded activities, the National Institute for Occupational Safety and Health (NIOSH), part of the Centers for Disease Control and Prevention, has been actively working to build its evaluation capacity and better track its outcomes. Ten NIOSH occupational sector programs were asked to participate in the "Intermediate Outcomes Exercise," a unique project that combined elements of logic modeling and summative assessment. Program leaders selected significant outputs for every research project in their portfolio, linked those outputs to intermediate outcomes, and then linked those outcomes to pre-established program goals in an Excel spreadsheet. The data were analyzed quantitatively, using a formula, and qualitatively, to help interpret the findings. Beyond producing useful evaluation data, the Exercise also led to many process use gains, including a better understanding of logic models, the characteristics of effective goals, and evaluation terminology. Lessons learned and recommendations for building evaluation capacity within science organizations will be shared.
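
A minimal sketch of the kind of spreadsheet roll-up Abstract 1 describes, in Python with pandas. The column names, linkage records, and the simple tally used here are illustrative assumptions; they are not the actual NIOSH Exercise data or formula.

    # Illustrative sketch only: column names, linkage data, and the scoring rule
    # are assumptions for demonstration, not the actual NIOSH Exercise formula.
    import pandas as pd

    # Each row links one significant project output to an intermediate outcome
    # and to a pre-established program goal (as captured in the Excel exercise).
    links = pd.DataFrame(
        [
            {"project": "P1", "output": "Exposure guidance document", "outcome": "Guidance adopted by employers", "goal": "Reduce silica exposure"},
            {"project": "P2", "output": "Training curriculum", "outcome": "Workers trained", "goal": "Reduce silica exposure"},
            {"project": "P3", "output": "Sampling method", "outcome": "Method cited in a consensus standard", "goal": "Improve exposure assessment"},
        ]
    )

    # Quantitative roll-up: how many distinct outputs and projects feed each goal.
    rollup = (
        links.groupby("goal")
        .agg(n_outputs=("output", "nunique"), n_projects=("project", "nunique"))
        .reset_index()
    )
    print(rollup)
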
Abstract 3 Title: Practical Tips for When and How to Use Social Network Analysis: Examples from Mixed-Methods Evaluations of Federal STEM Education Award Portfolios
Presentation Abstract 3: Government evaluations often focus on resource-constrained contexts where partnerships and collaborations are highly leveraged to achieve outcomes, and thus key evaluation questions focus on these relationships. Social network analysis (SNA) is a mixed-methods solution that can satisfy stakeholders' need to be responsive to accountability requirements while also bringing a significant value-add in terms of institutional learning. This paper will provide practical tips for using SNA to answer key evaluation questions. First, we will consider the deceptively simple nuts and bolts of conducting and visualizing an SNA. Second, we will discuss the practical evaluation value of the technique. Two exemplar evaluations from a federal agency context, NASA, will shed light on the potential of SNA to reveal: How many institutions have been reached beyond those that directly received funding? To what extent has a funding portfolio reached a diverse, cross-national set of organizations? Are certain types of programs more successful in building partnerships?
Presentation 3 Other Authors: Edward Gonzales, edward.v.gonzales@jpl.nasa.gov, National Aeronautics and Space Administration (NASA) JPL
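
A minimal sketch of the kind of SNA tabulation described in Presentation Abstract 3, using Python's networkx. The institutions and edge list are hypothetical, not data from the NASA award-portfolio evaluations.

    # Hypothetical sketch: the edge list below is invented for illustration and is
    # not data from the NASA award-portfolio evaluations described above.
    import networkx as nx

    # Edges connect directly funded institutions to the partners they collaborate with.
    G = nx.Graph()
    G.add_edges_from([
        ("Awardee A", "Museum X"),
        ("Awardee A", "University Y"),
        ("Awardee B", "University Y"),
        ("Awardee B", "School District Z"),
    ])

    funded = {"Awardee A", "Awardee B"}  # institutions that directly received funding
    reached = set(G.nodes()) - funded    # institutions reached only through partnership

    print(f"Institutions reached beyond direct awardees: {len(reached)}")

    # Degree centrality flags partners that bridge multiple funded projects.
    print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))
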
Abstract 4 Title: Constructing a Retrospective Comparison Group Using Secondary Data
Presentation Abstract 4: A comparison group affords evaluators the opportunity to compare changes in specified outcomes in the program of interest to a counterfactual. Forming an appropriate comparison group with limited evaluation resources can be challenging, however, and the challenge becomes more pronounced when the comparison group must be constructed after the program has been implemented. This presentation will illustrate one such retrospective approach using secondary data for a process and outcome evaluation of the National Cancer Institute's Innovative Molecular Analysis Technologies (IMAT) program. A key component of this evaluation is the development of a comparison group to better assess attribution of outcomes and the unique nature of the IMAT program compared with other NIH programs. Specifically, the evaluators used text mining to create a multi-step filtering process that narrows the pool of relevant comparison awards. The utility of this approach for future evaluations will be discussed.
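
A minimal sketch of a multi-step, text-based filter of the kind Presentation Abstract 4 describes, in Python. The award records, keyword list, and filter order are assumptions for illustration, not the IMAT evaluation's actual criteria.

    # Illustrative sketch: the award records, keywords, and filter steps are
    # assumptions, not the actual IMAT comparison-group criteria.
    awards = [
        {"id": "R21-001", "mechanism": "R21", "abstract": "novel microfluidic assay for tumor profiling"},
        {"id": "R01-002", "mechanism": "R01", "abstract": "longitudinal cohort study of cancer survivors"},
        {"id": "R33-003", "mechanism": "R33", "abstract": "next-generation sequencing platform development"},
    ]

    TECH_KEYWORDS = {"assay", "platform", "sequencing", "microfluidic", "device"}

    def step1_mechanism(records):
        # Keep only award mechanisms comparable to those used by the program.
        return [r for r in records if r["mechanism"] in {"R21", "R33"}]

    def step2_text_filter(records):
        # Keep awards whose abstracts mention technology-development terms.
        return [r for r in records if TECH_KEYWORDS & set(r["abstract"].lower().split())]

    comparison_pool = step2_text_filter(step1_mechanism(awards))
    print([r["id"] for r in comparison_pool])
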
Audience Level: None

Session Abstract: Innovation in Evaluation Methods for Government Evaluation