The Influence of Domain-Specific Metric Development on Evaluation and Design: An Example from National Institutes of Health Technology Development Programs

Session Number: 1539
Track: Research, Technology & Development Evaluation
Session Type: Panel
Tags: Metrics, technology
Session Chair: Elizabeth Hsu [Senior Health Science Analyst - National Institutes of Health]
Presenter 1: Brian Zuckerman [Science and Technology Policy Institute]
Presenter 2: Jennifer Reineke Pohlhaus [Vice President - Ripple Effect]
Presenter 3: Michelle A Berny-Lang [Program Director - National Institutes of Health/National Cancer Institute]
Presenter 4: Elizabeth Hsu [Senior Health Science Analyst - National Institutes of Health]
Presentation 1 Additional Author: Rashida Nek
Presentation 2 Additional Author: Julia Rollison [Director, Research and Evaluation - Atlas Research]
Presentation 3 Additional Author: Tony Dickherber [Program Director - National Cancer Institute]
Presentation 4 Additional Author: Tony Dickherber [Program Director - National Cancer Institute]
Time: Oct 27, 2016 (08:00 AM - 09:30 AM)
Room: L404

Abstract 1 Title: Development of Measures to Assess NIH Technology Development Programs
Presentation Abstract 1:

The National Institutes of Health (NIH) supports innovative technology development as one aspect of fulfilling its mission. In July 2014, NIH tasked the IDA Science and Technology Policy Institute (STPI) with developing performance measures for extramural technology development projects. The task had three components. The first was development of a catalog of NIH Funding Opportunity Announcements (FOAs) focused solely on technology development for achieving a specific goal. The second was development of case studies based on discussions with program officers knowledgeable about those FOAs. The third, building on the case studies, was identification of candidate outcome measures for assessing technology development initiatives, along with the data collection approaches required to implement these measures consistently and on an ongoing basis. This paper presents the results of the study, focusing on the logic of technology development and candidate outcome measures.


Abstract 2 Title: Process and Outcome Evaluation of the NCI Innovative Molecular Analysis Technologies (IMAT) Program
Presentation Abstract 2:

Ripple Effect performed an extensive evaluation of the NCI IMAT program using a mixed-methods approach to better understand the outcomes of supported technologies. To inform the evaluation design, instrument development, and methodology, Ripple Effect drew on the best-practices literature on evaluating RTD programs and obtained input from an NIH advisory committee and an external subject matter expert panel. Ripple Effect gathered a variety of secondary data from sources such as PubMed, ClinicalTrials.gov, and USPTO, and linked these data in a SharePoint database for continued use by NCI. Ripple Effect conducted more than 80 interviews with IMAT grantees, collected more than 500 survey responses from IMAT and Comparison group grantees, and conducted more than two dozen interviews with technology end users. This blend of quantitative and qualitative sources provided both standardized information across grantees and nuanced contextual data, yielding a deeper understanding of the IMAT program.



Abstract 3 Title: Technology Program Office Perspective on Identifying Appropriate Metrics

Presentation Abstract 3:

The National Institutes of Health (NIH) supports a broad array of innovative technology development research insofar as it contributes to advancing biomedical research. In 2014, program officers from multiple NIH institutes gathered to identify performance measures useful for characterizing outcomes across technology development programs. Appropriate performance measures were expected to be useful for program officers across NIH for two reasons. In the near term, they would inform the planning of new program concepts and the ongoing management of existing programs. They could also guide a broad evaluation of NIH investments in technology development generally, both assessing the impacts of investments to date and allowing future program officers to weigh the merits of different approaches across NIH using equivalent metrics.


Abstract 4 Title: How Does Technology Development Metric Development Influence Evaluation Design?
Presentation Abstract 4:

On the surface, the idea of developing technology-development-specific metrics seems worth pursuing; on closer examination, however, a number of questions arise that merit further discussion. For example:
• Appropriateness:
o When is it appropriate to develop a bank of metrics for a specific domain of projects?
o Should such metrics be developed for technology development programs?
• Generalizability:
o Can the metrics be applied to technology development programs that were not considered specifically during the metric development process?
o Can the metrics be applied to programs not funded by the National Institutes of Health (NIH)?
• Evaluation synthesis:
o Can investments in differing technology programs across NIH be assessed together to estimate an overall "impact" using these metrics?
o Can investments across funding agencies be assessed together?
This presentation will engage the audience in discussion of these questions as we think carefully about metric development and how it influences evaluation design.


Audience Level: All Audiences
Other Information: Wednesday or Thursday slot only please.

Session Abstract: 

Within research, technology, and development evaluation, there are specialized domains for which it may be appropriate to design tailored evaluation metrics. This session will focus on the development of evaluation metrics for National Institutes of Health-funded projects whose primary purpose is technology development. The presentations will provide the perspectives of multiple stakeholders in the evaluation design, including the evaluation professional who must implement a technology-development evaluation without the benefit of pre-validated metrics; the program manager, who may wish to use domain-specific metrics to assess a specific technology development program and provide evidence for the program's "success"; and the evaluation professional who must determine how to develop domain-specific metrics. The final presentation will engage the audience in conversation around when and how such domain-specific metrics should be used to evaluate technology development programs, particularly for programs that were not explicitly considered when the metrics were developed.