Session Number: 2446
Track: Organizational Learning & Evaluation Capacity Building
Session Type: Panel
Tags: Evaluability, Impact evaluation, Impact evaluation readiness
Session Chair: Lily Zandniapour Ph.D. [Corporation for National and Community Service]
Discussant: Nicole Vicinanza [Aguirre Division, JBS International]
Presenter 1: Lance Potter [New Profit Inc.]
Presenter 2: Peter Lovegrove [JBS International]
Presenter 3: Gabriel Rhoads [The Edna McConnell Clark Foundation]
Time: Nov 13, 2015 (07:00 AM - 07:45 AM)
Abstract 1 Title: Evaluation Readiness: A Case Story Approach
Presentation Abstract 1: Following an overview of the Social Innovation Fund and its evaluation program, which requires the design and implementation of high-quality evaluations that aim to establish causal impact, the presenter will use case story examples to provide context and backdrop for the introduction of the Impact Evaluability Assessment Tool. The case stories will engage session participants and provide a framework for discussing impact evaluation readiness; potential strengths, challenges, and pitfalls in the design and implementation of an impact evaluation will be discussed and demonstrated through this framework and the case examples. The presenter will then draw on the experiences of New Profit's portfolio of grantees and their ongoing independent evaluations to speak to the utility of the Impact Evaluability Assessment Tool for gauging grantee readiness, and will share New Profit's perspective as an intermediary in this process. The role of evaluation capacity building as a means of facilitating readiness will also be discussed.
Abstract 2 Title: The Impact Evaluability Assessment Tool: A Social Innovation Fund (SIF) Resource for Gauging Readiness
Presentation Abstract 2: Peter Lovegrove will demonstrate the Impact Evaluability Assessment Tool and elaborate on the three dimensions of readiness the tool covers: organizational, program, and evaluation readiness. The presenter will then review the elements of readiness under each category, with emphasis on the evaluation-related prerequisites that must be in place for an impact evaluation to be successfully designed and conducted; this is a key dimension of readiness that is often overlooked. He will also discuss the tool's uses and administration. Lastly, the presenter will share the experiences of JBS International, the Social Innovation Fund's evaluation technical assistance provider, and discuss the challenges and successes of the program's grantees and subgrantees in conducting impact evaluations as part of their grant requirements; these experiences informed the development of the tool.
Abstract 3 Title: Evaluation Readiness: Reflections from the Edna McConnell Clark Foundation
Presentation Abstract 3: Gabriel Rhoads will discuss aspects of evaluation readiness including:
• Staff Capacity - Is there staff expertise to co-design and manage the evaluation? Does the organization track performance?
• Learning Agenda - What is the organization interested in learning? Can it articulate a theory of change?
• Budget - Is there the ability to fund the learning agenda appropriately?
• Design Readiness - Are conditions in place to conduct an RCT without denying services? If not, is there a reasonable comparison group? Is the sample sufficient?
• Stakeholder Buy-In - Are stakeholders and funders aligned with the organization's learning agenda and the proposed evaluation?
• Implementation Strength & Runway - Has the program been implemented per the theory of change for a sufficient period of time before the evaluation?
The presenter will tie these elements to the discussion and share the experiences of subgrantees planning evaluations with their partner, MDRC; the process of assessing subgrantees' readiness levels; and the implications for conducting an impact evaluation.
Audience Level: Advanced
Increasingly, public- and private-sector funders are engaging in evidence-based grantmaking. This has led to growing demand for high-quality, rigorous evaluations, consistent with the principles of scientific research, that yield reliable information about a program's impact.
Impact studies typically employ experimental and quasi-experimental designs to establish causal attribution and to assess generalizability beyond the study population. As an evaluator, program practitioner, or grantmaker, how do you know that an intervention is ready for an evaluation of this kind?
Building on program case studies, the presenters will introduce the Impact Evaluability Assessment Tool and demonstrate how this resource can help evaluators, practitioners, and grantmakers assess a program's readiness for rigorous impact evaluation. The tool highlights key dimensions of readiness for engaging in, planning, and implementing impact studies. It can also be used to identify areas where training and/or technical assistance may be needed to prepare for or support such evaluation studies.