Learning to Action across International Evaluation: A look into Global and Bilateral Learning

Session Number: ICCE9
Track: International and Cross Cultural Evaluation
Session Type: TIG Multipaper
Session Chair: Kristin Lindell [Monitoring, Evaluation, Research and Learning Specialist - USAID LEARN/Dexis Consulting Group]
Presenter 1: Winston Allen [Senior Evaluation Specialist - USAID]
Presenter 2: Martin de Alteriis [Assistant Director, Applied Research and Methods - US Government Accountability Office]
Presenter 3: Kari Nelson [Deputy Director - Social Impact]
Presentation 1 Additional Author: Allison Haselkorn [Evaluation Specialist - USAID]
Time: Nov 08, 2017 (06:15 PM - 07:15 PM)
Room: Delaware B

Abstract 1 Title: Analysis of the Purpose of Foreign Assistance Evaluations at USAID: Implications for Integrating Evaluation Use across the Program Cycle
Presentation Abstract 1:

Evaluation is useful if it provides evidence to inform real-world decision making. Increased utilization of evaluation findings is a primary outcome USAID expects of the evaluations it commissions every year. However, the Agency is grappling with limited use of evaluations to inform program planning, design, and implementation. Utilization of evaluation findings depends in part on the purpose for which an evaluation was designed. In the analysis presented in this paper, we reviewed the evaluation purposes outlined in 38 statements of work for evaluations commissioned by USAID through the $455 million EVAL-ME contract mechanism. We found that 90% of evaluations were designed to measure results, 58% to learn best practices, and 50% to inform the design of future projects. The paper explores the linkages between evaluation purpose and actual use of the evaluations, and concludes with an analytical framework for integrating evaluation use throughout the USAID Program Cycle and increasing utilization of evaluations across the Agency.


Abstract 2 Title: Unintended Consequences of Foreign Assistance Programs: Learning from a Systematic Review of U.S. Government-Funded Evaluations
Presentation Abstract 2:

Michael Bamberger, among others, has noted the importance of looking for adverse consequences in evaluations of foreign assistance programs. Moreover, some U.S. government agencies now require applicants for foreign assistance grants to demonstrate that their programs will minimize any negative consequences. This presentation will discuss the extent to which a sample of recent U.S. Government-funded foreign assistance evaluations considered unintended consequences. The data for this presentation come from a review conducted by the U.S. Government Accountability Office (GAO) of a representative sample of 173 evaluations completed or released by six major United States foreign assistance agencies in 2016. The presentation will also describe: (1) the unintended consequences that were found, both negative and positive, (2) what the data can tell us regarding links between particular data collection methods and the reporting of unintended consequences, and (3) some potentially helpful evaluation practices for considering unintended consequences that emerged in this review.


Abstract 3 Title: Sustainability in International Development: Learning from the Evaluation Literature
Presentation Abstract 3:

Finalization of the Sustainable Development Goals in 2015 has drawn attention to the need for sustainability in development. This focus on sustainability is also reflected in the evaluation literature, where sustainability is a common evaluation criterion. Learning about sustainability from the evaluation literature, however, is more challenging than might be expected, as there is no single definition of sustainability, and many studies do not clearly define the type of sustainability assessed. Though the type must sometimes be inferred, the general types of sustainability examined in the evaluation literature include: sustainability of results, sustainability of funding, organizational sustainability, and environmental sustainability. To learn from the varied treatment of sustainability within the development evaluation literature, this systematic review of development evaluations examines: the extent to which the type of sustainability is defined, the types of sustainability examined, and the extent to which the evaluated projects are determined to be sustainable.


Presentation Abstract 4:

This case study situates the work of ‘doing’ evaluation within the organizational pressure to deliver results to donors. Offering a rich description of the evaluation system mandated by a large donor’s agricultural development project, it is based on thirty-five interviews with Evaluation Coordinators, Chiefs of Party, and Country Office staff in an African country and at donor headquarters. By perceiving evaluation systems as social actors able to influence the actions of professionals and organizations, we are better able to understand what is created by their presence and left in their wake. I argue that the realities of data collection, amidst indicator targets, undermine the very databank meant to serve ‘evidence-based decision making.’ These factors compound and create a situation where the organizational drive to prove impact at all levels results in a focus on easy-to-measure outputs rather than long-term impact and learning.


Theme: My presentation doesn't specifically relate to the theme
Audience Level: All Audiences

Session Abstract (150 words): 

Learning to Action across International Evaluation: A look into Global and Bilateral Learning