How does evaluation contribute to and influence program design?

Session Number: 1766
Track: Program Design
Session Type: Panel
Tags: Evaluation Use, Program Design, Rubrics Methodology
Session Chair: Michele Tarsilla [Independent Evaluator and Capacity Development Specialist - Independent Evaluation Consultant]
Presenter 1: Thomaz Kauark Chianca [Managing Partner - COMEA Relevant Evaluations]
Presenter 2: Alexey I Kuzmin [Director - Process Consulting Company]
Presenter 3: Michael Bamberger, Dr. [Independent Consultant]
Time: Oct 27, 2016 (02:00 PM - 02:45 PM)
Room: International South 8

Abstract 1 Title: Connecting evaluation to program design through rubrics: The case of the Moving Generation initiative
Presentation Abstract 1:

Thomaz Chianca is an independent international evaluation consultant with more than 20 years of experience evaluating initiatives across different content and geographical areas. Aparecida Lacerda is the general manager of the Professional Education Unit of the Roberto Marinho Foundation and has more than 20 years of experience as a program designer in the nonprofit sector, especially in education. They will discuss a case in which the design of an evaluation of a school-based educational program aimed at reducing physical idleness among children and adolescents actually helped clarify and improve the design of the initiative. They will explore why and how the approach used to design the evaluation – Rubrics Methodology – helped identify key aspects of the program’s implementation and impact expectations that had not been thought through thoroughly. The presenters will also address the methods used to elicit and facilitate discussions about how to tackle those issues.


Presentation 1 Other Authors: Aparecida Lacerda, Roberto Marinho Foundation, aparecidal@frm.org.br;
Rosalina Soares, Roberto Marinho Foundation, rosalina.soares@frm.org.br
Abstract 2 Title: Conducting evaluations that contribute to program design: A Utilization-Focused approach
Presentation Abstract 2:

According to the Utilization-Focused approach, evaluation can contribute to program design under the following conditions: (a) there are people who genuinely need that contribution, (b) those people have a clear vision of how the evaluation results and/or process will be used to contribute to program design, and (c) the intended uses of the evaluation by the primary intended users guide all other decisions made about the evaluation process. This presentation will explore how these conditions depend on three factors: (i) the stage of the program life cycle, (ii) the program’s level of adaptivity, and (iii) the nature of the expected contributions to program design. Each factor will be illustrated with examples from the presenter’s international evaluation practice. The presentation will conclude with recommendations on how to increase the likelihood that evaluations will be used for program design purposes and what competencies evaluators should develop to make that happen.


Abstract 3 Title: Factors limiting the ability of evaluation to contribute to program design
Presentation Abstract 3:

Dr. Bamberger has more than 40 years of experience as an evaluator and designer of programs in the international development arena. He has also published extensively on themes within this realm. In this session, he will explore some of the main factors that can limit evaluation’s ability to contribute to program design, including (1) the timing of bringing evaluators on board, (2) the evaluation’s focus, (3) contact between evaluators and designers, (4) resource constraints, (5) limited attention to process, (6) inflexibility in examining unintended outcomes, (7) the absence of initial or mid-term evaluations, and (8) the omission of equity and social exclusion considerations.


Audience Level: All Audiences

Session Abstract: 

Evaluation and planning have always been seen as intermingled functions. Traditionally, evaluations have been advocated as a way to contribute to program improvement (formative) and/or to inform strategic decisions about a program’s fate or the design of new programs (summative). In reality, however, when invited early on to plan an evaluation or a monitoring system for a new program, evaluators often struggle to fulfill their duty because of poor program design. In many situations, evaluators are not prepared to help staff improve program designs, even though evaluative thinking is probably the most critical tool in those situations. This session will shed light on how evaluators can become more helpful in improving the design of programs. We will present cases from different countries and contexts in which evaluations helped improve program design, illustrate strategies that contribute to such efforts, and discuss factors limiting the ability of evaluation to contribute to program design.