Evaluation 2018: Speaking Truth to Power


Working with Assumptions: Understanding How Evaluators Make Decisions About Capturing Reality/Context

Session Number: 1852
Track: Systems in Evaluation
Session Type: Panel
Tags: Assumptions, Consideration of systems, context, values
Session Chair: Katrina L Bledsoe [Principal Consultant - Bledsoe Consulting]
Discussant: Jonathan Morell [Director of Evaluation - Syntek]
Presenter 1: Guy O'Grady Sharrock [Senior Advisor for Learning - Catholic Relief Services]
Presenter 2: Godfrey Senkaba, MA [Design, Monitoring and Evaluation Manager - World Vision United States]
Presenter 3: Winston Allen [Senior Evaluation Specialist - USAID]
Presentation 1 Additional Author: Apollo M Nkwake, Credentialed Evaluator (CE) [International Technical Advisor, Monitoring and Evaluation - Education Development Center]
Presentation 2 Additional Author: Apollo M Nkwake, Credentialed Evaluator (CE) [International Technical Advisor, Monitoring and Evaluation - Education Development Center]
Time: Nov 02, 2018 (08:00 AM - 09:00 AM)
Room: Hilton - Veterans Meeting Room B

Abstract 1 Title: Working with assumptions: Preliminary results from a survey in CRS

Presentation Abstract 1:

Assumptions permeate all program design, planning, and evaluation activities, especially in complex operating contexts. They can be a resource for, or a risk to, the success of programs and the validity of evaluations. Whether an assumption functions as a resource or a risk depends on the extent to which it is understood and validated. Unexamined and unjustified assumptions are the Achilles heel of development programs. Having conducted a series of 'evaluative thinking' workshops in Ethiopia, Malawi, Zambia and elsewhere, and facing growing demands from donors for more robust, evidence-based theories of change, CRS is committed to improving the capacity of program staff to surface and carefully consider the assumptions that underpin their interventions.

On this basis, CRS conducted a survey to explore how program and MEAL (monitoring, evaluation, accountability and learning) staff comprehend and work with assumptions. Findings from this survey will inform CRS’ capacity-strengthening activities intended to improve evaluative thinking and, ultimately, program impact and quality assurance. This presentation highlights emerging findings on the types of program and evaluation assumptions considered most crucial, the tools commonly used to examine assumptions, and reflections on how to improve assumptions-aware program design, monitoring, evaluation, accountability and learning.


Abstract 2 Title: Working smartly with assumptions in programs: Lessons from World Vision

Presentation Abstract 2:

World Vision surveyed field staff on working with assumptions in program management. The survey was aimed at guiding organizational support to program staff in rigorously examining and appropriately addressing the assumptions they encounter during the design, monitoring and evaluation (DME) of programs. The presentation will discuss the survey results, highlighting program staff knowledge of and experiences with assumptions, practical examples of the type and nature of assumptions encountered, and lessons on what works in dealing with assumptions to improve the overall quality and effectiveness of programs. We will highlight differences, if any, between program assumptions and the personal assumptions that program staff bring to DME, and how these influence program quality. World Vision will use the results in many ways, including revising current program DME frameworks and guidance to ensure clarity in the identification and use of assumptions, developing methods and tools for monitoring program assumptions, and integrating the examination of assumptions into the evaluation process. World Vision will also collaborate with peer organizations and the broader evaluation community to further develop guidance on program assumptions.


Abstract 3 Title: Working with assumptions in USAID’s evaluations: Lessons and prospects

Presentation Abstract 3:

A key principle put forward by USAID evaluation policy is that evaluations should be integrated into the design of strategies, projects, and activities. This includes planning for evaluation and identifying key evaluation questions at the outset. Evaluation questions framed at this stage will therefore be based mostly on the assumptions made with regard to the theory of change informing the design of the strategy, project, or activity. However, several practical realities of project implementation have direct implications for planning and designing evaluations, including for the assumptions on which the evaluation questions are based. Conducting an evaluability assessment prior to designing an evaluation provides an opportunity to explore whether the assumptions underlying the planned evaluation questions are still valid, yet USAID rarely conducts these assessments before designing the actual evaluations. Evaluability assessments can help determine (a) whether planned evaluation questions are still valid, (b) whether the reality of project interventions will be able to produce the information required by the evaluation, and (c) what adaptations will need to be made to the evaluation design to fit the realities of project implementation. Effective evaluability assessments may also contribute to increased stakeholder involvement in the evaluation process, as well as increased understanding of the culture and context in which the project is being implemented, informing various elements of the evaluation design. This paper explores the prospect of integrating evaluability assessment into the program cycle to improve the quality and design of USAID evaluations.


Audience Level: All Audiences

Session Abstract (150 words): 

One of the major reasons evaluation often fails to meet stakeholder needs is that the interaction between evaluators and stakeholders, the structure of the evaluation design, and the interpretation of data are insensitive to the consequential assumptions and values of both parties. We are under no illusion that this problem can be solved outright. What we do believe is that evaluators need to be more aware of the issue, and that they can and should make wiser and more informed choices about assumptions and how these are represented within the evaluation. This session explores how evaluators recognize and appreciate their assumptions, how they may (or may not) make wise choices about them within their evaluation designs, and the implications for constructive dialogue with program designers. We will address issues such as complex behavior in program operation and outcomes, values, and cultural responsiveness.
