Embedding Evaluation’s representations of programs in scientific understandings of “models”.
Session Number: 1056
Track: Systems in Evaluation
Session Type: Panel
Tags: accelerating evaluation theory, practice, assumptions, causality, conceptual framework, design thinking, model, model fidelity, theory-driven evaluation
Session Chair: Jonathan Morell [Owner - 4.669 Evaluation and Planning]
Presenter 1: Jonathan Morell [Owner - 4.669 Evaluation and Planning]
Presenter 2: Zenda Ofir [International Evaluation Specialist / Honorary Professor - Stellenbosch University]
Presenter 3: Huey T. Chen [Professor - Mercer University]
Time: Nov 14, 2019 (03:45 PM - 04:45 PM)
Room: CC M100 A
Abstract 1 Title: Appreciating the potential and inescapable distortions in the models evaluators use
Presentation Abstract 1:
Logic models or “theories of change” would better serve evaluators and Evaluation if the potential richness and inescapable distortions of models were better appreciated. This assertion will be explored by addressing: 1) the use of models that acknowledge different program theories; 2) the use of models for different purposes (e.g. prediction and explanation); 3) global versus local behavior; 4) types of relationships (e.g. and/or); 5) models as indicators of the state of our knowledge about a program; and 6) models as purposeful simplifications of reality. The presentation will assume that current usage of logic models is valuable and should continue, but that it would be worthwhile to consider other possibilities for how logic models can contribute to understanding program theory, devising methodology, interpreting data, and interacting with stakeholders.
Abstract 2 Title: The world is in crisis and our (logic) models are failing us
Presentation Abstract 2:
Our societies and ecosystems are in crisis. Evaluators have to think deeply about how best to contribute to solutions to the complex development problems the world faces today, from local to global levels. Yet the models that we use to guide our work fail in two important ways: (i) they do not interrogate the dominant values and narratives that underlie how we think about and evaluate change; and (ii) they do not focus explicitly enough on how to accelerate (large systems) change towards the world we want. This presentation will highlight how our conceptualization and design of logic models might have to change if we wish to remain useful as a profession and practice in the era in which we now find ourselves.
Abstract 3 Title: Advancements in the Action Model/Change Model Schema for Serving Program Accountability and Improvement Needs
Presentation Abstract 3:
There is a tradeoff between a model’s complexity and its usefulness. If a model is too simple, it provides minimal information. On the other hand, if a model is too complicated, few evaluators will apply it. The action model/change model schema is more complicated than logic models, but its applications have demonstrated many merits. Evaluators use the schema to assist stakeholders in making explicit the prescriptive assumptions (what actions must be taken?) and descriptive assumptions (what change processes are expected to be generated?) underlying their programs. Besides guiding evaluation activities, the schema has been used to address challenging issues such as how to enhance credibility in assessing real-world programs, how to improve an evaluation’s generalizability, and how to make a holistic diagnosis of a program’s vulnerable areas.
Audience Level: All Audiences
Session Abstract (150 words):
Draw a pretty good picture of how a program is thought to operate and what it is meant to accomplish. Take the picture seriously. That advice can get you a long way toward doing good evaluation. But a deeper look will reveal that the “picture” is really a “model”, in the scientific and epistemological sense of the term; and models have deep connections with understanding how the world works, how to find out how the world works, and the limits on what one can know. This panel will explore those deeper connections with an eye toward turning logic models into powerful tools for more insightful data interpretation. Topics addressed will include models as simplifications, as theories, as expressions of values, and as guides to methodology.