Evaluation 2017: From Learning to Action


Practice-Informed Theories of Valuing: Elaborating Evaluation Theory to Enhance Evaluation Methodology and Practice

Session Number: 2798
Track: Presidential Strand
Session Type: Panel
Tags: theories of valuing, valuation, values
Session Chair: Marvin C Alkin [Professor - UCLA]
Discussant: Michael Quinn Patton [Founder & Director - Utilization-Focused Evaluation]
Presenter 1: E Jane Davidson [Owner & Founder - Real Evaluation LLC]
Presenter 2: Brad Cousins [Professor Emeritus - University of Ottawa]
Presenter 3: Julian C King [Director, Julian King & Associates Limited - Kinnect Group]
Presenter 4: George Julnes [Professor, School of Public and International Affairs - University of Baltimore]
Presentation 4 Additional Author: Nick Hart [Director of the Evidence-Based Policymaking Initiative - Bipartisan Policy Center]
Time: Nov 08, 2017 (02:35 PM - 03:15 PM)
Room: Maryland C

Abstract 1 Title: Rubrics-Enhanced Evaluation: A Systematic but Responsive Hybrid of Criterial and Interpretive Evaluation Approaches
Presentation Abstract 1:

Evaluation rubrics methodology is a powerful, flexible, and theoretically grounded approach to responsive evaluation that meaningfully blends criterial and interpretive evaluation. Well-designed rubrics draw on formal and practical evidence, expertise, and experience to create contextually appropriate definitions of “how good is good”, for whom, when, and why. By making this framework explicit, the evaluation team can make the evaluative reasoning used in interpreting evidence systematic and transparent. Rubrics lend themselves to a syncretic blend of probative and experiential inference about the quality, value, and importance not just of the evaluand as a whole but, more importantly, of the granular nuances that pervade the findings. This presentation will explain how and when rubrics are not only practical and useful but deeply grounded in evaluation theory, helping to take that theory to the next level.

Abstract 2 Title: Leveraging the Joint Discovery of Value Through Collaborative Approaches to Evaluation (CAE)
Presentation Abstract 2:

‘Collaborative approaches to evaluation’ (CAE) is an umbrella term encompassing a wide range of evaluation approaches that share a common feature: evaluators working in partnership with program community members to jointly produce evaluative knowledge (Cousins & Chouinard, 2012; Cousins, Shulha & Whitmore, 2013). Shulha et al. (2016) recently developed and validated a set of evidence-based principles to guide CAE practice. The merit of these principles as leverage for the joint (evaluator/stakeholder) discovery of program value is the focus of this presentation. Two questions are of central interest: (i) What is the added value of bringing a CAE lens to the question of program valuation? and (ii) How can the CAE principles help answer the question, ‘How good is good enough?’, with respect to program interventions?

Abstract 3 Title: Cost-Benefit Analysis: The Pearls and Pitfalls of Monetary Valuing in Evaluation
Presentation Abstract 3:

In the last few decades, Cost-Benefit Analysis (CBA) has come into widespread use for assessing the net value of policies and programs. Among economic methods of evaluation, CBA is often regarded as the gold standard because it values both costs and consequences in monetary units, allowing their synthesis into a single, fungible indicator of efficiency. Like any method, however, CBA carries embedded assumptions and values that are not always made explicit. Consequently, CBA accentuates some considerations while reducing the visibility of others.

This presentation will address both the strengths and limitations of CBA. The argument will be made that CBA is a powerful method evaluators should make more and better use of, and that it needs to be handled with care to ensure it enhances the validity, credibility, and utility of evaluation findings in the public interest.

Abstract 4 Title: Reframing Approaches to Assessing Value in Evaluation: Steps to an Integrated Theory of Valuing
Presentation Abstract 4:

Shadish, Cook, & Leviton (1991, p. 37) delineated five components of evaluation (social programming, knowledge construction, valuing, knowledge use, and evaluation practice) and claimed that “Every comprehensive evaluation theory will be better if it explicitly describes and justifies each of these five components.” By far the least developed of the five is the theory of valuing (e.g., is the program “good”? Is it “better” than an alternative?). As a result, valuing in evaluation practice is disjointed: qualitative approaches to valuing tend not to draw on the rubrics other evaluators use for valuing, and these in turn are rarely connected with economic approaches to valuing. This presentation will connect these approaches by positioning them in terms of three basic distinctions. These distinctions will be elaborated into a theory of valuing that explains the strengths and limitations of the major approaches in serving different purposes in different contexts.

Theme: Learning to Enhance Evaluation
Audience Level: All Audiences

Session Abstract (150 words): 

Assessing the value of policies and programs (e.g., which policy alternative is “better”? What changes would “improve” the program?) is recognized as central to evaluation. However, little progress has been made in developing coherent theories of valuing, leaving evaluation practice under-informed about the strengths and limitations of different approaches in different contexts. It remains largely as Shadish, Cook, & Leviton (1991, p. 49) noted: “Nearly all the theorists in this book agree that evaluation is about determining value, merit, or worth, not just about describing programs. But few theorists do more.” This panel brings together six evaluators with substantial experience in different approaches to valuing (different preferred methodologies and evaluator roles), with the intent of identifying the evaluation purposes and contexts in which each approach is most useful. The goal is to share and build on the latest practice-informed theories of valuing, providing actionable guidance for enhancing evaluation practice.
