How can internal advocacy evaluators effectively integrate learning and accountability? Shifting the culture in advocacy organizations.

Session Number: 1621
Track: Advocacy and Policy Change
Session Type: Panel
Tags: Accountability, advocacy, Advocacy and policy change, advocacy evaluation, Capacity Development, Designing Evaluation Systems, Evaluation advocacy, evaluation and strategic planning, Learning systems, organizational learning and evaluation, participatory learning and action, performance-based assessment, strategic learning
Session Chair: Carlisle Levine [President and CEO - BLE Solutions, LLC]
Discussant: Julia Coffman [Director - Center for Evaluation Innovation]
Presenter 1: Julia Coffman [Director - Center for Evaluation Innovation]
Presenter 2: Emily Boer Drake [Managing Director - Learning for Action]
Presenter 3: Zehra Mirza [Senior Manager, Impact and Learning - Amnesty International USA]
Presenter 4: Andrew Wells-Dang [Advocacy MEL Manager - CARE USA]
Session Facilitator: Zehra Mirza [Senior Manager, Impact and Learning - Amnesty International USA]
Time: Nov 16, 2019 (08:00 AM - 09:00 AM)
Room: CC M100 J

Abstract 1 Title: I’m One Person. Where Do I Start? Building an Advocacy Evaluation Culture from the Inside Out.
Presentation Abstract 1:

Advocacy organizations that dedicate staff to evaluation and learning are both rare and celebrated. Internal evaluators generally have more opportunities than external evaluators to drive sustained change in how advocates get and use data. But both the challenge and the opportunity can be overwhelming, since internal evaluation staff are typically few in number. While advocacy organizations pose unique challenges for evaluation and learning, such as staff with limited carrying capacity, unpredictably intense periods of activity, and hard-to-measure outcomes, they also have built-in evaluative assets, such as reflective routines and adaptive mindsets. This session will discuss how internal advocacy evaluators, even where there is only one, can both capitalize on what already exists and build stronger evaluative habits, routines, and data streams. It will offer ideas for where to start and how to shape a strategy for supporting evaluation and learning with advocates.


Abstract 2 Title: Assessing Collaborative Performance of Advocacy Efforts
Presentation Abstract 2:

Advocacy efforts involving multiple organizations have become increasingly common as the nonprofit, public, and private sectors seek to solve deep-seated, structural issues facing our communities. Collaboration is by its nature complex: creating alignment within a single organization is challenging, let alone across a combination of organizations with different cultures and missions. Yet the imperative for organizations to advocate collaboratively on challenging issues is not going away. So how do advocates involved in complex collaborations evaluate their progress and growth? Learning for Action has developed a framework for collaborative performance that is customizable, enables self-diagnosis, and creates a roadmap for identifying and making the changes needed to support strong performance. The presenter will introduce the framework, describe the five critical dimensions of collaborative performance (vision, design, governance, learning, and sustainability), and provide an example of how it was successfully used in the context of an advocacy collaborative.


Abstract 3 Title: How do we build organizational capacity for evaluation and learning? Key learnings from an internal evaluator.
Presentation Abstract 3:

Integrating evaluation into the life of an organization is not easy. Internal evaluators must navigate sensitivities and cultural politics, build trust among key leaders, and develop a shared understanding of ‘impact.’ Assessing capacity needs in evaluation and learning is often the first step toward securing buy-in and developing a framework that is versatile and of pragmatic value. This presentation will share how a theory of change for organizational impact was developed: one specific to human rights programming and advocacy campaigns, yet broad enough to apply to all organizational units, not exclusively program teams. Key learnings will be shared on introducing mechanisms that cultivate evaluation capacity and advance existing evaluation processes within an organization. The session will also delve into outcome mapping: why it was selected to track and monitor organizational performance, and how it has been received among staff.


Abstract 4 Title: Learnings from a Flawed MEL System
Presentation Abstract 4:

Following a major advocacy win, we quickly realized that our MEL systems weren’t equipping us with the information we needed: we lacked the data to back our claims about CARE’s leadership and contribution. We began improving our measurement systems so that we could communicate our work effectively, learn and identify strategic ways forward, and help staff use data to strengthen and support their work. We considered the following questions: What aren’t we capturing? What are we capturing that we don’t need? How do we share the data back with staff so they value data input? How do we use data to improve our strategy? By testing new models and approaches, CARE identified a few key tools and approaches that meet our organizational needs, are manageable for staff, and add value to staff’s work. During this session, we will share our lessons learned and best practices.


Audience Level: All Audiences

Session Abstract (150 words): 

The current political context has set the stage for increased advocacy and has raised the importance of participating in civil discourse. As advocacy organizations elevate their public presence, they need to understand their effectiveness while also adapting to a rapidly shifting context. Many challenges, however, accompany their evaluation efforts. For example, advocacy programming is driven by deeply ingrained routines and habits, staff tend to be time- and resource-pressed, and an emotionally charged culture can act as a barrier to evidence-based reflection and decision-making. This session will discuss considerations and ideas for integrating measurement and learning into advocacy processes, specifically from an internal evaluator’s point of view. The session will delve into frameworks, methodologies, and pragmatic strategies that help reflection and learning stick and empower staff to own their performance assessment and internalize the value of monitoring and evaluation.