Evaluation 2016: Evaluation + Design


Advocacy as a Team Game: Methods for Evaluating Multi-Stakeholder Advocacy Efforts

Session Number: 1285
Track: Advocacy and Policy Change
Session Type: Panel
Tags: advocacy, dashboard, field building, methods, social media, social network analysis
Session Chair: Jewlya Lynn [CEO - Spark Policy Institute]
Presenter 1: Jared Raynor [Director of Evaluation - TCC Group]
Presenter 2: Rebecca Ochtera [Senior Researcher - Spark Policy Institute]
Presenter 3: Anne Gienapp [Senior Affiliated Consultant - ORS Impact]
Time: Oct 27, 2016 (08:00 AM - 09:30 AM)
Room: A707

Abstract 1 Title: Designing and using dashboards and group tracking tools
Presentation Abstract 1:

Arriving at agreement on what matters is a core aspect of designing an effective evaluation approach for multi-stakeholder advocacy efforts. This presentation will share examples of the process and results of establishing dashboards for several multi-stakeholder advocacy efforts, including the challenges of both arriving at a useful dashboard and then collecting the relevant data.
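
For illustration only, the sketch below shows one way a shared dashboard might be represented once partners have agreed on common indicators. The indicator names, targets, and partner reports are hypothetical and are not drawn from the cases in the presentation.

```python
# Minimal, illustrative sketch of a shared advocacy dashboard.
# All indicator names, targets, and partner reports are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Indicator:
    name: str                                     # agreed-upon measure
    target: float                                 # value partners agreed to aim for
    reports: dict = field(default_factory=dict)   # partner -> reported value

    def progress(self) -> float:
        """Share of the target reached across all reporting partners."""
        total = sum(self.reports.values())
        return total / self.target if self.target else 0.0


# Hypothetical dashboard with two indicators the partners agreed to track.
dashboard = [
    Indicator("media mentions of the policy issue", target=200,
              reports={"Partner A": 80, "Partner B": 45}),
    Indicator("legislative briefings delivered", target=12,
              reports={"Partner A": 3, "Partner C": 5}),
]

for indicator in dashboard:
    print(f"{indicator.name}: {indicator.progress():.0%} of target")
```

A structure like this keeps the negotiated indicators and each partner's reporting in one place, which is the kind of shared artifact the dashboard process aims to produce.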


Abstract 2 Title: Different techniques for mapping emergent and mature advocacy fields
Presentation Abstract 2:

Over the last three years, advocacy evaluators have come together with advocacy funders to explore a field-building approach to funding, building capacity, and evaluating advocacy efforts. The field-building approach includes a core set of dimensions (field frame, connectivity, skills and resources, composition, and adaptive capacity). This presentation will explore methods used to map four emergent fields (fields early in their development, with limited connectivity and shared identity) and one mature field. The mapping techniques explored include social network analysis and a variety of visualization techniques, including one using the Advocacy and Policy Framework. The presenter will explore the pros and cons of mapping fields by aggregating individual responses (e.g., inferring lobbying capacity from how many organizations self-report lobbying) versus self-assessments of overall field effectiveness (e.g., reports of the overall effectiveness of the field’s lobbying efforts).
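
As a purely illustrative sketch of the network-analysis side of field mapping, the example below builds a small graph of working relationships and computes two common connectivity measures. The organizations and ties are hypothetical, and the networkx library is an assumed tool rather than the one used in these cases.

```python
# Illustrative social network analysis of field connectivity using networkx.
# Organization names and ties are hypothetical.
import networkx as nx

# Each edge is a reported working relationship between two organizations.
ties = [
    ("Org A", "Org B"), ("Org A", "Org C"),
    ("Org B", "Org D"), ("Org C", "Org E"),
]
field_graph = nx.Graph(ties)

# Density gives a rough sense of how connected the field is overall;
# an emergent field typically shows low density and few central brokers.
print("Density:", nx.density(field_graph))

# Degree centrality flags the organizations that hold the field together.
centrality = nx.degree_centrality(field_graph)
for org, score in sorted(centrality.items(), key=lambda item: -item[1]):
    print(org, round(score, 2))
```

Measures like density and centrality are one way to distinguish an emergent field, with sparse ties and few brokers, from a mature field with dense, well-distributed connections.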


Abstract 3 Title: Tapping into Twitter data through machine learning
Presentation Abstract 3:

Multiple stakeholders involved in diverse communications efforts sought to create a stronger enabling environment for policy change in the education field. An evaluation explored how communications efforts contributed to changes in the volume and sentiment of field-level conversation, how key influencers were talking about relevant topics, and the extent to which the volume and sentiment of conversation had changed over time. To address these questions, the evaluators engaged a company that applied its machine learning application as part of a media analysis. This application used natural language processing and text analytics, combined with manual annotation, to categorize data. The machine learning algorithm detected patterns in language and used them to build a model that could be applied to a far larger data set than the evaluators could code manually. The presenter will share how machine learning was applied to Twitter data to address the priority evaluation questions.
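
The general workflow, training a model on a hand-annotated subset and then scoring a much larger corpus, can be sketched as follows. The tweets, labels, and scikit-learn pipeline are illustrative assumptions, not the vendor's actual application.

```python
# Sketch of the general approach: train a text classifier on manually
# annotated tweets, then apply it to a much larger, uncoded corpus.
# Tweets and labels below are hypothetical.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A small hand-coded training set: tweet text paired with a sentiment label.
annotated_tweets = [
    ("This reform will finally give teachers the support they need", "positive"),
    ("Another empty promise on education funding", "negative"),
    ("The new standards ignore what classrooms actually need", "negative"),
    ("Encouraged to see the district expand early learning programs", "positive"),
]
texts, labels = zip(*annotated_tweets)

# TF-IDF features plus logistic regression: a simple stand-in for the
# natural language processing and text analytics described above.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The fitted model can then score a far larger set of unlabeled tweets.
unlabeled = ["Great turnout at today's education rally", "Budget cuts again?"]
print(model.predict(unlabeled))
```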


Audience Level: Intermediate, Advanced

Session Abstract: 

In today’s complex advocacy environment, it is rare that a single organization can pursue goals on its own. Rather, organizations generally work together, sometimes in highly coordinated campaigns and sometimes in less formal networks. Consequently, evaluators must address not only questions about advocacy outcomes but also the particular roles and contributions of multiple partners that work directly together on some activities and in parallel on others. This session builds on last year’s well-attended and positively reviewed session, beginning with an introduction to the complex issues that must be considered when designing and deploying evaluation methods in multi-stakeholder environments. The speakers will then share specific methods in the context of case studies, including network analysis, field mapping, machine learning in media analysis, and consensus tracking tools and dashboards. The session will end with a discussion of when different methods apply and their strengths and limitations.

 


