Evaluation 2018: Speaking Truth to Power


Evaluating Social Impact at the Intersection of the Technology and Nonprofit Sectors

Session Number: 1595
Track: Social Impact Measurement
Session Type: Panel
Tags: nonprofits, philanthropy, technology
Session Chair: Eric Barela [Director, Measurement & Evaluation - Salesforce.org]
Presenter 1: Eric Barela [Director, Measurement & Evaluation - Salesforce.org]
Presenter 2: Dana McCurdy [Senior Strategist, Evaluation - The Wikimedia Foundation]
Presenter 3: Jason Ravitz, Ph.D. [Principal Consultant - Evaluation and Research Professionals]
Presentation 1 Additional Author: Morgan Buras-Finlay [Measurement & Evaluation Manager, Technology & Impact - Salesforce.org]
Time: Nov 02, 2018 (02:15 PM - 03:15 PM)
Room: CC - 10

Abstract 1 Title: Maintaining both coherent processes and mindsets when evaluating social impact in the tech sector
Presentation Abstract 1:

Eric Barela is currently the Measurement & Evaluation Senior Manager at Salesforce.org, where he is responsible for building the structures and processes needed to measure and evaluate the social impact of the organization’s global philanthropic portfolios. One of Eric’s challenges is maintaining coherence among the different portfolios, which position the organization as a hybrid of a nonprofit social enterprise, a grantmaking organization, and a corporate social responsibility arm. These portfolios involve a wide variety of stakeholders whose needs must be considered. In addition to ensuring that measurement and evaluation systems coherently report on the organization’s social impact, Eric has built, and must now maintain, an organizational mindset around the necessity and utility of evaluation findings and how they can contribute to a more effective organization.


Abstract 2 Title: Is Big Data getting in the way of little data?
Presentation Abstract 2:

Dana McCurdy is currently an Evaluation Strategist at the Wikimedia Foundation, where she builds the capacity of her colleagues and of the many volunteers in the Free Knowledge Movement to evaluate their work and measure their social impact. The Foundation’s staff is split between those traditionally found in the corporate technology sector (engineers, product managers, data scientists, etc.) and those typically found in the nonprofit sector (program managers, program officers, community liaisons, fundraisers, etc.). The biggest challenge Dana grapples with is convincing colleagues of the value of measuring social outcomes when there is such an abundance of online behavioral data, such as page views, page edits, bytes added, likes, and comments.


Abstract 3 Title: Learning from evaluations and the programs we fund
Presentation Abstract 3:

Jason Ravitz is Senior Program Manager in University Relations at Google. He leads research communications and coordinates evaluations for some of Google’s largest social impact and technology education programs. One big challenge right now is creating models for evaluation in the relatively unmapped university-corporate relations space. Another is recognizing that evaluations that help programs learn and improve can be just as mission-critical as evaluations that measure and communicate impact. Investments in evaluation can add value for program operations and outcomes beyond accountability-focused outcome metrics. It is important to be clear about who is expected to use evaluations and how evaluations can be designed so that all stakeholders’ needs are addressed. (This raises questions about speaking truth to power, the conference theme, when the audience or the questions being asked seem unduly limited.)


Audience Level: All Audiences

Session Abstract (150 words): 

This panel will explore the unique challenges facing evaluators as technology-focused nonprofits and initiatives seek to measure and understand social impact. Each panelist works as an internal evaluator at a technology company leading nonprofit initiatives and will share their experience and perspective in this relatively new space for the field of evaluation. Topics will include the wide range of power dynamics among technology stakeholders, implications for evaluation methodology in the technology sector, diversity and inclusion efforts at hybrid organizations, and variations in data ownership. Panelists will give brief presentations on their perspectives and work, followed by a Q&A and group discussion.


