What's happening with Tech in Evaluation? A state-of-the-field baseline to launch discussions about the future of evaluation

Session Number: 2519
Track: Presidential Strand
Session Type: Panel
Tags: Artificial Intelligence, big data and evaluation, Blockchain, Internet of Things, Machine Learning, research on evaluation, systematic review, Technology and Evaluation
Session Chair: Linda Raftree [Independent Consultant, Organizer of MERL Tech]
Discussant: Michael A. Harnar [Assistant Professor - Western Michigan University]
Presenter 1: Zach Tilton [Doctoral Research Associate - Western Michigan University]
Presenter 2: Michael Bamberger, Dr. [Independent Consultant]
Presenter 3: Kerry Bruce [Chief Executive Officer - Clear Outcomes LLC]
Presentation 1 Additional Author: Michael A. Harnar [Assistant Professor - Western Michigan University]
Presentation 1 Additional Author: Paul Perrin [Director of Monitoring and Evaluation (Initiative for Global Development) - University of Notre Dame]
Presentation 1 Additional Author: Linda Raftree [Independent Consultant, Organizer of MERL Tech]
Presentation 2 Additional Author: Kecia Bertermann [Director, Learning and Impact - Luminate]
Presentation 2 Additional Author: Alexandra Robinson [Director - Evidence, Learning and Impact - Moonshot Global]
Presentation 2 Additional Author: Grace Lyn Higdon
Presentation 3 Additional Author: Valentine Gandhi Bavanirajan [Head, Research, Evaluation, Innovation and ICT - The Development Cafe / NIRAS Group]
Presentation 3 Additional Author: Joris Vandelanotte [Deputy Director Results and Measurement]
Presentation 3 Additional Author: Christian Marie Marin [Program Associate - Clear Outcomes]
Time: Nov 16, 2019 (10:15 AM - 11:00 AM)
Room: CC 200DE

Abstract 1 Title: MERL Tech: A scoping review for orienting evaluation pathfinders
Presentation Abstract 1:

Technology-enabled monitoring, evaluation, research, and learning (MERL Tech) is a dynamic swirl of practitioners, researchers, technologists, and evaluators working to improve practice-based fields by integrating technology into their transdisciplinary, evidence-based work. While those who are aware of this space convene, share insights, and reflect, to date there has been no attempt to systematically synthesize the emerging evidence base for this swirl of professional activity. This rapid scoping review is a systematic evidence synthesis that seeks to identify gaps in the knowledge base, clarify key concepts, report on the types of evidence that inform practice, and lay the foundation for synthesizing conclusions and lessons learned. From this evidence base we can suggest a set of emerging recommendations and practices that can help evaluators better integrate technology into their work, as well as the wider support systems and capacity-building efforts that would help evaluators and organizations get up to speed.


Presentation 1 Other Authors: Soham Banerji, Gretchen Bruening, Hanna Foster, Jack Gordley, Michele Behr
Abstract 2 Title: Big Data and Evaluation: Springboarding from our current state to our desired future state
Presentation Abstract 2:

Over the past five years, Big Data has gone from a buzzword heard in specialized sectors or uttered by data scientists to a term widely referred to in evaluation circles. But as one speaker infamously put it, “It’s like teen sex: everyone’s talking about it, but no one is really doing it.” And to extend the metaphor further, it has been unclear whether those who are actually doing it are doing it well. The recent “state of the field” sub-report on Big Data explores what evaluators and socially focused organizations are doing in this area and provides a more nuanced exploration of how big data is used for evaluation. From this baseline of what is actually happening, we can begin to identify gaps, whether in capacity or in ethics, and explore how the AEA can support evaluators, and those whom they evaluate, to use big data in their work better and more responsibly.


Abstract 3 Title: Reports from Evaluation's Frontier: lessons from emerging tech-enabled evaluations
Presentation Abstract 3:

What findings from the frontier of technology-enabled monitoring, evaluation, research and learning can give us leading indications of the shape of the evaluation future to come? Drawing on an exploratory study built around rich case studies, this “state-of-the-field” sub-report details emerging technologies coming to the fore in digital evaluation and sets the stage for a conversation on sustainable pathways to the future of evaluation. Lessons from the cases will be brought into conversation with one another, including findings from projects that employed drones and satellites for social impact assessment; internet-of-things sensors for real-time monitoring and evaluation; distributed ledgers for data collection, analysis, and storage; and machine learning and natural language processing for text analysis and evaluation. Audience members can expect to be apprised of the latest evidence and insights on what tech-enabled evaluation has to offer evidence-based practice and to participate in a discussion of where these new technologies may lead the field of evaluation.


Audience Level: All Audiences

Session Abstract (150 words): 

So much has changed over the past five years! Rapid advances in technology and data have altered the field to the point where most of us can’t imagine conducting an evaluation without the aid of digital devices and digital data. The rosy picture of the digital data revolution and an expanded capacity for data-driven decision-making has been clouded, however, by legitimate concerns about how new technologies, devices, and platforms — and the data they generate — can lead to unintended negative consequences or be used to harm individuals, groups, and societies. This session will offer a high-level overview of recent research on the “State of the Field” of technology in monitoring, evaluation, research and learning and lead into a discussion about what the AEA and its members can do to carve out future pathways for responsible and inclusive tech-enabled evaluation practice.