AEA is excited to present 16 workshops at Evaluation 2025: Engaging Community, Sharing Leadership. Register to secure your spot; these workshops are offered at an additional cost.
Seats are limited for these workshops. All times listed below are in Central Time (CT).
Monday, November 10
Workshop 1141: DEVELOPMENTAL EVALUATION: Community-based, Shared Leadership Principles of Engagement (Day 1 of 2)
Speakers: Michael Patton; Charmagne Campbell-Patton
9:00 a.m. - 4:00 p.m.
This is a two-day workshop spanning November 10-11.
Developmental Evaluation guides adaptive action, innovative initiatives, and systems change in complex, dynamic environments. The essentials of this approach will be presented, examined, and applied. Participants will learn to distinguish Developmental Evaluation from other evaluation approaches, with special attention to community-engaged, shared leadership applications based on utilization-focused and co-creation principles. The course will give special attention to the relevance and implications of evaluation adaptability and agility in times of uncertainty and turbulence, such as the global pandemic, the accelerating climate emergency, and the struggle for resilience in the face of policies that oppose equity and sustainability. The course will cover the niche, nature, and principles of developmental evaluation; purposes and applications at local and international levels; the particular challenges, strengths, and weaknesses of this approach; essential principles and practices for designing and conducting developmental evaluations; case examples; and use of the new ADAPT framework.
Tuesday, November 11
Workshop 1141: DEVELOPMENTAL EVALUATION: Community-based, Shared Leadership Principles of Engagement (Day 2 of 2)
Speakers: Michael Patton; Charmagne Campbell-Patton
9:00 a.m. - 4:00 p.m.
Please see the Day 1 description above. This is a two-day workshop spanning November 10-11.
Workshop 1063: Project Management for Evaluators: Enhancing Community Engagement and Leadership in Evaluation Projects
Speaker: Marcel Chiranov
9:00 a.m. - 4:00 p.m.
Evaluators frequently work within the constraints of time, budget, and stakeholder expectations, yet many lack formal training in project management—a discipline designed to optimize resources, mitigate risks, and ensure successful project outcomes. Evaluations themselves are complex projects that require careful planning, execution, and adaptive management. This workshop demonstrates how integrating project management principles into evaluation practice enhances efficiency, stakeholder engagement, and the overall impact of evaluation findings.
The need for project management in evaluation has become increasingly clear, especially in complex, multi-stakeholder environments such as international development, public policy, and social impact. USAID’s Automated Directives System (ADS) 201 emphasizes the importance of structured evaluation planning and adaptive management, aligning well with project management methodologies like Agile and risk-based planning. Similarly, OECD-DAC’s evaluation principles stress the necessity of strategic planning and stakeholder engagement—both core elements of project management.
Michael Quinn Patton’s (2010) work on Developmental Evaluation highlights the need for flexibility and iterative learning, a concept strongly aligned with Agile project management. The rapid evolution of programs in response to real-world conditions calls for evaluators to adopt project management tools such as Gantt charts, work breakdown structures (WBS), risk matrices, and stakeholder mapping techniques to ensure that evaluations remain relevant, timely, and actionable.
Key benefits of integrating project management into evaluation:
- Better planning & execution: Evaluation projects, like any other projects, require a structured approach to ensure alignment with strategic goals. Project management training equips evaluators with tools to define scope, set realistic timelines, and allocate resources efficiently.
- Improved risk management: Evaluations often face challenges such as limited access to data, shifting stakeholder priorities, and external disruptions. A risk-based approach helps evaluators identify potential pitfalls early and develop contingency plans, reducing disruptions and increasing evaluation reliability.
- Enhanced stakeholder engagement: Project management frameworks offer systematic stakeholder mapping and engagement strategies, improving communication and ensuring that evaluation findings are used effectively for decision-making.
- Agile & adaptive learning: Evaluation is not just about reporting but also about facilitating real-time learning and adaptation. Agile project management principles provide practical strategies for iterative feedback loops, enabling evaluation teams to adjust approaches dynamically rather than waiting until the final reporting phase.
- Increased efficiency & impact: By integrating project management principles, evaluators can reduce inefficiencies, improve team coordination, and deliver timely, actionable insights to decision-makers, thereby enhancing the strategic value of evaluations.
Conclusion
Evaluation professionals must move beyond traditional methodologies and embrace structured project management approaches to navigate today’s complex evaluation landscapes effectively. This workshop provides hands-on, interactive training to equip evaluators with practical project management tools and techniques. Participants will engage in case studies, stakeholder analysis exercises, and risk management planning to develop immediately applicable skills for their evaluation work.
This session is particularly valuable for M&E specialists, evaluation consultants, and program managers looking to enhance their ability to deliver high-quality, high-impact evaluations. By the end of the workshop, participants will leave with concrete strategies to plan, execute, and manage evaluations more effectively, ensuring better outcomes for the organizations and communities they serve.
Workshop 1075: Dabbling in the Data: Hands-On Data Analysis Activities for Teams
Speaker: Corey Newhouse
9:00 a.m. - 4:00 p.m.
As evaluators, we are at our best when interest holders use our findings to get better at what they do; meaning making with data is a powerful way to achieve this goal. However, data analysis can seem a daunting task for many in mission-driven organizations, one requiring specialized knowledge and years of training. This can make it difficult for evaluators to involve teams in analysis, limiting their ability to meaningfully engage interest holders. Public Profit wrote Dabbling in the Data: A Hands-On Guide to Participatory Data Analysis to give evaluators, and the folks they work with, a jumpstart in interpreting data collaboratively.
In this professional development workshop, participants will get substantial hands-on experience with Dabbling activities, ranging from simple "get to know your data" activities to more complex methods. We'll discuss the ways in which participatory data analysis promotes more rigorous evaluation practice by engaging interest holders in meaning-making. Finally, we'll explore the scenarios in which participatory data analysis makes the most sense, and share facilitation tips for different settings. Participants will start their own Dabbling action plan during this workshop.
Session participants will receive a complimentary copy of the Dabbling guide, along with detailed planning worksheets.
Workshop 1065: Transformative Mixed Methods Evaluation: Commitment to Equity, Community Engagement, and Shared Leadership
Speaker: Donna Mertens
9:00 a.m. - 4:00 p.m.
AEA has made a commitment to diversity, equity, inclusion, and belonging. This workshop addresses the need to act on that commitment by exploring the transformative framework, which evaluators can use to design evaluations that address inequities in the world. Transformative mixed methods designs are explicitly constructed to serve this purpose. The transformative epistemological assumption focuses directly on challenging existing power structures and engaging with communities in culturally responsive ways. This workshop is designed to teach the use of mixed methods for transformative purposes to better address the needs of members of marginalized communities, such as women, people with disabilities, people living in poverty, racial/ethnic minorities, sexual minorities, and religious minorities. Participants will learn how to use a transformative lens to identify the aspects of culture and societal structures that support continued oppression, and how to apply mixed methods designs to contribute to social transformation. Interactive learning strategies will be used, including whole-group discussion and small-group work applying the design of a transformative mixed methods evaluation to a case study.
My decision to propose a full-day workshop on this topic is based on feedback from previous presentations at AEA and elsewhere, in which participants indicated that they needed more time to apply the concepts of the transformative paradigm. (I previously presented this as a half-day workshop.) Given the political climate, the workshop will provide evaluators with a space to consider the consequences of policy changes that have brought issues of racism, sexism, ableism, and homophobia into the spotlight. There are other opportunities for evaluators to learn about generic mixed methods approaches, but my passion and expertise is in transformative mixed methods. Mixed methods design has become more sophisticated and has developed far beyond the idea of combining focus groups with surveys. The transformative approach to mixed methods is recognized as one of the major frameworks for conducting mixed methods studies. It is the only framework that starts with the ethical assumption that our evaluation work needs to explicitly address issues of discrimination and oppression, and that we can provide a basis for transformative actions that increase justice in the world.
Workshop 1098: Approaches to Evaluating Programs
Speakers: Lyssa Becho; Bianca Montrosse-Moorhead; Daniela Schroeter
9:00 a.m. - 4:00 p.m.
Credible and useful evaluations require selecting appropriate evaluation models or approaches. Research has shown that choosing an approach can be challenging for various reasons. This course will make these decisions more manageable and transparent. Facilitators will cover essential concepts in evaluation and focus on the characteristics that distinguish evaluation approaches in practice. Participants will learn about multiple evaluation approaches using the Garden of Evaluation Approaches, an empirically based framework published in several evaluation journals. This framework describes the roles of values, valuing, social justice, context, use, engagement, and power dynamics in evaluation. The course combines interactive lectures, hands-on exercises, and case-based applications to ensure a dynamic and engaging learning experience that helps participants put what they learn into practice. Participants will also reflect on their own evaluation practice, preferences, and assumptions. Using the Garden of Evaluation Approaches, participants will create their own personal evaluation practice flower. This process encourages practitioners to think critically about their implicit choices when designing evaluations and to visualize their practice in a new way.
This course is intended for graduate students, early career scholars and practitioners, experienced evaluators and researchers, and evaluation commissioners interested in updating their evaluation knowledge and skills. Participants would benefit from having a basic understanding of research design, but it is not a prerequisite. By the end of the course, attendees will be able to (a) describe the unique dimensions of evaluation practice, (b) apply different evaluation approaches in practice, and (c) judge the usefulness of various evaluation approaches in different contexts.
Participants should bring a laptop or tablet. All necessary materials, including case studies and handouts, will be provided.
Workshop 1137: Consulting 101: An introductory workshop for evaluators who want to start consulting practices
Speaker: Matthew Feldmann
9:00 a.m. - 4:00 p.m.
Independent consulting. Side hustle. Entrepreneur. Small business owner. Evalpreneur. If these terms resonate with you and your goals, then this workshop is for you. More than 20% of AEA members are independent consultants or have independent consulting side jobs. This workshop will provide you with the key understandings needed to launch an independent consulting practice, including niche identification, marketing approaches, organizational structures, and finances. Laura Keene and Matt Feldmann both have thriving consulting practices and will share their insights on how you can develop yours through valuable samples, worksheets, and insider tips. If you need some help getting started with consulting, this is the place for you.
Wednesday, November 12
Workshop 798: PARADISE BY THE DASHBOARD LIGHT: A Crash Course in Power BI
Speaker: Joe Travers
8:00 a.m. - 10:45 a.m.
Curious about Microsoft's Power BI data dashboard software but not sure how to learn it easily and use it with your evaluation data? Power BI is extremely powerful, but can be difficult to learn when you’re first starting out.
This workshop takes you from complete novice to complete confidence in turning data into a dashboard: connecting to data, cleaning data (when needed), making charts and visuals, and building a dashboard that immediately answers the questions your report audience has about the data.
This workshop is 100% hands-on! We'll all make a simple (and beautiful) dashboard with some sample evaluation survey data together.
While this workshop has appeared at AEA before, it improves every year by incorporating feedback from hundreds of past participants!
Participants MUST have a laptop with a recent version of Power BI Desktop installed. Power BI is a Windows-only program. It can be downloaded for free from https://powerbi.microsoft.com/en-us/desktop/ or the Microsoft Store. Workshop data will be provided to participants before the conference (if possible) or at the workshop itself.
Workshop info and material also available at https://traversdata.com/aea2025/.
Workshop 1151: Unleashing the Power of Power BI: Transforming Evaluation Data into Actionable Insights
Speakers: Kim Ho; Cathy Lessesne; Michele Sadler; Amy Shim; Leighton Vila
8:00 a.m. - 10:45 a.m.
Power BI is a dynamic, user-friendly data visualization platform used widely across disciplines to yield actionable insights. For evaluators, this tool can be leveraged to connect to various data sources and visualize data to effectively evaluate a given program or service. This interactive workshop is designed for first-time users looking to transform evaluation data into impactful data visualizations. The workshop will include: 1) an overview of the Power BI software and its functionalities; 2) use cases showing how Power BI can support evaluation efforts; 3) hands-on practice cases using Power BI; and 4) group discussions around the effective use of Power BI in program evaluations. Evaluation concepts are interwoven throughout the workshop, such as defining the purpose, users, and use of the dashboard; defining evaluation questions and data needs; and identifying the visualizations that work best to answer evaluation questions.
Workshop 1067: Empowerment Evaluation: A Community Engagement Approach
Speaker: David Fetterman
8:00 a.m. - 10:45 a.m.
Empowerment evaluation is an interest holder involvement approach to evaluation aimed at learning and improvement. Empowerment evaluations help people learn how to help themselves and become more self-determined by learning how to evaluate their own programs and initiatives. Key concepts include the critical friend (an evaluator who helps guide community evaluations), cycles of reflection and action, and a community of learners. Principles guiding empowerment evaluation range from improvement to capacity building and accountability. The basic steps of empowerment evaluation are: 1) mission: establishing a unifying purpose; 2) taking stock: measuring growth and improvement; and 3) planning for the future: establishing goals and strategies to achieve objectives, as well as credible evidence to monitor change. An evaluation dashboard is used to compare actual performance with quarterly milestones and annual goals. The evaluator's role in an empowerment evaluation is that of a coach or facilitator, since the group is in charge of the evaluation itself. The workshop is open to colleagues new to evaluation as well as seasoned evaluators. It highlights how empowerment evaluation produces measurable outcomes, with social justice-oriented case examples ranging from eliminating tuberculosis in India to fighting for food justice throughout the United States. Additional examples include empowerment evaluations conducted with high-tech companies such as Google and Hewlett-Packard, as well as work conducted in rural Arkansas and squatter settlements in South Africa. Employing lectures, activities, demonstrations, and discussions, the workshop will introduce the theory, concepts, principles, and steps of empowerment evaluation, along with technological tools of the trade. (See the TED Talk about empowerment evaluation for more details.)
Workshop 1093: Beyond Do No Harm: Implementing Trauma and Resiliency Informed Accessible Evaluation
Speakers: Tasha Parker; Dulcinea Rakestraw
8:00 a.m. - 10:45 a.m.
Evaluation practitioners increasingly recognize that traditional approaches may unintentionally perpetuate harm, particularly when working with communities experiencing historical trauma and ongoing inequities. While "do no harm" has long been a foundational principle, today's complex challenges demand approaches that actively contribute to healing, equity, and liberation.
This interactive workshop introduces the Trauma and Resiliency Informed Equitable Evaluation and Approaches (TRIEE-A) framework, an innovative methodology that bridges trauma science, resilience theory, and justice-centered evaluation practices. Developed through years of field implementation across diverse contexts, TRIEE-A provides evaluators with practical strategies for transforming every phase of the evaluation process.
Participants will explore how trauma manifests across ecological systems and impacts evaluation engagement, examining both individual and collective dimensions. Through case studies and interactive exercises, the workshop demonstrates how to integrate TRIEE-A's six core principles: Safety; Trustworthiness & Transparency; Liberation, Voice & Choice; Collaboration & Mutuality; Cultural, Historical & Gender Awareness; and Peer Support & Collective Care.
The workshop begins by establishing a foundation in trauma science, introducing participants to the various forms of trauma that can influence evaluation work—from individual experiences to historical and collective trauma. We examine how traditional evaluation approaches may inadvertently replicate harmful power dynamics or trigger trauma responses, and how TRIEE-A offers an alternative path forward.
Through guided activities, participants will learn to implement the TRIEE-A framework across the entire evaluation cycle. Key topics include creating physical and psychological safety protocols, developing transparent communication strategies, implementing authentic power-sharing mechanisms, building collaborative governance structures, addressing historical context, and incorporating peer support systems.
The session provides concrete strategies for implementing trauma-responsive, equitable practices throughout the evaluation lifecycle—from community engagement and design through data collection, analysis, and utilization. Particular attention is given to navigating power dynamics, centering community wisdom, and creating methodologies that balance rigor with accessibility.
Participants will examine real-world case scenarios to identify potential trauma triggers in evaluation contexts and design trauma-responsive alternatives. Through collaborative problem-solving activities, attendees will develop practical tools they can immediately apply in their practice, regardless of context or methodology. The workshop balances theoretical foundations with hands-on application, ensuring evaluators leave with both conceptual understanding and practical skills.
By integrating trauma science with liberation-focused evaluation approaches, TRIEE-A creates pathways for evaluation practice that not only avoids harm but actively contributes to healing and transformation. The workshop addresses common implementation challenges and provides strategies for navigating institutional constraints while remaining true to TRIEE-A principles.
This workshop is ideal for evaluators seeking to enhance their practice with approaches that recognize the pervasive impact of trauma while centering healing, equity, and transformative change. Whether working in community-based settings, government agencies, foundations, or educational institutions, participants will gain valuable tools for conducting evaluations that honor the complexity of human experience and contribute to more just, equitable outcomes.
Join us for this transformative workshop and become part of a growing community of practice dedicated to evolving evaluation methodologies that promote healing, equity, and liberation through trauma-responsive approaches.
Workshop 1073: Are you ready? Using Foresight to Future-Proof Your Evaluation Practice
Speakers: Taylor Anderson; Annette Gardner; Rhonda Schlangen; Kathleen Sullivan
8:00 a.m. - 10:45 a.m.
The evaluation field, our clients, and program participants are wrestling with fast-paced and disruptive change on several levels: the elimination of entire government programs, the elimination of funding in the sciences and a shift away from evidence-based learning, and chaos in government and decision-making. Not only will the future continue to be volatile, uncertain, complex, and ambiguous (VUCA), but we have shifted into crisis mode to deal with unprecedented change and uncertainty. To successfully meet the moment, we recommend embracing a foresight mindset that anticipates and gets in front of future challenges, feeding that information back into evaluation design, tools, measures, and strategy. It is no longer enough to rely on the past in evaluation design and implementation; it provides too rosy an outlook, out of touch with current reality. This entails adding to our evaluation toolbox a set of skills that have been used for decades in the public and private sectors. Futures studies (the discipline) and foresight (a capacity) provide a rigorous, proven set of tools to perceive, make sense of, and act upon ideas and information about the future, strengthening strategy and decision-making.
In this Professional Development session, we present and demonstrate foresight constructs and methods that can increase awareness and understanding of the impacts of change and serve as tools for action in times of extreme change. First, participants will work with trends (observable forces for change shaping possible futures) as well as "weak signals" (emerging trends or disruptions) that may significantly impact the future. Second, they will learn how to use the Futures Wheel to systematically explore the implications of these trends and their roles in shaping a program or policy. Third, participants will work with alternative scenarios to stress-test evaluation designs and recommendations under extreme conditions and develop solutions to improve the "fit" of those designs and recommendations. While we may not be able to prevent the disruptions occurring on an hourly basis, we can shift from a reactionary mode to a proactive, structured approach to understanding them and developing novel solutions and resilient strategies.
Workshop 1122: From Data to Decisions: Practical Sensemaking Tools for Shared Understanding and Impactful Evaluation
Speakers: Jennifer Billman; Christopher Cox; Kathleen Dean
11:30 a.m. - 2:15 p.m.
In evaluation, data alone is not enough—what matters is how we make sense of it. Sensemaking is a structured, participatory process that helps evaluators and project teams work together to interpret data, surface assumptions, and generate shared understanding. By fostering deeper engagement and collaboration, sensemaking strengthens evaluation use and ensures findings are relevant and actionable. When used effectively, sensemaking transforms evaluation from a reporting exercise into a dynamic process that engages communities, supports shared leadership, and fosters transformative change—the core pillars of AEA’s 2025 theme.
This workshop introduces practical sensemaking tools that support evaluation across diverse contexts, from small-scale initiatives to large, multi-site projects. Participants will explore tools for tracking project activities, facilitating structured discussions, improving decision-making, and creating opportunities for shared leadership in evaluation.
Participants will explore the following tested tools:
- Task Team Orientation – Support project teams in anticipating and prioritizing future work by integrating sensemaking into annual reflection and planning.
- Consensus Building – Use participatory decision-making tools to empower project teams in shaping project direction and advancing shared leadership of the evaluation.
- Collaborative Tracking – Use spreadsheets as dynamic tools for real-time data reflection, project activity tracking, and team collaboration in evaluation.
- Online Interaction – Leverage digital platforms to gather multi-perspective input, enhance transparency, and strengthen collective decision-making.
Through hands-on exercises, participants will engage with spreadsheet templates and structured tools to support project monitoring and data-driven reflection. They will explore case studies where these approaches have been used to foster shared understanding across teams, improve transparency, and align evaluation with decision-making.
The session directly aligns with AEA 2025’s theme of Engaging Community, Sharing Leadership by providing tools that:
- Support transformative evaluation by helping teams track progress, visualize trends, and connect data to decisions in real time.
- Engage communities by structuring interactions that invite broad participation and collaborative reflection on project data.
- Share leadership by distributing responsibility for tracking, interpreting, and acting on evaluation insights across teams.
Rather than simply discussing collaborative and participatory evaluation, this session provides tangible tools for applying these principles in practice. Participants will leave with immediately useful approaches for leveraging various technologies and tracking tools to enhance project reflection, decision-making, and evaluation impact, whether working with community-driven initiatives, organizational learning efforts, or large-scale federally funded projects.
Workshop 1089: Project Monitoring and Evaluation Planning Using a Theory of Change: A Practical, Step-by-Step Guide
Speakers: Todd Anderson; MiKell Brough-Stevenson; Jerome Gallagher
11:30 a.m. - 2:15 p.m.
Monitoring and evaluation are essential for ensuring effective and accountable public and non-profit projects. This half-day workshop is designed for professionals seeking to build skills for developing a plan to monitor and evaluate a project. Building on fundamental concepts of theories of change and logic models, the workshop focuses on using these tools as a foundation to monitor and evaluate both project performance (whether a project is achieving its intended outputs and outcomes) and project context (how external factors may influence project performance or be unintentionally affected by the project).
Through an interactive case study, participants will follow the journey of an M&E specialist tasked with developing a monitoring and evaluation plan for a public/non-profit project. Concrete exercises and discussions will help participants apply key concepts and methods for both performance and context monitoring, and for identifying opportunities for evaluation.
The workshop begins with a short introduction (or re-introduction for more advanced participants) to core concepts of theories of change, performance monitoring, context monitoring, and evaluation. Participants will then work through a stepwise process to develop a monitoring and evaluation plan for the case study. They will first explore how to refine and adapt a project design and its (explicit or implicit) theory of change to serve as a foundation for a monitoring and evaluation plan. This includes developing or restructuring a logic model that articulates expected outputs and outcomes in greater detail. Next, participants will engage in interactive exercises to identify potential indicators and qualitative methods for monitoring and evaluating both performance and context across the logic model. Discussions will focus on different methodological approaches, brainstorming techniques, and heuristics for ensuring a comprehensive project monitoring and evaluation plan.
As the session progresses, participants will examine methods for analyzing and interpreting monitoring data to assess project progress, identify contextual risks, and inform project adaptation. They will also engage with challenges embedded in the case study that will require them to examine constraints and consider refinements, priorities, and trade-offs in their final monitoring and evaluation plan.
By the end of the workshop, participants will have co-developed key elements of a project monitoring and evaluation plan that emphasizes learning and adaptation while recognizing trade-offs, budgetary constraints, and project timeframes. The ultimate goal is to promote a holistic yet realistic approach to project monitoring and evaluation, integrating quantitative and qualitative methods while focusing on the most critical information for effective project management.
This engaging and participatory workshop is ideal for M&E specialists and managers seeking practical skills in designing project monitoring and evaluation plans. Participants will leave with a structured approach to monitoring and evaluation planning that enhances accountability, facilitates adaptive management, and improves project effectiveness.
Workshop 1107: Theory, Practice, and Praxis for Liberatory LGBTQ+ Evaluation
Speakers: Erik Elias Glenn; Esrea Perez-Bill; Gregory Phillips
11:30 a.m. - 2:15 p.m.
This advanced, project-based workshop offers a theoretical, practical, and justice-focused approach to what we define as LGBTQ+ Evaluation. More than just content knowledge, LGBTQ+ Evaluation is a distinct framework and body of theory that shapes evaluation practice.
Designed for hands-on, collaborative learning, this workshop provides participants with opportunities to critically reflect on and build upon LGBTQ+ liberation within their own evaluation practices. Using an inquiry-based approach, attendees will examine what it means—in theory, practice, and praxis—to conduct culturally responsive, equitable, and transformative evaluations with LGBTQ+ communities.
Guided by liberatory adult education theories and texts, such as Pedagogy of the Oppressed, the workshop curriculum emphasizes anti-oppressive practices, creativity, and experiential storytelling. Participants will be encouraged to experiment with innovative evaluation methods and engage in meaningful dialogue, fostering a deeper, justice-oriented understanding of LGBTQ+ Evaluation.
Workshop 1144: Deep Learning and Collaborative Evaluation: Tools, Ethics, and Opportunities in the Age of AI
Speaker: Michael Osei
11:30 a.m. - 2:15 p.m.
As digital transformation reshapes programs, policies, and services, evaluators increasingly encounter complex, high-volume data and novel sources of insight. Deep learning—a branch of artificial intelligence (AI)—offers tools for pattern recognition, predictive modeling, and automated data processing that can enhance evaluation practice. A collaborative evaluation lens offers a crucial bridge, helping evaluators interpret these tools in context, promote shared understanding, and prioritize community values.
This preconference training introduces deep learning through a collaborative evaluation framework. Designed for professionals with limited or no technical background, the session explains concepts such as neural networks, model training, supervised versus unsupervised learning, and interpretability. Participants will examine case examples from education, health, and social impact evaluations—such as text analysis, image classification, and time-series prediction. The session includes interactive demonstrations and group activities (no coding required), with equity, transparency, and inclusion integrated throughout. Participants will gain the tools to assess deep learning methods, co-interpret results with stakeholders, and lead ethical, collaborative conversations about data.
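For readers curious about what terms like "model training" and "supervised learning" mean in practice, the sketch below shows the core idea with a tiny neural network built from NumPy alone. It is illustrative only, not workshop material (the session itself requires no coding), and the toy data, layer sizes, and variable names are all assumptions made for this example.

```python
# A minimal sketch of supervised learning with a tiny neural network.
# Toy data and all names are hypothetical; requires only NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Supervised learning: we have inputs AND known labels to learn from.
X = rng.normal(size=(200, 2))              # inputs (e.g., two hypothetical scores)
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # labels produced by a simple known rule

# A "neural network" with one hidden layer of 8 units.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5                                   # learning rate: size of each adjustment
for _ in range(500):                       # "model training" loop
    # Forward pass: compute predictions from the current weights.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()

    # Backward pass (gradient descent): nudge weights to shrink prediction error.
    g = (p - y) / len(y)                   # output-layer error signal
    gh = (g[:, None] @ W2.T) * (1 - h**2)  # error flowing back to the hidden layer
    W2 -= lr * (h.T @ g[:, None]); b2 -= lr * g.sum(keepdims=True)
    W1 -= lr * (X.T @ gh);         b1 -= lr * gh.sum(axis=0)

print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
```

The training loop is the whole story in miniature: compare predictions against known labels, then adjust the weights to reduce the error, hundreds of times. Unsupervised learning differs in that no labels `y` are provided; the model must find structure in `X` alone.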
Workshop 1068: The Basics of Using Theory to Improve Evaluation Practice
Speakers: John Lavelle; Stewart Donaldson
11:30 a.m. - 2:15 p.m.
This workshop is designed to provide practicing evaluators with an opportunity to improve their understanding of how to use theory to improve evaluation practice. Lecture, exercises, and discussions will help participants learn how to apply evaluation theories, social science theories, and stakeholder theories of change to improve the accuracy and usefulness of their evaluations. A range of examples from evaluation practice will be provided to illustrate main points and take-home messages.