Evaluating Short-Duration Audience Experiences: Challenges and Strategies from Museums, Parks, and Big Screen Films
Session Number: 2691
Track: Arts, Culture, and Audiences
Session Type: Panel
Tags: Informal Education, Informal Learning
Session Chair: Audrey Kremer [National Geographic Society]
Presenter 1: Mac Cannady [Lawrence Hall of Science, UC Berkeley]
Presenter 2: Elsa Bailey [Elsa Bailey Consulting]
Presenter 3: Joe E Heimlich [Lifelong Learning Group]
Presenter 4: Ioana Munteanu [National Park Service]
Presenter 5: Audrey Kremer [Manager of Education Evaluation - National Geographic Society]
Time: Nov 12, 2015 (04:45 PM - 06:15 PM)
Abstract 1 Title: Collecting Bioblitz Species Inventories and Participant Impacts Using Multiple Methods
Presentation Abstract 1: A BioBlitz is a 24-hour event in which volunteer scientists, families, students, teachers, and other community members work together in a National Park to find and identify as many species of plants, animals, microbes, and fungi as possible. Participants take part in different groups, for different periods, and in different activities, some more intensive than others. The goals of a BioBlitz are to discover, count, map, and learn about the living creatures in the park; to help the public understand the work of scientists; and to highlight the importance of biodiversity. While the first goal is relatively simple to measure (species counts), the others involve learning, and their impact may not be immediate. In evaluating these events, we have created a multi-instrument, multi-timeframe approach: observing individuals in the field interacting with scientists, collecting immediate feedback about the event, and following up a month later to learn about its lasting effects.
Abstract 2 Title: The Interactive Observation: A Technique for Evaluating Short-Term Experiences with Museum Exhibit Components
Presentation Abstract 2: Interactive components in museum settings are designed to achieve a variety of outcomes. The exhibit designers and the museum may have particular goals they wish the interactive to achieve, and it is critical that evaluators have a clear understanding of these goals from the outset. Rich conversations up front with those involved in the exhibit’s development increase the probability of reaching a mutual understanding of the evaluation’s purpose: Who are the stakeholders for the evaluation? What does the evaluation hope to achieve and inform? This presentation will offer concrete examples of a technique that combines direct observation of visitors at a particular interactive component with a mini-interview with the visitor based on that observation. This technique can be tailored to the specific questions stakeholders have about the exhibit (e.g., the effectiveness of its message, visitors’ thinking processes, the questions it raised, and the reflections it stimulated).
Abstract 3 Title: Evaluating Short Zoo Interactions Using Even Shorter Evaluation Techniques
Presentation Abstract 3: Zoos have multiple means of communicating messages, two of the dominant ones being interpretive (sometimes interactive) signage or experiences, and live interpretation. Both approaches depend on the visitor to determine the length of the experience and the nature and depth of the exchange. Goals are usually cognitive or affective gain, with a conservation action message when possible. For me, the evaluation is driven by the underlying theory of change operating for the exhibition in which the exchange occurs. Measures are very brief, as the episode can last from a few minutes to rarely more than 10 minutes, unless animals are active, in which case exhibits are ignored and interpretation shifts to answering questions about the animals. Methods must be engaging to compete with visitors’ time budgets and other experiences. Two approaches we use will be discussed: quick response cards for interpretive events and series studies for exhibition areas.
Abstract 4 Title: How to Determine the Success of Zoo Exhibitions: Measuring Zoo Goals, Visitor Goals and their Alignment
Presentation Abstract 4: Typically, the success of zoo exhibitions is judged by how well they achieved their intended outcomes. This approach, however, tells only part of the story of an exhibition’s impact on its visitors; in particular, it does not address whether the exhibition achieved its visitors’ own goals. This presentation will address how visitors’ backgrounds, experiences, and choices of activities help explain their responses to the exhibition and their take-aways. Specifically, drawing on a multi-method study that employed observations, in-depth interviews, and pre-visit and post-visit surveys, this presentation will offer an in-depth understanding of the emergent experiences of visitors in a zoo exhibition, offer insights into whether the exhibition achieved its goals, identify whether the visitors’ goals and the exhibition’s goals aligned, and offer suggestions for evaluating the success of other exhibitions.
Abstract 5 Title: Evaluating Big Screen Films: What Strategies Are We, and Could We Be, Using?
Presentation Abstract 5: Big screen films come with high aspirations and big price tags, and are often funded by external donors with ambitious goals: inspiring people to protect the environment, encouraging kids to pursue STEM careers, and so on. Can a 45-minute film have such an enduring impact? Maybe, but there are few evaluation strategies and little longitudinal research to support it. Can the evaluation techniques typically used for big screen films (surveys and interviews) capture the range of potential audience changes beyond the goals established in developing the film? How do the related "media campaigns" (Karlin & Johnson, 2011) and museum exhibits contribute to a film’s impact, and how can we capture that contribution? This presentation will describe the challenges facing National Geographic as we attempt to understand the impact that big screen films have on audiences, and the value of interdisciplinary approaches created for other short-duration experiences that we have used to measure and compare learning outcomes.
Audience Level: All Audiences
Are evaluations of learning experiences in museums similar to those in zoos? To big screen films? To National Park BioBlitzes? Can we share techniques? While some differences are expected, and some of this diversity reflects real differences in learning characteristics, the lack of shared vocabulary and common evaluation strategies makes it difficult to compare findings, hinders the synthesis of research findings, and impedes the development of robust, comparable cross-program evaluation measures (Learning Science in Informal Environments, NRC, 2009).
The panelists will describe evaluation challenges in museum, zoo, and park settings and discuss how these techniques could be used across different audiences and learning goals. The audience will be encouraged to join the discussion about how these evaluation approaches could be used in their own settings. This is a first step toward identifying measures that can be used to compare learning outcomes consistently across a range of experiences.