Science exhibitions: 4 key factors that make a difference to both the engagement of visitors and their learning experience
I was lucky enough to visit the Exploratorium: The Museum of Science, Art and Human Perception in San Francisco in 2018 as part of a senior science trip at my high school, and spent about 4 hours there fully engrossed in the exhibits. Dave Okey, my colleague who organised the trip, described the Exploratorium as “Motat on steroids”. After visiting, I had to agree. It is essentially one large hall on the edge of San Francisco Bay, filled with hundreds of exhibits that each invite the visitor to touch, play and interact as a means to construct understanding of real phenomena. It was incredible.
The Exploratorium is famous for these hands-on exhibits. Their designs are based around a simple inquiry cycle in which visitors first encounter a surprising phenomenon that they are then able to explore via some sort of interaction. Once their curiosity has been established, a label or panel then explains what causes the phenomenon, and how it is relevant to us in the real world. This does tend to limit the focus of exhibits to those aspects of science that are easy to explore in short bursts of attention, and excludes those that require memorisation or complex thought. Despite this, the Exploratorium has revolutionised science education both in schools and public spaces. Instead of telling visitors about science, they are encouraged to explore and discover for themselves.
In this article, Sue Allen summarises her key findings from her 10 years as a member of the department of visitor research and evaluation at the Exploratorium. Museums like the Exploratorium are free-choice learning environments in which visitors decide what they will interact with and for how long. Exhibition designers must successfully navigate the tension of creating engaging exhibits that promote learning while remaining accessible to a diversity of visitors. The Exploratorium makes all its own exhibits and is able to carry out extensive research into visitor engagement and learning in response to multiple ways the same phenomenon could be demonstrated in an exhibit. The article is definitely worth reading for those of us looking to develop or evaluate science exhibits. I am interested in identifying possible criteria for evaluating exhibits and in drilling down into the elements of design that should be considered when developing exhibits.
Allen’s research team has identified 4 key factors that make a difference to both the engagement of visitors and their learning experience. These are summarised below:
The first of these factors Allen terms “immediate apprehendability”. This essentially refers to exhibits, exhibitions or spaces whose function, properties or purpose are easily understood without conscious effort. Immediate apprehendability reduces the cognitive overload caused by exposure to many new and interesting things all at once. Typically the brain can only remain properly focused for about 30 minutes in a new and highly stimulating environment before cognitive overload causes fatigue and disengagement. (You may have experienced this before yourself - it is known in the industry as “museum fatigue”.) Immediate apprehendability on all scales has been shown to reduce this issue by ensuring visitor comfort and effortless understanding of the space and exhibits.
One strategy for improving immediate apprehendability is user-centred design. This is design that narrows the choices of how to interact with an object or exhibit through its shape, location or familiarity. This effect can be reinforced throughout whole exhibitions or the whole museum by repeating designs that have the same function to create a visual code.
Another strategy Allen identifies is the use of familiar activities, such as competitions, as schemas for interacting with phenomena. In carrying out a familiar activity such as a race, visitors will learn the differences between objects or techniques quickly and intuitively. The competition provides an engaging hook to prompt this type of exploration, which might otherwise be perceived as uninteresting.
The final strategy described for reducing cognitive overload and improving immediate apprehendability is to focus on visitor orientation and comfort. On a macroscale, this includes improving orientation and wayfinding within the whole museum, as well as generous seating and refreshment options. On a smaller scale, it includes making it obvious whether a lever in an exhibit is to be pulled or pushed, and ensuring that interacting with exhibits is easy and comfortable.
The second key factor Allen discusses is the physical interactivity of exhibits, which is considered by many to be essential for learning in science museums. Her research confirms that interactivity does indeed promote engagement, understanding, and recall of exhibits among visitors and increases holding time in the exhibitions. Interestingly, her research also shows that while interactivity is a useful tool for engagement and learning, it does not necessarily follow that greater interactivity leads to greater engagement and learning. Instead, simple interactive options were most effective, as too many interactive elements tended to obscure the primary phenomenon being shown and confuse visitors.
In addition, not all successful exhibits were interactive. Some of the Exploratorium's most successful exhibits (in terms of holding time and learning conversations) offered no interaction beyond observation. This is an important reminder that while interactivity is a fantastic learning tool, it is by no means the only way to create engaging and memorable learning experiences in museums.
Allen also comments on the importance of ensuring that the interactivity promotes thought about the phenomenon in question. Being hands-on and fun does not always mean that an exhibit is facilitating the mental inquiry that is intended. Evaluation of exhibits should therefore consider minds-on as well as hands-on interactivity.
One key measure of the success of exhibits is active prolonged engagement (APE). Allen’s research has shown that APE is promoted when exhibits allow use by more than one member of a visiting group at a time. Instead of waiting in line for individual use, group interaction with the same exhibit promotes learning conversations and greater interest.
The third key factor discussed is conceptual coherence within exhibitions or galleries. The team found that some themes were much more easily grasped than others by visitors, and that understanding the intended theme was not necessarily linked to overall enjoyment but was an indicator of learning outcomes. Allen acknowledges that the Exploratorium has traditionally prioritised phenomenological themes over abstract themes, and that abstract themes were generally harder to convey. The team’s research has led to improved visitor perception of abstract themes through careful design that considers selection and sequencing of exhibits, along with partitioning and use of visual organisers within themed galleries.
In addition to differences between phenomenological versus abstract themes, Allen’s team found that themes which cohere with schemes and models of commonly understood science concepts were much more likely to be correctly perceived by visitors than themes which contradicted these or introduced new models.
The fourth key factor Allen discusses is design that is inclusive to a diversity of learners. She identifies a number of strategies and considerations that improve the inclusivity of exhibits. The first of these is to accommodate a range of learning styles and offer a range of sensory experiences relating to the same phenomenon or theme. This can be offered over multiple exhibits or exhibits can be multimodal, meaning they appeal to different learning styles and levels of knowledge simultaneously. Alternatively, exhibits that adhere to universal design principles are usable by all without the need for adaptation or specialised design, and provide accessibility to visitors of any physical and intellectual capability.
Another strategy for inclusivity is to provide a diversity of spaces within an exhibition or museum. In a busy hands-on museum like the Exploratorium, noise and activity can be overwhelming to some visitors. Evaluation results have shown that quieter, partitioned spaces were noticed and valued by visitors. Lastly, Allen discusses the use of narratives for engaging diverse audiences, which is a particularly successful strategy in historical and cultural museums. She finds that narratives do not engage visitors as well in a science museum, especially for phenomenological themes that don’t have an obvious emotional significance common to many people.
Allen concludes that research and evaluation form an essential part of the design process for effective science exhibits and exhibition spaces. She also notes that there is unlikely to ever be a single set of design principles that could remove the need for research and evaluation, but that consideration of the 4 key factors that affect both the engagement of visitors and their learning experience is a good place to start: immediate apprehendability, physical interactivity, conceptual coherence, and diversity of learners.
Given the upcoming tasks of first evaluating a science exhibit, and later designing one, I am working on a set of questions I can ask myself or visitors about the exhibits either for the purpose of evaluation or as a checklist for design. The following are derived from Allen’s 4 key factors in engagement and learning:
Immediate apprehendability
Is the use and purpose of the exhibit quickly self evident?
Does the design allow for unintended use that might distract from the intended use?
Does the exhibit use organising devices common to the rest of the exhibition?
Does the exhibition utilise a familiar activity?
Physical interactivity
Does the exhibit respond to visitor actions?
Does the interactivity improve the learning outcomes?
Do all aspects of interactivity support the primary learning outcome?
Is the interactivity of the exhibit effective and engaging enough to justify the cost of producing and maintaining it?
Can several people interact with the exhibit simultaneously?
Conceptual coherence
Does the exhibit fit well within the exhibition theme?
Would the exhibition work as well without this exhibit?
Does the underlying theme utilise commonly understood science schemes?
Diversity of learners
Does the exhibit use universal design principles?
Is the exhibit multimodal?
Does the exhibit create connections to personal experience via narrative?
Does the exhibit appear to be addressing one group of people in particular, or excluding a certain group or groups?
The tricky thing for me at this point is working out how we might measure or judge success for each criterion. While Allen does not present methodologies or data from her research in this article, she does make excellent use of examples to help the reader form mental images of how the differences discussed might materialise in exhibits. Furthermore, we can infer from the article that Allen and her team measure or record the following:
Holding time (how long people spend at the exhibit)
Number of completions of the challenge + time taken to complete a challenge (if applicable)
Reading of labels + how long before that happens
Learning conversations (between visitors presumably)
Visitor feedback (including what they enjoyed, what they learned, what they thought it was about etc.)
However, it is not clear how these measured or observed values correspond with each criterion.
Overall, I thought the article was very useful to read at the beginning of the course and it has given me a lot to consider, especially the complexities of development, research and evaluation of science exhibits in an informal learning environment. I would love to hear what you think about this article and my response to it, and am especially keen for comments on the development of a set of evaluation criteria from Allen’s key findings.