Small changes can have a big impact
In “Evaluation after Opening”, author Beverly Serrell explores the topic of remedial and summative evaluations. The rest of the book is mainly about labels; however, this chapter can be applied to all aspects of an exhibit. According to Mark Walhimer, remedial and summative evaluations are two of the four types of exhibit evaluation.
Front-end evaluation is the audience research that is performed in advance of development and design. It allows you to understand what the audience already knows so you can design the exhibit to suit the audience’s needs. Methods include focus groups, interviews, and surveys.
Formative evaluation assesses how things are going while they’re still in development so you can fix problems before the exhibit is open to the public. This includes methods like prototyping and using consultants or peer feedback.
Remedial evaluation is the main focus of this reading. It’s the use of evaluation while an exhibit is still open to the public so that issues can be identified and remedied in order to maximise the impact of the exhibition. Remedial evaluation uses tools like observations, visitor feedback (solicited or candid), and staff feedback.
Summative evaluation looks at how the exhibition performed once it has finished, with the intention of carrying any lessons forward to future exhibitions. This can be done via feedback from visitors, staff, peers, and consultants, but also through business data such as ticket sales or social media engagement statistics.
Serrell opines that remedial evaluation can be particularly useful for improving visitor engagement with exhibits. She even recommends reserving 10-15% of the budget so that labels can be reprinted, banners or panels can be added, photographs swapped out, or directional signage improved. This can improve traffic flow, comprehension, and/or retention of key concepts. Using a modular design, where elements can easily be added, removed, or changed, can reduce remedial costs and make changes easier to implement.
After discussing remedial evaluation, Serrell talks about the role of summative evaluation, which is a performance analysis of the entire exhibition. Summative analysis is used to find big-picture lessons that can be used to improve future exhibitions. She then discusses measurement tools that can be used to this end. As Serrell says, it is important to use multiple methods in order to gain a broader perspective and understanding of visitors’ experiences at the museum.
Finally, Serrell lists some examples of other collaborative analyses. To be honest, I didn’t get much out of this section and I think it could have been cut without losing much. Some of the reports sound like good sources for further reading. Others, such as #7, seemed like the author patting herself on the back. The final section is more on-topic, extolling the value of summative evaluation by listing several examples of good exhibits identified this way.
The New Measurement Tools section ironically missed out on some obvious modern techniques and technologies which could aid in evaluating exhibits. Serrell briefly mentioned that discussing visitor photographs in interviews can allow evaluators to see how visitors connect with exhibits, but she failed to mention that social media now allows us to expand the sample size. Evaluators can look through location-tagged photos which have been posted publicly on social media sites like Facebook and Instagram. Or, designers can actively encourage visitors to post photos using a common hashtag. Only photos from users who have chosen to post publicly can and should be used; even so, the sample will still provide useful information about which exhibits visitors connect with most and how they interacted with them. To give it a go, check out the screenshot above, or open this link in your Instagram app and select “Recent” rather than “Top Posts” (top posts are a skewed sample due to factors like celebrity endorsements and the inclusion of wedding photos).
The digital age brings other tools that can be used in evaluative ways. For example, website analytics such as common search terms, click-through rates, or heat mapping can be used to figure out what visitors are most interested in learning about. This information could be used in front-end evaluation to plan future exhibits, or in a remedial or summative way to see what was not made clear enough. Social media could also prove a valuable formative evaluation tool. If the museum’s social media account posted photos of prototypes or behind-the-scenes style content, staff could use the engagement statistics to see how the public reacts to potential exhibits. It seems like most museums view social media as a promotional tool or as a method for creating exhibit interactivity rather than as a tool that could be incorporated into the exhibit evaluation process.
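As a rough illustration of the search-term idea above, a few lines of Python can tally a site’s most common searches. The log format and queries here are entirely hypothetical, just a sketch of the kind of ranking an analytics export would support:

```python
from collections import Counter

# Hypothetical export of on-site search queries, one entry per search.
search_log = [
    "dinosaur fossils", "mummies", "dinosaur fossils",
    "opening hours", "mummies", "dinosaur fossils",
]

# Normalise the queries, count them, and rank the most-searched topics.
counts = Counter(q.strip().lower() for q in search_log)
for term, n in counts.most_common(3):
    print(f"{term}: {n}")
```

Ranked this way, the most frequent terms suggest which topics visitors most want explained, whether in a future exhibit (front-end) or in revised labels for a current one (remedial).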
Heat mapping is no longer just a tool for understanding people’s behaviour online. New computer programs can analyse security camera footage to create a heat map of people’s movements in a physical space. This could allow museum operators to easily see how people move through an exhibit, sparing staff the time and energy of following visitors around. Museum staff should keep up to date with new technology, as it can make old processes easier and more efficient, and potentially add new insights that previously could only be dreamt of.
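The core of such a program is simpler than it sounds. Below is a minimal sketch, assuming the footage has already been decoded into greyscale frames: pixels that change between consecutive frames are counted as motion, and the counts accumulate into a heat map. (Real systems use more robust background-subtraction or person-tracking methods; the moving “visitor” blob here is synthetic.)

```python
import numpy as np

def motion_heatmap(frames, threshold=25):
    """Accumulate a per-pixel motion heat map from greyscale frames.

    frames: iterable of 2-D uint8 arrays (e.g. decoded camera footage).
    A pixel whose intensity changes by more than `threshold` between
    consecutive frames is counted as motion.
    """
    frames = iter(frames)
    prev = np.asarray(next(frames), dtype=np.int16)
    heat = np.zeros(prev.shape, dtype=np.int64)
    for frame in frames:
        cur = np.asarray(frame, dtype=np.int16)
        heat += (np.abs(cur - prev) > threshold)  # 1 where motion occurred
        prev = cur
    return heat

# Synthetic example: a bright "visitor" blob moving left to right
# along row 3 of an otherwise static 8x8 scene.
frames = []
for x in range(5):
    f = np.zeros((8, 8), dtype=np.uint8)
    f[3, x] = 255
    frames.append(f)

hm = motion_heatmap(frames)
# Cells along the visitor's path accumulate counts; static cells stay 0.
```

Summing such maps over a day of footage would show, at a glance, which exhibits draw crowds and which corridors go unused.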
Qualitative data analysis is not without its issues. What other pitfalls should we be wary of? How can savvy evaluators control for these issues so that developers can be sure they are acting on accurate information?
What do you think of the suggestion that visitors are followed and their conversations recorded? Ethically speaking, do you think this violates their right to privacy? Would digital techniques alleviate or exacerbate any concerns?
Can you think of any other ways that new technologies could be employed in service of exhibition evaluation, remedial or otherwise?