AAM Rundown: Effectively Using Evaluation Data

The Minneapolis Convention Center, where I spent four days among amazing museum professionals

Hello my lovelies! I’m finally back in Nashville and safely entrenched in front of my computer, so I can start writing up everything that happened. In the next few days I’ll be posting my interviews (some formal, some informal), the takeaways from the various sessions and workshops, and a couple of impressions of Minneapolis museums.

Somewhat inadvertently, it seems I created my own themed version of the conference. “Creative Community” was AAM’s title for this year’s meeting, but looking back on it, I really focused on prototyping, evaluation, and problem-solving in a museum setting. The ultimate goal is to create communities of people who enjoy, support, and are passionate about your museum, but that’s always a work in progress: one that involves not only iterative testing of exhibitions and evaluation of processes, but an overall philosophy that the museum as a whole is a prototype and can adapt to visitors’ needs.

So, without any further ado, I bring you my notes from the very first session I attended:

“All About Audience: Effectively Using Evaluation Data”

Speakers:

  • Leah Melber (Director of Student & Teacher Programs, Lincoln Park Zoo)
  • Nancy Plaskett (Associate Vice President of Community, Student, and Educator Programs, Chicago Children’s Museum)
  • Marley Steele-Inama (Education Research and Evaluation Manager, Denver Zoo)
  • Laureen E. Trainer (Manager of Visitor Research, Denver Museum of Nature & Science)

I did my internship in evaluation at the Nashville Zoo, so I’m fairly well versed in the terminology and philosophies, but I realize that’s not the case for everyone. Here are some quick definitions to get you up to speed.

Evaluation uses inquiry and judgment methods to determine the worth, merit, utility, effectiveness, and significance of a program, an exhibition feature, or really just about anything. It’s most often used to assist decision makers (“how do I know this is the thing I should be doing?”), but it also serves a political function: you can guide institutional change when you have data on your side (“how do I make my institution embrace x, or prevent it from going down path z?”). Research and evaluation often use the same tools, such as interviews or observational studies, which is convenient, because most visitor studies people try to hit a sweet spot between the two: you want the timeliness and applicability of an evaluation to improve something, but you also want the exploratory nature of research, and most importantly you want to share the results.

All caught up? Ok. The speakers had some illuminating things to say about their own evaluation experiences. For example, with one simple study based on just eight visitors, Leah Melber managed to forge a whole new position for herself at her institution. See, evaluations aren’t just good for your institution; they’re good for you too!

Another example of what a study can accomplish came from Marley Steele-Inama at the Denver Zoo. She had a much larger budget to work with (money for evaluation was built into the $50 million budget for the new Elephant Passage exhibition), her team was able to brainstorm and then select among various data collection methods, and they were ultimately able to talk to many different groups of visitors and stakeholders. The result was something called “The Tablet,” which laid out appropriate learning outcomes for various ages. The Tablet didn’t just sit on a shelf in the education department, though: everyone from the animal trainers to the marketing staff to the conservation outreach staff in Asia relied on its information. Share what you learn with others; they may never have realized just how much they needed your insights.

Laureen Trainer at the Denver Museum of Nature & Science had a great story about evaluating not just a particular program, but the process of its implementation. A process evaluation like this helps you figure out how to replicate a successful initiative and avoid its pitfalls. Having documentation (including staff perspectives on the process) will be a huge timesaver for future projects, even if you’re essentially highlighting negative outcomes.

It was all great info, and a great use of evaluations, but unfortunately their examples may not be applicable to every institution, mostly because each had substantial funding dedicated to their evaluation projects. Museums often struggle even to fit evaluations into a project (frequently just tacking on a summative evaluation with no plan for acting on the results), let alone to fund them. Maybe this should be a wake-up call that evaluations matter, and that we should dedicate more funding and staff support to these undertakings.
