In part two of my three posts on this year’s Satellite teen program, I’m sharing the unexpected data that gave me a bigger picture of how the program shaped my students’ ability to reflect.
At the end of each session, teens used a web app on their iPads called Infuse Learning to fill out a quick exit slip survey. Exit slips are an easy way to take the pulse of your students at the end of a session. For Satellite, they answered the questions “What is something you learned today?” and “What are you still wondering about?” Though different from our interview questions, these questions also support reflective practice by prompting students to think back on the day’s session.
As the year went on, I noticed that the teens’ responses were growing more sophisticated: they were longer, they used more art vocabulary, and they acknowledged that some questions might not have definitive answers at all. At the suggestion of Marianna Adams, who specializes in museum research and evaluation, I ran the responses through two readability tests to see whether that growing sophistication could be quantified. One test produces the sample’s Fog Scale Level, which scores a text based on sentence length and the share of long, multi-syllable words (a score of 5 being readable, 20 being very difficult). The other produces the Flesch-Kincaid Grade Level, which approximates the average school grade level needed to read and understand the text.
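For the curious, both measures come down to simple counts. Here is a rough Python sketch of the standard Gunning Fog and Flesch-Kincaid formulas; it uses a naive syllable heuristic and is only an illustration, not the actual tool I used:

```python
import re

def syllables(word: str) -> int:
    """Rough syllable count: number of consecutive-vowel groups."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Gunning Fog index, Flesch-Kincaid grade) for a text sample."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = [syllables(w) for w in words]
    complex_words = sum(1 for s in syl if s >= 3)  # "hard" words: 3+ syllables
    wps = len(words) / len(sentences)              # average sentence length
    fog = 0.4 * (wps + 100 * complex_words / len(words))
    fk = 0.39 * wps + 11.8 * sum(syl) / len(words) - 15.59
    return fog, fk
```

Notice that both scores rise with longer sentences and more polysyllabic words, which is why a big question phrased in plain words can actually pull the score down.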
For the first question (“What is something you learned today?”), students’ Fog Scale and grade-level scores jumped considerably. Since these tests measure syllable count and sentence length, this corroborates what I found in the core evaluation.
But I was surprised to see that when I tested responses to the second question (“What are you still wondering about?”), students’ scores actually dropped! Yet if you read their responses, there is a drastic change, and it is for the better.
Take Student D’s responses. In his early answer, he asks a relatively basic art historical question about distinguishing one type of art from another. In his later response, he is thinking deeply about the purpose of art and how we even decide what art is. And while Student F uses high-level art history vocabulary in her first response, it’s without context; later on, she’s thinking about how two seemingly opposite concepts may have something in common after all.
The scores of these comments may have decreased, but I’d argue that their reflective quality increased: the teens ask big questions that might not have an answer, and they ditch high-level vocabulary to muse more informally on philosophical questions of art, destruction, and race. Running these responses through the tests reminded me that while any single tool can be helpful, it offers only a partial view; we need more than one to paint a bigger picture.
To round out that image, I’ll share one final unexpected evaluation tool: the teens’ final project videos as well as a talkback session they conducted at their video premiere.
For their final project, each student chose one work of art in the Museum Collection and looked at it, researched it, and talked about it with others for seven months. (Given that most visitors spend under 10 seconds looking at art in museum galleries, this is a feat in and of itself!) They distilled a school year’s worth of thinking into brief, 2-4 minute videos that answered what the work meant to them, what it had meant to others, and how their own thinking had changed as a result of looking at the piece—all questions with, of course, that familiar reflective bent.
The teens also participated in a talkback Q&A at the celebration where we premiered these final projects. Guests, including museum staff, teachers, family, and friends, asked the group questions about their experience. If you like, you can watch the teens’ videos, along with the Q&A, in the YouTube playlist below.
Continued in Reflective Evaluation: How Can Museums Change Teens–and Vice Versa? Part 3, coming next week. Adapted from an essay originally posted on ArtMuseumTeaching.com.