But, as always seems to happen, the same teaching technique inspired completely contradictory reactions.
For example, while I don’t show too many PowerPoints in English composition, every time I introduce a new writing strategy I’ll throw up a few slides with definitions and examples. One student praised the slides as the most helpful tool for her learning — another called them useless and recommended I eliminate them from the course.
Contradictory statements about my methods I can handle; I’ve seen them all before. But this round of reviews also included a comment from a student who said I didn’t “connect well” with the class. This was a community-engaged learning course in which we took a field trip together to a homeless shelter, spent lots of time in group activities in class, and shared personal perspectives on our understanding of poverty. I also followed my own advice and made a point to arrive in the classroom a few minutes early and engage in informal conversations with students.
So that student’s comment — although the only one of its kind in this crop of evaluations — is very likely to spend the entire summer sticking in my craw.
I’m also a white male in his late 40s, which means I am usually spared the pointed comments about wardrobe, voice, or persona that, as plenty of research has documented, routinely pepper the course evaluations of female and nonwhite faculty members.
Over the years, a number of factors — contradictory criticisms, bias against vulnerable instructors, inconsistent response rates — have been adduced as evidence for why academe should reduce the outsized role that student ratings and comments play in the evaluation of teaching.
To illustrate that point, she walked us through a thought-provoking exercise that demonstrated in sharp terms why student evaluations and peer observations should be considered within the context of a host of other measures. “I want you to make a list,” Miller said, “of all of the different things that you do each week in support of your teaching. Don’t just think about being in class. Think of all of the other activities you do each week that relate to your teaching.”
Here is what I jotted down:
- Read for class (for the composition course I just taught, I had to read four assigned books and some additional online essays).
- Prepare my lesson plan.
- Arrange a class visit to the homeless shelter.
- Do background research on the subject we were discussing.
- Grade writing exercises.
- Create assignment sheets.
- Meet with students.
- Grade papers.
- Respond to their emails.
As the list slowly grew under my pen, the point of the exercise became abundantly clear: Much of the work of teaching — perhaps most of it — takes place outside of the classroom.
Much of the work that we put into our teaching cannot be evaluated, or even accessed, via the two most common strategies that institutions use to assess the teaching effectiveness of their faculty: student evaluations and peer observations.
Part of the process ought to include training people in how to assess teaching fairly, or we risk basing promotion decisions on the classroom preferences or gut instincts of the evaluators.
It takes time to evaluate teaching well — and time usually requires financial investment. Those are significant obstacles, and they won’t be overcome unless academe is willing to set aside its reliance on easy but dubious methods and take the evaluation of good teaching seriously.