Last week, I experimented with administering my anonymous informal evaluations online. I ran it pretty much as described here, with one exception: I did not hold students accountable for returning an evaluation (that is, I did not require them to self-report their completion).
I deem this semester’s experiment a failure. Even though I received good quality feedback, the rate of return was too low for my taste. But, several pleasant surprises and good experiences with this format make me want to try it again next semester with some tweaks.
For my first-semester freshmen, I received a disappointing 20 out of 36 potential evaluations (56%). (here’s the spreadsheet) The quality of feedback on those 20 evals was very good. And the anonymity of students was solid. Since I collect a lot of handwritten work in this type of class (their transcriptions), I have a good sense of their handwriting, and I often know which “anonymous” evaluation goes with which person. In fact, if I tried hard, I’m sure I could figure out all of the handwritten evaluations every semester. Luckily, I’ve never wanted to do that, and even when I recognize distinctive handwriting, I don’t feel it has presented challenges to my relationship with that student.
For my upper-division class (juniors and seniors), I received 9 out of 11 (81%). I’m O.K. with the upper-division returns because my gut says the overall quality of the feedback I received there is higher.
Two things (one pedagogical, one practical) make me want to try again:
(1) I think the electronic evaluations resulted in better-quality feedback than I received in previous semesters’ paper evaluations. My sense is that students type faster than they write, and that many students communicate better in front of a keyboard/screen than with a pencil/paper. To really understand this, I need to go back through previous semesters’ returns and evaluate the feedback. I think I will track word count, # of questions answered, and quality of reflection (that should be an interesting rubric to figure out…). I am particularly interested in the # of low-quality paper evaluations compared with the # of absent electronic evaluations. There are always a few paper evaluations with monosyllabic answers, closing with a cheery “good job” to me. And there are always a few missing paper evaluations where a student just needed to leave early or didn’t show up to class that day.
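The first two of those metrics, word count and # of questions answered, are easy to compute automatically once the responses are in spreadsheet form. Here’s a rough sketch of how that might look in Python, using a CSV export of the response spreadsheet; the column headers and sample answers below are made up for illustration, not my actual evaluation questions:

```python
import csv
import io

# Stand-in for a CSV export of the evaluation-response spreadsheet.
# The real file would have one column per evaluation question.
csv_export = """What went well?,What should change?,Other comments?
"Loved the transcriptions","More group work","good job"
"The ear-training helped a lot","Nothing, keep the pacing","I liked typing this instead of writing"
"""

def score_responses(csv_text):
    """Return (word_count, questions_answered) for each evaluation."""
    rows = csv.DictReader(io.StringIO(csv_text))
    scores = []
    for row in rows:
        answers = [a.strip() for a in row.values()]
        # Total words across all answers in this evaluation.
        words = sum(len(a.split()) for a in answers if a)
        # How many questions got a non-empty answer.
        answered = sum(1 for a in answers if a)
        scores.append((words, answered))
    return scores

for words, answered in score_responses(csv_export):
    print(words, answered)
```

The quality-of-reflection rubric is the part that resists automation, but even these two crude numbers would make the comparison between a “good job” paper eval and a missing electronic one more concrete.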
(2) The spreadsheet that the evaluation forms populated is laid out in a way that facilitates my reflection because I can easily read all the answers to one question–I just read down a column. (Here’s the same link I provided above.) With paper, I have to flip through the stack reading the same question on every page, and invariably I get sidetracked–reading the answers to other questions, looking at their handwriting, spending time trying to flip just one piece of paper, etc… I also use color to help me organize my thoughts. On paper evals, I use a highlighter, but with the googledoc, I have lots more options.
Ideas for Next Time:
- I need to create a situation where students encounter the evaluation when they’re already in “student” mode and online. This semester, I sent an email that had the form embedded in it. Pretty easy. But if they saw it come through while in transit between classes or eating dinner, I’m sure they put it off. Perhaps if I already have them doing an activity online (such as informal writing in their googledoc or learning a music-by-ear assignment), they will take the time to do it.
- If I plan ahead, I can add a bullet point to their assignment for the day asking them to complete the evaluation. Again, I won’t hold them accountable for it, but most are really good about going down the to-do list for class preparation and would be likely to complete the evaluation.
- Do you have thoughts on how to better set this up?