The College of Charleston has recently moved to a paperless, online-only course instruction evaluation system. The obvious benefit of the new system is that instructors are not required to use class time for student evaluations, and no students are required to shuffle sealed envelopes from one building to another once the evaluations are complete. I’m a big proponent of technology-enhanced learning, and while I appreciate the time (and environmental) savings of the new system, I find myself frustrated with it. One problem is that every semester the response rate is very low. Any of our Math 104 (“Elementary Statistics”) students can tell you about the issues with a voluntary response sample.
But the low response rate isn’t my main problem with the evaluations. In an ideal world, the course evaluations would provide statistically meaningful data that is useful in helping me guide course design, structure, and content. Unfortunately, the evaluations don’t do this. For example, one question asks students to rate (using a Likert scale) the statement, “The instructor showed enthusiasm for teaching the subject.” Yes, I am enthusiastic in my classroom (both about teaching and about mathematics), and I am happy that my students notice and enjoy my enthusiasm. But this doesn’t help me teach the course better. I would prefer student feedback on statements like, “In this course I learned to work cooperatively with my peers to learn mathematical concepts.”
Overall, my issue with the evaluations is that the questions posed are teacher-centered instead of learner-centered. Example: Rate the statement “Overall this instructor is an effective teacher.” This statement removes the student’s responsibility for their own learning. Compare with the following: Rate the statement “Overall in this course I developed skills as an effective learner.” The biggest goal I have in a mathematics course is to provide students with problem solving skills that they can use beyond my classroom. If a professor often gives a fantastic lecture, then that’s great; but that may not be helpful to students five years from now. Instead I hope to give students skills, practice, and experience in critical thinking, problem solving, complex reasoning, etc. Rating whether or not they’ve learned these skills is more important than rating “Overall, the required textbook was useful.”
Of course, figuring out how students have grown academically or intellectually is difficult. In this semester’s Precalculus classes, I’m working with another instructor on designing course content. One of the things we decided to do was to use something similar to the Student Assessment of their Learning Gains (SALG) tool in an attempt to gather data on student progress through the course. Students first take a benchmark SALG survey, and they will repeat a similar survey two or three times throughout the semester. We are hoping to gather meaningful data on the growth of their skills by tracking things like whether they are in the habit of “using systematic reasoning in the approach to problems” or “using a critical approach to analyze arguments in daily life.” Hopefully this data will prove useful as we continue to tweak the course moving forward.
I agree with everything here. I would add another annoyance surrounding student evaluations: the over-reliance on student evaluations in determining tenure and promotion. I think that some people are good at getting strong evaluations without being good at teaching, and some people probably help the students learn a lot but are not well-liked.
I don’t know what this would be, but I would imagine it would involve regular visits to classes (weekly?), review of the course policies, and a bunch of other stuff that I cannot think of. Of course, this is expensive to do, so it will probably never happen.
Re: the over-reliance on student evaluations in determining tenure and promotion
My status as “Visiting Assistant Professor” extends until May 15th, 2015. (I’m currently on a 3-year contract.) Among other things, I am ineligible for tenure and promotion. I do wonder how much my teaching evaluations will affect decisions about renewing my contract (if that’s even possible politically/fiscally). So, at least in my case, it’s not entirely clear if and how my teaching evaluations matter beyond my own goal of improving my teaching.
I too was underwhelmed by the responses to the traditional “Did the instructor come to class prepared?” questionnaire. I didn’t really find anything that would improve the course / the learning / my instruction. I looked at it and thought, “meh.”
Thankfully, I had also implemented a SALG-like survey in class at both the mid-point and end of the semester. These responses were much richer and more honest, and they helped me make some positive changes to the course. They provided insight into the students’ learning, which, to my mind, is the whole point.