Sept 2016 Advocate

FACULTY EVALUATION

Studies find that student evaluations are unreliable and biased measures of faculty performance

By Eric Brenner, Advocate Editor

In the current contract negotiations with our Union, the District has proposed significant changes to the faculty evaluation procedures, including requiring student evaluations in every class, every semester. Numerous recent studies, however, have raised serious questions about the accuracy, reliability, and usefulness of student evaluations of faculty. Some of these studies are described below.

An article in the January 11, 2016 issue of Inside Higher Ed reported: “There’s mounting evidence suggesting that student evaluations of teaching are unreliable. But are these evaluations so bad that they’re actually better at gauging students’ gender bias and grade expectations than they are at measuring teaching effectiveness? A new paper argues that’s the case, and that evaluations are biased against female instructors in particular in so many ways that adjusting them for that bias is impossible.” The paper, titled “Student Evaluations of Teaching (Mostly) Do Not Measure Teaching Effectiveness,” was published January 7, 2016, in ScienceOpen Research.

A research study titled “Evaluating students’ evaluations of professors,” published in the August 2014 issue of the Economics of Education Review, compared students’ evaluations of a particular professor to how well those students performed in a subsequent course. The study found that professors whose students earned higher grades in later classes (an objective measure of effective teaching based on student outcomes) received lower ratings from their students. As one author of the study concluded, “If you make your students do well in their academic career, you get worse evaluations from your students.”

A 2010 literature review, “The Relationship Between Student Evaluations of Teaching and Faculty Evaluations,” in the Journal of Education for Business cited “much evidence suggesting that SE [student evaluation] ratings are influenced by extraneous factors that are not a valid indication of teaching performance.” The authors provided a comprehensive list of references and noted that SEs can be influenced by student characteristics (e.g., motivation for taking a course, disposition toward instructor and courses, expected course grade), instructor characteristics (e.g., gender, rank, experience, personality traits), course characteristics (e.g., class size, grading leniency, course difficulty), and other environmental characteristics (e.g., physical attributes and the ambience of the classroom).

A January 2014 annotated bibliography from the Auraria Library of the University of Colorado Denver listed eight different studies finding “Bias in Student Evaluations of Minority Faculty.” One of the studies described was “Are Student Teaching Evaluations Holding Back Women and Minorities? The Perils of ‘Doing’ Gender and Race in the Classroom” (chapter 12 of Presumed Incompetent: The Intersections of Race and Class for Women in Academia, University Press of Colorado / Utah State University Press, 2011). Some of the conclusions of this study were: “Do evaluations less often but more deeply. Get students to think, not react intuitively… Think of teaching as an ongoing process, not an end product… If decision makers do not take the time or care to fully understand the candidate’s teaching file, including evaluations, and permit important personnel decisions to proceed on the basis of potentially misleading or biased data, then they ethically fail the professoriate, students, and the institution.”