Professors Call Q Guide "Worthless" Tool for Assessing Courses

Further, the scientific community is split on the value of these studies due to the difficulty of designing experimental studies in the complex college environment.

“At the college level, there are tons of courses. It is unclear what learning goals are across different types of courses, and there is a whole bunch of self-selection going on,” said sociology professor Christopher Winship.

Indeed, students are rating courses on factors that are not necessarily related to what they have learned. A 2006 memo presented to the Task Force on Teaching and Career Development cited studies that identified over a dozen possible sources of bias in student evaluations—including the course’s grading policy, whether it was an elective, its workload, class size, and a category entitled “instructor expressiveness and showmanship.”

In one well-known example, Nalini Ambady and Robert Rosenthal, two psychology researchers, found that the Q Guide scores given by Harvard students who had taken a course for a full semester strongly correlated with scores given by a separate group of Harvard students after watching the course instructor lecture for 30 seconds, with the sound off.

“Students may think they’re really answering the question about whether their homeworks were always returned on time, but they’re really just giving their gut feeling on whether they liked the person or not,” Lewis said.

INNOVATING ON ASSESSMENT

Faculty and administrators said that innovation in teaching and learning must also be accompanied by innovation in methods of assessing teaching.

“It will be important for FAS to stay on the cutting-edge of measures of teaching if, indeed, teaching is going to be a basis for decisions about promotions, tenure, and salary,” sociology professor Mary C. Brinton said.

“The mandatory use of the Q scores as evidence that we’re serious about teaching is not credible,” Lewis said.

Many faculty members suggested peer evaluation between professors, a method used at the Harvard Business School and recently adopted by the life sciences division in FAS, as an alternative system.

“You set out the criteria on which they are going to be evaluated. You tell them this. You observe them in the classroom. You give suggestions. You do it again in a year,” Lewis said.

But Jacobsen, who also mentioned peer evaluation as a potential assessment technique, said that faculty members may not be comfortable with collegial feedback on individual pedagogy. “Faculty are very used to peer review in our research, but we’re much less accustomed to peer review in our teaching,” he said.

Faculty members also questioned whether students should evaluate a course immediately upon its completion.

“What is more interesting, perhaps, is to know how students feel several years after the course,” Jacobsen said.

Currently, there is no system in place for retrospective student evaluation. Although annual senior surveys ask members of the graduating class to name their most positive and negative academic experiences, there are no questions about specific classes or professors.
