Letters

Flawed Methodology on Living Wage Poll

To the editors:

According to “Weekend Survey Shows Lack of Support for Sit-In” (News, April 30), student support for a living wage for all Harvard workers “has dropped significantly in the last year.” Based on the details in the story, I draw different conclusions. First, the survey method makes it impossible to know what students believed about a living wage last Sunday. And second, we cannot conclude that levels of support have changed.

Of the five attitude questions in the poll, one—whether students would be willing to pay higher tuition if that were necessary to pay for a living wage for all workers—was not kosher. For one thing, the implications of the question are vague. What if it had asked whether students would be willing to pay an extra $5 a year in tuition? An extra $20? Moreover, I’ve seen no claim from Harvard administrators that they would pay for a living wage through a tuition increase. Polls that contain a political message (e.g., a living wage might cost you more) have been termed “push polls,” and their use in last fall’s campaigns to transmit falsehoods about candidates has been decried. Legitimate surveys avoid biased and vague items like the plague. Such questions not only produce meaningless results, they also bias the answers to the questions that follow them by shaping the meaning respondents attach to them. And in e-mail surveys, respondents can see all the questions before answering any of them, so even answers to questions that come before a biased item can be affected.

Even if students’ responses to other questions were not skewed, it is still impossible to draw any inferences about change in support for a living wage since The Crimson’s last poll in January 2000.

There are four reasons why the percentage of students who support a living wage for all Harvard workers might differ across the two surveys: (1) by chance alone, the first survey included more pro-living wage students than the second; (2) the two surveys differed in question wording, question order, sample selection, or survey administration (e-mail vs. telephone); (3) the attitudes of all Harvard students changed over the last 15 months, the explanation The Crimson prefers; and (4) the two samples did not come from the same populations.

The article does not provide enough information for readers to assess the possibility that the difference was due to random sampling alone, because it does not tell us how many people responded to the items being compared. Nor does it tell us whether the comparison is based on identical surveys administered through identical designs. Thus, we cannot tell whether the differences reflect question wording, question ordering, survey administration effects, or real change.
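As a rough illustration only (these figures are invented, not The Crimson’s), here is the kind of check a reader could make if the sample sizes had been reported: comparing two poll percentages against the variation that chance alone would produce.

# A minimal sketch with hypothetical numbers (the actual counts were not published):
# a two-proportion z-test asking whether a drop in support could be chance alone.
from math import sqrt

n1, p1 = 400, 0.70   # hypothetical January 2000 poll: 400 respondents, 70% support
n2, p2 = 300, 0.62   # hypothetical April poll: 300 respondents, 62% support

# Pooled proportion and standard error of the difference between the two percentages
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(f"z = {z:.2f}")  # |z| below about 1.96 means chance alone could explain the gap

Without the number of respondents to each item, no such check is possible, which is precisely the missing information.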

Given the timing of the two surveys, it is quite likely that the populations from which the samples were drawn differ, the fourth possible explanation. Last Sunday’s survey population comprised students who were accessible by e-mail or telephone between midnight and 8 p.m. Anyone gone for the weekend, or camping out in the Yard for that matter, was not part of that population. The January 2000 poll, in contrast, surveyed people on campus immediately before or during finals. The two populations surveyed may also differ because of “nonresponse bias.” In last week’s survey, just 62 percent of the random sample replied. Before generalizing their responses to all Harvard students, we need to know whether the 38 percent who did not answer differ systematically from the respondents. Survey research routinely compares respondents with nonrespondents on known attributes (concentration, sex, economic background, national origin) to allow inferences about the extent of nonresponse bias.
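A sketch of that routine comparison, again with invented numbers, might look like this:

# Hypothetical comparison of the drawn sample with those who actually replied,
# on an attribute known for everyone in the sample (here, class standing).
sample = {"upperclass": 250, "first_year": 150}       # full random sample of 400
respondents = {"upperclass": 170, "first_year": 78}   # the 62 percent who replied

for group in sample:
    frame_share = sample[group] / sum(sample.values())
    resp_share = respondents[group] / sum(respondents.values())
    print(f"{group}: {frame_share:.1%} of sample vs. {resp_share:.1%} of respondents")

# Large gaps between the two columns would suggest that the 38 percent who did
# not answer differ systematically from those who did.

If the respondents’ profile departs noticeably from the sample’s, generalizing their answers to all Harvard students is risky.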
