As Harvard continues to defend its race-conscious admissions policies against a lawsuit widely expected to reach the Supreme Court, it is instructive to examine how affirmative action became one of the most contentious social and political issues in higher education. This column examines the shifting narratives surrounding affirmative action: the changing arguments in favor of it, the arguments against it, and where it fits into the American cultural and racial landscape today.
The passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965 represented a watershed moment in the long and arduous struggle for racial equality in the United States. The Voting Rights Act secured Black Americans the right to vote, enforcing the Fourteenth and Fifteenth Amendments. The Civil Rights Act — specifically Title VII, which prohibited employment discrimination based on race, religion, sex, and national origin — secured for them the right to take up jobs without fear of discrimination. Having achieved, at least on paper, a measure of racial equality, the civil rights movement turned its focus to securing economic equality for African Americans: equality not just of status, but of opportunity. At the core of that continuing struggle stands affirmative action.
Most people, regardless of political, racial, or social affiliation, would not contest two basic facts about race relations in the United States: first, that the country has a prolonged history of oppression and discrimination against African Americans, and that this constitutes a grievous harm; second, that despite the equal legal status of African Americans, racial discrimination and inequality persist to this day. And if you accept that a historic wrong persists today, presumably you also accept that we (society) must do something to correct it.
In this formulation, affirmative action — a fancy name for doing something to account for a wrong — seems almost uncontroversial. It is only when we move past the need to do something and start to consider what that entails that a divergence emerges.
Scholars have long debated how a just affirmative action program should treat historically disadvantaged groups. Some advocate expanding opportunity for such groups while avoiding explicit distinctions along racial lines. Others emphasize the need for preferential policies in order to counter decades of systemic racism and ensure adequate representation.
Of those two schools of thought, the first traces its roots back to the original conception of the term “affirmative action.” In 1961, President John F. Kennedy ’40 signed Executive Order 10925, in which the phrase first appeared. In the Kennedy administration’s own words, the idea was to ensure that “Americans of all colors and beliefs will have equal access to employment within the government, and with those who do business with the government.” In other words, employers should not treat any applicant unfairly on account of race; nor should any employee be subjected to racial discrimination.
In his book The Affirmative Action Puzzle, Melvin Urofsky describes this as “soft” affirmative action: a vaguely defined policy that spoke of “equal access” (best understood as encouraging all races to apply for jobs). By mandating equal treatment of applicants and employees, this approach requires nothing more than banishing race as a factor in the hiring and training of workers. At the crux of this version of affirmative action lies the belief that the law should be “colorblind,” in line with Kennedy’s proclamation in a televised speech in 1963 that “race has no place in American life or law.” Doing something to prevent discrimination, then, simply involves enforcement of existing anti-discrimination laws, such as the Civil Rights Act, and, at best, increased efforts to attract applicants from traditionally underrepresented groups.
The second school of thought traces its roots to the aftermath of the civil rights movement that culminated in the passage of the Civil Rights Act and Voting Rights Act. Throughout the movement, activists had subscribed to the colorblind theory, believing that the elimination of discriminatory laws and the subsequent attainment of equal legal status would be a catalyst for greater economic equality. Instead, although the civil rights legislation of the 1960s did much to improve the economic condition of Black families, not everyone benefited equally. As Urofsky describes it, “they [civil rights leaders] ignored a fact that had long been known: a merit-based economy guarantees that many people — those least qualified educationally or occupationally — will be stuck at the bottom.”
Black Americans’ overrepresentation among America’s impoverished population, especially when compared to white Americans, persisted after the passage of “colorblind” anti-discrimination laws. These practical inadequacies of the soft, colorblind notion of affirmative action led to demands for what Urofsky calls “hard” affirmative action policies — programs that involved giving preferential treatment to disadvantaged groups in order to compensate for past discrimination and ensure a “critical mass” of people from minority groups.
As we will explore in the coming weeks, the divergence between these two approaches — should members of a race be accorded preferential treatment to account for past discrimination? — forms the crux of modern affirmative action jurisprudence. Six decades on from Kennedy’s executive order, as the country once again struggles with a racial reckoning, affirmative action has grown ever more controversial and divisive. No lasting settlement on the issue is in sight.
Shreyvardhan Sharma ’22, a Crimson Editorial editor, is a Computer Science concentrator in Eliot House. His column appears on alternate Mondays.