
AI Defeats the Purpose of a Humanities Education


It’s time to ban generative artificial intelligence in humanities courses at Harvard.

We are not against the use of all digital tools, of course. Spellcheck keeps our professors (and Crimson editors) from spending precious time wading through typo-ridden prose. Thesauruses help us express ourselves more precisely.

But AI is essentially different. At no stage do traditional tools, whether analog or digital, “think” for us. There is a world of difference between catching a “they’re”/“there” mix-up and rewording our sentences for clarity, suggesting counterexamples to our arguments, or restructuring our papers.

While the two of us disagree on whether there are any valid uses of AI in the research process, using AI to “improve” one’s writing or “read” one’s readings fundamentally misunderstands what a humanities education is about.


Coursework is not meant to produce perfectly polished writing. Rather, it’s about learning how to express oneself and engaging in a fundamentally human activity. The worst essay written without the help of generative AI serves this purpose far better than one touched by ChatGPT.

In the courses we have taken, the policies on appropriate uses of artificial intelligence vary widely. Some professors allow the use of AI in most cases (with a citation) short of writing a paper; some forbid it entirely. Other professors strike a middle ground, distinguishing between acceptable and unacceptable uses of these tools. For instance, it might be permitted to use them to point out flaws in one’s argument or potential objections but impermissible to use them to formulate one’s original argument.

Even these seemingly benign uses, however, miss something essential about a humanities education: The point is learning how to write, to read, and to think, even imperfectly. In days past, suggestions to improve one’s writing came from intellectual conversations with peers, and students had to learn to critique their own writing without the aid of a 24/7 personal assistant.

If we permit the use of generative AI in humanities classes, then students are no longer encouraged to develop their capacity for intellectual conversation with others, nor do they practice how to think creatively without the heavy-handed guidance of artificial intelligence.

Moreover, the humanities lose their crucial human element (it is in the name, after all) with any use of generative AI. Central to philosophy, literature, and every other humanities field is the goal of understanding and communicating the human condition. An AI chatbot lacks the subjective understanding of what it is to be human that motivates the humanities to begin with.

Sure, it may save time to, say, edit using AI. However, visiting a professor during office hours or collaborating with a friend to polish an assignment keeps every last word and turn of phrase rooted in human experience — the very experience the humanities seek to explore. There is a deep disconnect between dedicating one’s studies to the humanities and finding the assistance of other humans a waste of time.

Some suggest that banning AI won’t make a difference, since there will always be students who use it regardless. But lowering standards of rigor for everyone helps no one. We do not permit cheating simply because we know many students will cheat; neither should we encourage the use of generative AI because many will be tempted to use it. Even if a student uses AI to achieve a higher grade than deserved, they have lost the experience that makes engaging in humanistic inquiry fruitful.

As AI becomes ever more ubiquitous, the humanities must resist acquiescing with more determination than ever. There is no need for policies that allow the use of generative AI. These tools serve no worthwhile academic end.

While Harvard apparently worries that its educational programming is losing rigor to grade inflation and lax attendance norms, it can start making a difference by curtailing a problem partly of its own making: Ban AI use, and the quality of humanities education at Harvard will improve.

The future of the humanities may be in flux, but one thing is for certain: It will always be a fundamentally human endeavor.

Adam N. Chiocco ’27, a Crimson Editorial editor, is a Philosophy concentrator in Pforzheimer House. Allison P. Farrell ’26, a Crimson Editorial editor, is a Philosophy concentrator in Leverett House.
