The AI Threat to Liberal Arts Is More Fundamental Than You Think

There’s an old Soviet anecdote about a nail factory. When instructed to maximize output by number, it churned out millions of tiny pins. When instructed to maximize by weight, it produced a few massive beams. Both outputs hit the metric. Neither produced usable nails. The story captures economist Charles A. E. Goodhart’s law: Once a measure becomes a target, it stops being a good measure.

The same can be said of academic life at Harvard. Professors lament that students treat coursework as a game of maximizing grades with minimal effort, a pattern AI has only accelerated. But in many industries, this pattern of optimization may be considered an asset. What looks like academic weakness may be professional strength. In this light, Harvard isn’t broken at all. In fact, it may be one of the world’s most effective training camps for metric-hacking.

Consider the participation grade in section. It’s meant to reward thoughtful contributions. In practice, it incentivizes short interjections that check the box for speaking that day without deepening discussion. At the macro level, the dynamic is everywhere: mining for “gems,” avoiding readings like the plague, cramming for exams, and trading problem set answers through whisper networks.

Unlike in the Soviet Union, though, this kind of metric-driven optimization doesn’t collapse the system. At the Kremlin on the Charles, it may actually work.

Look at the career choices of graduating seniors entering the workforce: In the Class of 2025, roughly 21 percent headed into finance, 18 percent into tech, and 14 percent into consulting, per The Crimson’s senior survey. These industries are more reliant on proxies or performance indicators than sectors like medicine, law, or education. In finance, analysts can spend more time updating complex models than deciphering their inner workings. In consulting, a polished slide deck can stand in for progress. In tech, growth and engagement numbers can matter more than whether a product is genuinely good for people.

This isn’t to say these jobs are unserious; on the contrary, they demand consistent, high-quality output. But in these contexts, maximizing the outcome is often the name of the game, regardless of the process. Reading every page of a book is rarely the most effective way to hit a deadline. Universities, in fact, may be unique among institutions in that their role is to confer understanding. The real world operates according to a different playbook.

Artificial intelligence exacerbates this dynamic. A student can generate an essay or problem set answers with a few keystrokes. The results look polished but often reflect little actual comprehension. Students with ChatGPT in the background can churn out the right outputs without necessarily grasping what they mean. In some jobs, too, this kind of cognitive offloading is sufficient. Producing signals of competence can substitute for competence itself.

Here the contrast with Harvard’s stated mission comes into focus. The College promises to “educate the citizens and citizen-leaders for our society” through “the transformative power of a liberal arts and sciences education.” That transformation, it says, begins with “exposure to new ideas, new ways of understanding, and new ways of knowing.”

Therein lies today’s paradox: Few tools fit the description of a “new way of understanding” as completely as AI, yet few threaten so thoroughly to foreclose the possibility of personal transformation. AI may be the greatest system of knowledge to date, but it is so effective that it may actually obviate our need — and undermine our ability — to understand.

Understanding, in the Harvard sense, is the point of coursework: grappling with ambiguity, inhabiting perspectives not your own, and following ideas to their full conclusion rather than skimming them for utility. If optimization trains us to succeed within existing systems, understanding equips us to ask whether those systems make sense at all, and, when they don’t, to imagine better ones.

These descriptions, both of Harvard and of certain careers, are surely oversimplifications. But I believe they get at something fundamentally true: The world is becoming a place where understanding is less essential to career success than ever before. Yet universities — perceived as a stepping stone to a good career — are ultimately designed to teach understanding.

Students are already raising the alarm. Some call for banning AI in humanities courses, others for AI-proof exams like in-class essays or oral defenses. Those guardrails have merit, but they skirt the deeper problem: AI is only the latest iteration of a long-running phenomenon. Harvard’s challenge in this new age isn’t just keeping AI out; it’s convincing a generation of manic optimizers why finishing the book or wrestling with the readings matters in the first place.

The centuries-old argument that liberal arts hold the key to eudaimonia — a strikingly conservative instinct for such a liberal institution — may no longer hold under the weight of the present moment. Harvard might have to reexamine the problem of understanding from the ground up and articulate, anew, the purpose of its core practices.

Maybe Harvard is preparing us perfectly. Maybe the world is what’s broken. And maybe, the best way to prepare for the world while at Harvard is not learning how to understand but instead understanding how to optimize.

Isaac R. Mansell ’26, a Crimson Editorial editor, is an Economics concentrator in Kirkland House.
