A recent Crimson survey of the Faculty of Arts and Sciences revealed a troubling concern among professors: two-thirds of respondents believe their students “do not sufficiently prioritize their coursework.”
Some point to pre-professional pressures or technological distractions. Others blame grade inflation. But a new factor is likely exacerbating the problem: artificial intelligence.
Over the past few years, there has been a surge in AI use among students. The same survey revealed that nearly 80 percent of professors know or suspect that they have received student work produced with AI.
Faculty say students are disengaged — and they’re not wrong. One reason is that AI has changed not only how we learn, but also how we think about learning itself. The way many of us use AI — as a solution manual rather than a learning assistant — is making that problem worse. A 2024 survey of Harvard undergraduates found that roughly 25 percent of students who reported using generative AI have begun to rely on it as a substitute for office hours and required readings. Students are wrestling less and less with challenging material.
That said, Harvard professors are no longer pretending that AI doesn’t exist. This semester, I’ve noticed a dramatic shift in how faculty approach AI. As a junior, I can say with confidence that this year’s course policies around AI do not resemble those I encountered in my first two years.
Gone are the blanket bans, the moral panic about the effect on learning, the vague threats of disciplinary action. Now, professors are taking a more nuanced approach in their syllabi. They are acknowledging AI, setting clearer guidelines for its use, and designing assignments that preempt it. One of my professors mentioned that some classes have moved away from take-home essays, opting instead for in-person exams or oral presentations.
Others treat AI the way professors once treated Google: useful for gathering or summarizing background information, but not as a replacement for thinking or writing. I’ve seen syllabi that explicitly outline where AI use is allowed and where it crosses the line.
These changes are, in my view, encouraging. Faculty are showing flexibility and realism. They’re acknowledging the tools students actually use and setting us up to use them responsibly, in ways that enhance learning.
But while professors are adapting, many students are not using that flexibility responsibly. Too often, AI is still being used as a shortcut, not a supplement. It’s a way to get through a p-set faster, finish a discussion post in 30 seconds, or skim the readings without actually opening the book. Shifts in AI policies require that students re-engage with learning rather than merely optimize for efficiency.
That’s the real problem.
If students want to be treated as adults who can responsibly use powerful tools, we have to act like it. That means developing better habits, approaching our work with more honesty, and recommitting to curiosity, even when the assignment is tedious or the topic isn’t the most exciting.
Between clubs, jobs, and internship applications, it’s easy to see coursework as just another box to check. In part because of Harvard’s culture of efficiency and perfection, AI has become a lifeline for getting everything done.
There are ways professors can support this shift and encourage thoughtful learning. They can assign projects where students use AI with classmates for collaboration, not solo shortcutting. For example, students could be tasked with using AI in small groups to brainstorm ideas or to evaluate and critique AI-generated content. AI is less useful for long-form, collaborative work like group papers and media projects, which require real organization and coordination.
Another viable strategy could involve increasing the use of speech-based assessments, like oral exams, debates, or presentations, which test facility with the material and genuine understanding rather than a polished final product. These kinds of assessments encourage deeper engagement and force students to articulate their thinking clearly and authentically.
To build habits of self-awareness and encourage responsible use and transparency, professors can also require short “AI use statements” where students disclose if and how they used AI tools. This can help them reflect on the boundary between assistance and substitution.
Institutions and departments should treat AI like calculators in math: It does not need to be banned by default, but its use depends on the skill being built and assessed.
To facilitate this transition, Harvard should build on its current sample policies and general guidance by developing a more specific, centralized, and evolving framework that departments can adapt to their needs. Such a framework would promote greater consistency in AI policies across courses while still allowing for flexibility.
But ultimately, the responsibility lies with students. The more professors show trust and adaptability, the more we need to earn it.
Nonetheless, we should be optimistic about the direction AI use at the University is heading. Harvard, like every other institution, is figuring out what learning looks like in the AI age. It’s messy, imperfect, and evolving quickly. But if this semester is any indication, faculty are meeting us halfway.
Now it’s on us to meet them there.
Catherine E.F. Previn ’27, an Associate Editorial editor, is a Government concentrator in Cabot House.