When students traipsed back into Harvard’s lecture halls this fall, some encountered a newly analog campus: seated exams, no-laptop policies during class, and assignments handed in on paper.
The changes were the accumulated result of three years that have tested academic integrity policies at Harvard and nationwide, ever since ChatGPT opened up a Pandora’s box of summarized class readings, churned-out code, and on-demand term papers.
And if there was any doubt that artificial intelligence is on the mind of Harvard’s faculty and administrators, David J. Deming — the College’s new dean — greeted freshmen at Convocation by urging them to prepare for a world revolutionized by generative AI.
“Young, educated people like you are already the heaviest users of AI, and you are creative and open-minded enough to figure out the best ways to use it, too,” Deming said.
There’s no question that AI has become ubiquitous in Harvard classrooms. Nearly 80 percent of respondents to The Crimson’s annual Faculty of Arts and Sciences survey in spring 2025 said they had seen coursework they knew or believed was produced with AI — a sharp rise from two years before, when more than half of respondents said they hadn’t received AI-generated work.
But it can be hard for faculty to spot AI-made work; just 14 percent of respondents to the FAS survey said they were “very confident” in their ability to distinguish between AI and non-AI submissions. A Pennsylvania State University study found that humans can only identify AI-generated text roughly 53 percent of the time — barely better than flipping a coin.
AI detection programs can be unreliable. And as large language models become increasingly powerful, students have become open about how constantly they use them.
Now, three years after ChatGPT was launched, Harvard professors are reimagining how they teach. Some are encouraging students to embrace AI to crunch data, translate primary sources, and brush up on course material before exams. Others are trying to AI-proof their classes with in-person exams and assignments.
Whatever their approach to AI, many Harvard instructors say there’s no going back to how they used to teach.
“It doesn’t make sense to prohibit AI and then assign take home essays,” Dean of Undergraduate Education Amanda Claybaugh wrote in an email.
Claybaugh, who has used her position to lead a push to make Harvard’s undergraduate curriculum more rigorous, wrote that it would now fall to faculty to prepare students for a future shaped by AI.
“AI is a powerful tool in the hands of someone who knows how to evaluate its work — and that means, someone who knows how to do that work themselves,” she wrote. “We need to make sure that students are learning that.”
No Blanket Policy
“There have always been shortcuts,” then-College Dean Rakesh Khurana told The Crimson in December 2022, less than a month after ChatGPT was released.
Khurana said it would be up to students to decide whether to use the new technology as a shortcut — or to keep learning the hard way. As for penalties, he said, “we leave decisions around pedagogy and assignments and evaluation up to the faculty.”
But behind the scenes, administrators were puzzling out a strategy for Harvard to adapt to an invention with unpredictable impacts and ever-expanding applications.
In 2023, FAS Dean Hopi E. Hoekstra — still in her first semester in the role — turned to Physics and Astronomy professor Christopher W. Stubbs as an adviser on artificial intelligence. According to Stubbs, the FAS has approached generative AI as “an experiment in progress.”
“The fact that it is moving so fast imposes on the faculty the need to stay current and stay up to date,” Stubbs said, “and if it is changing too fast, I think that, again, argues against trying to issue a blanket policy.”
Though College administrators have said that submitting AI-generated work without attribution violates the Honor Code, the College gives instructors flexibility in how to treat AI use in their own classrooms. The University’s initial guidelines on AI, issued in summer 2023, also nod to academic integrity concerns but provide no specifics on what to do when students use AI for their coursework.
A set of FAS guidelines addresses classroom AI use more directly, providing three draft policies with varying levels of tolerance: a “maximally restrictive” policy, a “fully-encouraging” policy, and a policy somewhere in between.
Since they were introduced in summer 2023, the template policies have proliferated across College syllabi. The Crimson analyzed AI policies from the 20 most popular courses at the College in fall 2025 that have been taught every fall since 2022 and have syllabi from each year available online. In fall 2022, none of the 20 sampled syllabi included any mention of AI or ChatGPT. Three years later, the numbers have flipped; all but two of the sampled courses now have policies regulating use of the technology.
Most of the sampled course policies allow students to use AI, at least to some extent. For instance, the syllabus for Stat 100 says the course “encourages students to explore the use of generative artificial intelligence” in order to “gain conceptual and theoretical insights, as well as assistance with coding.”
Six of the sampled courses — Chemistry 17, GenEd 1074, English 10, Life Sciences 20, Mathematics 55, and Spanish 10 — include outright bans on the use of AI. And the majority of the courses discourage AI use on at least some assignments.
Some faculty have even changed how they evaluate students to AI-proof their assessments. History professor Jesse E. Hoffnung-Garskof ’93 said he swapped out the final paper he used to assign in his undergraduate course on immigration law in favor of an oral exam.
“I realized that the assignment that we were using for many years was just too easy to create a response to with a large language model,” he said.
Making Harvard ‘AI-Resilient’
Three weeks into his popular undergraduate course on Bob Dylan, Classics professor Richard F. Thomas asked a generative AI model to produce lyrics that mimicked the famed artist, whom he calls a “modern classic.” The output, Thomas said, fell far short of Dylan’s masterpieces. But that was the point.
“It could never — it will never, in my view — be a substitute for what the human mind at its highest and most interesting level produces,” Thomas said.
Thomas isn’t the only Harvard professor adopting AI. Some instructors have rolled out chatbots that were fed specific content to tailor them to their courses. Computer Science 50 professor David J. Malan ’99 began offering a chatbot for CS50 students in fall 2023.
Since then, other popular undergraduate courses have followed suit. Economics lecturer Maxim Boycko, who teaches Economics 1010a: “Intermediate Microeconomics,” said he introduced a chatbot this fall so students can ask questions without worrying whether they are “asking a stupid question before your peers or in front of a tutor.” Life and Physical Sciences A: “Foundational Chemistry and Biology” also introduced a tutor chatbot in 2024.
Other instructors have asked students to use AI as homework. Peter K. Bol, a professor of East Asian Languages and Civilizations, asks students to complete a weekly AI assignment in his course Gen Ed 1136: “Power and Civilization: China.” One entailed using an AI platform to translate a centuries-old Chinese text, then asking the model follow-up questions to better understand the topic. Students then discuss their experiences in class.
“Everyone is going off and doing something slightly different, and so they got exposed to each other’s ideas,” Bol said.
In some fields, instructors see training students in AI as an essential part of preparing them to conduct their own research. Statistics lecturer James G. Xenakis said he encourages his students to use AI, adding that no technology has accelerated his own research more than OpenAI’s GPT models because they can rapidly process and reinterpret data.
“My biggest concern with AI is that kids aren’t learning how to use it as well as they could be,” Xenakis said.
The Bok Center for Teaching and Learning, which provides pedagogical training and resources for instructors, has spent the past several years helping instructors craft AI tools and develop new assignments, and running workshops on how to effectively leverage AI for teaching.
One of the most popular requests from faculty has been help with creating AI tutor chatbots tailored to their courses, according to Madeleine Woods, a project lead for AI at the Bok Center. Lately, there has been a trend away from requests for all-purpose tutor chatbots toward more specialized applications of AI, like transcribing oral exams or debugging code, Woods said.
“Increasingly, we’re finding people can be frustrated with the quality of the output,” Woods said. “So people are kind of moving away from this anthropomorphic one-size-fits-all.”
The Bok Center has also fielded requests to make assignments AI-proof — though Woods said the center prefers the term “AI-resilient.” Those asks reflect how some professors remain deeply skeptical about whether AI belongs in the classroom.
For some faculty, the concern is that students will use AI to cheat. In The Crimson’s annual senior survey last spring, 30 percent of respondents said they had submitted AI-generated work as their own.
Other faculty said they were concerned that using AI would undermine the learning process for their students.
“It’s interesting to think about, just as Frankenstein’s monster is interesting to think about. But giving it a central role in education — at least, in humanities education — seems to me a terrible mistake,” English professor Deidre S. Lynch said. “A denial of everything that makes human beings humans.”
In his quantum field theory course, Physics professor Matthew D. Schwartz used to give take-home final exams. One problem asked students to compute the effect of supersymmetry on the muon magnetic moment. But AI platforms’ increasing capabilities have forced him to administer in-person exams instead — to the detriment of students’ learning.
“With a three hour in-class exam, we test memorization, speed, and to some extent luck. These skills are not well aligned with the skills our graduate students need to be successful in research,” Schwartz said.
Hoffnung-Garskof, the History professor, said he thinks Harvard students have been driven to AI not because they trust the work it produces, but because they’re juggling too many demands on their time — a common concern among faculty who worry that students are prioritizing extracurriculars over coursework.
“They feel overwhelmed. My sense here is that it’s about increasing the amount that you want to accomplish in a period of time,” Hoffnung-Garskof said.
“The Harvard students that I encounter don’t really think today AI can write a better paper than you all can,” he said. “Most of you are too committed to your own excellence to trust a machine.”
—Staff writer William C. Mao can be reached at william.mao@thecrimson.com. Follow him on X @williamcmao.
—Staff writer Veronica H. Paulus can be reached at veronica.paulus@thecrimson.com. Follow her on X @VeronicaHPaulus.
—Staff writer Victoria D. Rengel can be reached at victoria.rengel@thecrimson.com. Follow her on X @VictoriaRengel_.