At a March 13 panel, three Harvard faculty members debated whether generative artificial intelligence would be a useful tool or a perilous shortcut in scholarship and teaching.
The panel — the first in a four-part series evaluating the effects of AI on the Faculty of Arts and Sciences’ educational mission — was moderated by Sean D. Kelly, the dean of the FAS’ Arts and Humanities division.
Kelly probed the faculty panelists on where and why they thought AI use was appropriate, but he struck a largely bullish tone on his own use of AI as an interlocutor, saying it could be a valuable addition to conversations in the humanities.
“I have to confess at this moment that I use these generative AI tools actually almost every day for large portions of the day,” he said.
Even if generative AI cannot definitively interpret things like the role of ambition in Macbeth’s downfall, Kelly said, it can put options on the table — which students, instructors, and researchers can then use to develop their own conclusions.
“I love listening to its answers and trying to figure out what’s wrong with them, or how they make me think different things about what I ought to be exploring,” Kelly said. “It spurs you to further create a thought that you might not have had if you didn’t have this response.”
The panelists agreed that AI could open up new research frontiers, both in their fields and across disciplinary boundaries. Matthew Kopec — the program director of Embedded EthiCS, which creates modules on ethics for computer science courses at Harvard — said he thought AI was already advancing some fields of humanistic research.
“There are humanists in the room who have built big databases to build taxonomies of story themes or of bibliographical names in ancient Chinese literature,” said Kopec, who is a Philosophy lecturer. “I think that area of digital humanities is very rich.”
Michael P. Brenner, a professor of Applied Mathematics, Applied Physics, and Physics, said the availability of AI tools makes it easier for students to quickly solve algebra problems, rather than wading through them slowly or getting lost in calculations.
“They can try more things, so they will make discoveries faster,” he said. AI tools, he added, have “the potential to raise the level of classes” by allowing students to learn more advanced subject matter without needing to painstakingly master elementary techniques.
But that cuts both ways, he said: If students can use AI to spit out answers, how will they ever learn those fundamental techniques?
But University Professor Gary King, a social scientist and statistician who holds Harvard’s highest faculty rank, said he thought that AI tools would allow scientists to discard outdated methods and approach new questions.
“We’re no good at arithmetic anymore. We don’t need to be,” King said. “I think that’s probably a good thing. It frees up our very limited cognitive capacity for other things.”
“You should be the kind of person that uses whatever the best tools are to meet the next set of problems,” he said.
But Kelly said that even if AI was an effective way to generate answers, that was not always the point. Instead, he said, students often learn skills to change themselves and to understand the answers they arrive at.
“Suppose I saw a student of mine running along the Charles River, and they were huffing and puffing,” Kelly said. “And I said, ‘heavens, why are you running? There’s a perfectly good motorized vehicle that can get you from A to B. It does it faster. It does it more efficiently. It does it better. Why don’t you just use that?’”
That question, Kelly said, would be misunderstanding why the student runs: “He’s not running to get from A to B, he’s running to transform himself. He’s running to change his body and his way of encountering the world.”
The panelists also discussed whether they thought it was appropriate for instructors to use AI to draft recommendation letters.
Kopec said he thought using AI to generate a first draft could subtly alter the tone of a letter. He said suggestions from predictive text in email applications and on smartphone keyboards could explain the prevalence — or, perhaps, overuse — of exclamation points in emails.
“Emails from 10 years ago had no exclamation points,” Kopec said. “Now, if you don’t have an exclamation point, someone’s going to check up on you — and so it actually does affect the tone.”
Brenner said he worried instead that AI-generated recommendation letters might sound every bit as convincing as letters hastily written by a professor, but would not actually provide an expert evaluation of their subjects’ academic work.
“I’m much less worried about the tone than about the content,” he said.
Several of the speakers said that the effects of generative AI use depended on users’ intent and expectations, not just the nature of the models themselves.
Kopec said users need to recognize that, when they use generative AI, “the model doesn’t really care whether what it’s putting out is true or false.”
“But I think a lot of people actually don’t have that in mind when they’re using these tools,” he added.
Kelly acknowledged that what drives him to use AI tools — his constant questioning — “might be different from what students would want.”
“I have this kind of problem where questions are on my mind all the time,” he said. “That’s why I am where I am. But my wife is so happy I’m not constantly asking her my questions.”
—Staff writer Ellen P. Cassidy can be reached at ellen.cassidy@thecrimson.com. Follow her on X at @ellenpcassidy.