

One Step Forward, and the Luddites’ Risk: Generative AI at Harvard


I was heartened when Harvard purchased ChatGPT Edu accounts, granting undergraduates access to one of the best large language models at no cost.

That optimism was quickly dispelled when I started looking at course syllabi.

It is unclear to me whether many undergraduates will reap much benefit from this investment. A great number of faculty have adopted course policies banning the use of generative AI wholesale. This is an exceptionally sad state of affairs.

The syllabus for one of Harvard’s largest courses states succinctly, “The use of generative artificial intelligence (GAI) tools such as ChatGPT is not permitted for course assignments.” It is certainly not alone: I have seen this same type of language in syllabi for courses across the humanities and social sciences.


Predictably, faculty in some disciplines have been quicker to embrace the arrival of powerful foundation models.

“We ask that you do not use LLMs to substitute for the cognitive effort of engaging with the course material,” reads the syllabus for my machine learning class, which also asks students to be prepared to explain their work without assistance, including during class discussions.

I admire how deeply this course’s wonderful professor must have thought about the best pedagogical approach to contend with advancements in LLMs. And I would even say that her approach is the right one — for all courses.

All instructors should adopt policies permitting any generative AI use on all assignments and projects with the caveat that students be prepared to discuss their work without assistance. Courses would also benefit from asking students not to replace meaningful learning with these models.

Students should be forewarned that generative models’ output can closely mirror the data on which they were trained, a risk that can amount to plagiarism in academic work and that makes these models ill-suited as writers, not to mention that they are objectively poor writers to begin with.

Tests may be another setting where a prohibition on generative AI is justified: exams are meant to measure students’ active recall, so it is reasonable to bar these tools during that type of assessment.

The great advances in the capabilities of foundation models, especially LLMs, point toward an exciting future for machine learning and its transformation of the way we work. Already, college students, when their professors allow it, can outsource time-consuming, rote tasks like conducting parts of a literature review or writing boilerplate code, and focus instead on more meaningful, fulfilling work. That shift makes for better, happier lives.

I empathize with anyone, faculty included, who is still adjusting to the rapidly evolving landscape that AI has shaped in their field. But generative models will only grow more advanced. Professors who insist on following in the Luddites’ footsteps, refusing to engage with these models, are effectively admitting that their field has been trivialized by ever more powerful LLMs. And what will the academic community make of those professors’ research?

I once went to office hours for a machine learning class to ask for the professor’s feedback on my course project ideas. After sharing her thoughts on my proposals, the professor gave me advice I wasn’t expecting: bounce the ideas off an LLM to refine them before returning to discuss them with her. Given the pace of advancement in the space, she added, it would be wise to learn to work with LLMs.

This incredible professor’s wisdom inspires me, and should inspire all faculty as they think about the role of generative AI in the classroom.

If Harvard wishes to continue to lead in the centuries to come, it must embrace the change brought by LLMs in both teaching and research. The importance of foundation models will demand continual investment in compute cluster resources and in faculty working in this area.

We already have many wonderful faculty members working in machine learning, and Harvard has made admirable investments in compute cluster resources. But ever larger investments by peer institutions and industry will demand still more from Harvard if it is to stay competitive in this realm.

University administrators searching for an easy source of funding for this cause should look first to cuts in the areas and positions whose holders openly admit they are unwilling or unable to adapt to a changing era.

Our society asks all of us to adapt in the face of change. To take resources from outmoded teaching and research and redirect them to more courses and compute cluster resources for students and faculty in machine learning and artificial intelligence would do the academy and society a lot of good.

Those who resist change play a treacherous game. The future is here. Have the courage to embrace it.

Ian M. Moore ’26, a Crimson Editorial editor, is an Applied Mathematics concentrator in Quincy House.

