In the depths of fall term finals, having completed a series of arduous exams, one student was exhausted. The only thing between her and winter break was a timed exam for a General Education class that she was taking pass-fail. Drained, anxious and feeling “like a deflated balloon,” she started the clock. The exam consisted of two short essays. By the time she finished the first one, she had nothing left to give. To pass the class, all she needed was to turn in something for the second essay. She had an idea.
Before finals started, her friends had told her about ChatGPT, OpenAI’s free new chatbot, which uses machine learning to respond to prompts in fluent natural language and code. She had yet to try it for herself. With low expectations, she made an account on OpenAI’s website and typed in the prompt for her essay. The quality of the results pleasantly surprised her. With some revision, she turned ChatGPT’s sentences into her essay. Feeling guilty but relieved, she submitted it: Finally, she was done with the semester.
This student and others in this article were granted anonymity by The Crimson to discuss potential violations of Harvard’s Honor Code and other policies out of concerns for disciplinary action.
Since its Nov. 30, 2022 release, ChatGPT has provoked awe and fear among its millions of users. Yet its seeming brilliance distracts from important flaws: It can produce harmful content and often writes fiction as if it were fact.
Because of these limitations and the potential for cheating, many teachers are worried about ChatGPT’s impact on the classroom. Already, the application has been banned by school districts across the country, including those of New York City, Seattle, and Los Angeles.
These fears are not unfounded. At Harvard, ChatGPT quickly found its way onto students’ browsers. In the midst of finals week, we encountered someone whose computer screen was split between two windows: on the left, an open-internet exam for a statistics class, and on the right, ChatGPT outputting answers to his questions. He admitted that he was also bouncing ideas for a philosophy paper off the AI. Another anonymous source we talked to used the chatbot to complete his open-internet Life Sciences exam.
But at the end of the fall term, Harvard had no official policy prohibiting the use of ChatGPT. Dean of the College Rakesh Khurana, in a December interview with The Crimson, did not view ChatGPT as representing a new threat to education: “There have always been shortcuts,” he said. “We leave decisions around pedagogy and assignments and evaluation up to the faculty.”
On Jan. 20, Acting Dean of Undergraduate Education Anne Harrington sent an email to Harvard educators acknowledging that ChatGPT’s abilities have “raised questions for all of us.”
In her message, Harrington relayed guidance from the Office of Undergraduate Education. At first glance, the language seemed to broadly prohibit the use of AI tools for classwork, warning students that Harvard’s Honor Code “forbids students to represent work as their own that they did not write, code, or create,” and that “Submission of computer-generated text without attribution is also prohibited by ChatGPT’s own terms of service.”
But the email also specified that instructors could “use or adapt” the guidance as they saw fit, allowing them substantial flexibility. The guidance did not clarify how to view the work of students who acknowledge their use of ChatGPT. Nor did it mention whether students can enlist ChatGPT to give them feedback or otherwise supplement their learning.
Some students are already making the most of this gray area. One student we talked to says that he uses ChatGPT to explain difficult mathematical concepts, adding that ChatGPT explains them better than his teaching fellow.
Natalia I. Pazos ’24 uses the chatbot as a kind of interactive SparkNotes. After looking through the introduction and conclusion of a dense Gen Ed reading herself, she asks ChatGPT to give her a summary. “I don’t really have to read the full article, and I feel like it gives me sometimes a better overview,” she says.
Professors are already grappling with whether to ban ChatGPT or let students use it. But beyond this semester, larger questions loom. Will AI simply become another tool in every cheater’s arsenal, or will it radically change what it means to learn?
‘Don’t Rely on Me, That’s a Crime’
Put yourself in our place: It’s one of those busy Saturdays where you have too much to do and too little time to do it, and you set about writing a short essay for History 1610: “East Asian Environments.” The task is to write about the impact of the 2011 earthquake, tsunami, and subsequent nuclear meltdown in Japan. In a database, you encounter an image of a frozen clock in a destroyed school. Two precious hours pass as you research the school, learning about the ill-prepared evacuation plans and administrative failures that led to 74 children’s deaths. As you type up a draft, your fingers feel tense. It’s a harrowing image — you can’t stop envisioning this clock ticking down the moments until disaster. After spending six hours reading and writing, you finally turn the piece in.
But what if the assignment didn’t have to take so much time? We tried using ChatGPT to write this class essay (which, to be clear, was already turned in). After a minute or two refining our prompts, we ended up with a full essay, which began with:
Ticking away the moments of a typical school day, the clock on the wall of Okawa Elementary School suddenly froze on March 11, 2011, as the world around it was shattered by a massive earthquake and tsunami. The clock, once a symbol of the passage of time, now stood as a haunting reminder of the tragedy that had struck the school.
In less than five minutes, ChatGPT did what originally took six hours. And it did it well enough.
Condensing hours into minutes is no small feat, and an enticing prospect for many. Students have many different demands on their time, and not everyone puts academics first. Some pour their time into intense extracurriculars, pre-professional goals, or jobs. (People also want to party.)
Yet the principle underlying a liberal arts curriculum — one that’s enshrined in Harvard’s mission — is the value of intellectual transformation. Transforming yourself is necessarily difficult. Learning the principles of quantum mechanics or understanding what societies looked like before the Industrial Revolution requires deconstructing your worldview and building it anew. Harvard’s Honor Code, then, represents not just a moral standard but also an expectation that students go through that arduous process.
That’s the theory. In practice, many students feel they don’t always have the time to do the difficult work of intellectual transformation. But they still care about their grades, so they cut corners. And now, just a few clicks away, there’s ChatGPT: a tool so interactive it practically feels like it’s your own work.
So, can professors stop students from using ChatGPT? And should they?
This semester, many instructors at Harvard prohibited students from using ChatGPT, treating it like any other form of academic dishonesty. Explicit bans on ChatGPT became widespread, greeting students on syllabi for classes across departments, from Philosophy to Neuroscience.
Some instructors, like professor Catherine A. Brekus ’85, who teaches Religion 120: “Religion and Nationalism in the United States: A History,” directly imported the Office of Undergraduate Education’s suggested guidance onto their syllabus. In other courses, like Spanish 11, instructors simply told students not to use it during an introductory lecture. The syllabus for Physical Sciences 12a went so far as to discourage use of the tool with multiple verses of a song written by ChatGPT:
I’m just a tool, a way to find some answers
But I can’t do the work for you, I’m not a dancer
You gotta put in the effort, put in the time
Don’t rely on me, that’s a crime
Complicating matters for these professors is the fact that, at the moment, there is no reliable way to detect whether a student’s work is AI-generated. In late January, OpenAI released a classifier to distinguish between AI and human-written text, but it correctly identified AI-written text only 26 percent of the time. GPTZero, a classifier launched in January by Princeton undergraduate Edward Tian, now claims to correctly identify human-written documents 99 percent of the time and AI-written documents 85 percent of the time.
Still, a high likelihood of AI involvement in an assignment may not be enough evidence to bring a student before the Honor Council. Out of more than a dozen professors we’ve spoken with, none currently plan to use an AI detector.
Not all instructors plan to ban ChatGPT. Incoming assistant professor of Computer Science Jonathan Frankle questions whether students in advanced computer science classes should be forced to use older, more time-consuming tools if they’ve already mastered the basics of coding.
“It would be a little bit weird if we said, you know, in CS 50, go use punch cards, you’re not allowed to use any modern tools,” he says, referring to the cards early computer scientists used to write programs.
Harvard Medical School professor Gabriel Kreiman feels similarly. In his courses, students are welcome to use ChatGPT, whether for writing their code or their final reports. His only stipulation is that students inform him when they’ve used the application and understand that they’re still responsible for the work. “If it’s wrong,” he says, “you get the grade, not ChatGPT.”
Kumaresh Krishnan, a teaching fellow for Gen Ed 1125: “Artificial & Natural Intelligence,” believes that if the class isn’t focused on how to code or write, then ChatGPT use is justified under most circumstances. Though he is not responsible for the academic integrity policy of the course, Krishnan believes that producing a nuanced, articulate answer with ChatGPT requires students to understand key concepts.
“If you’re using ChatGPT that well, maybe you don’t understand all the math behind it, maybe you don’t understand all the specifics — but you're understanding the game enough to manipulate it,” he says. “And that itself, that’s a win.”
The student who used ChatGPT for an open-internet Life Sciences exam last semester says he had mastered the concepts but just couldn’t write fast enough. ChatGPT, he says, only “fleshed out” his answers. He received one of the highest grades in the class.
While most of the teachers we spoke with prohibit the use of ChatGPT, not everyone has ruled out using it in the future. Harvard College Fellow William J. Stewart, in his course German 192: “Artificial Intelligences: Body, Art, and Technology in Modern Germany,” explicitly forbids the use of ChatGPT. But for him, the jury is still out on ChatGPT’s pedagogical value: “Do I think it has a place in the classroom? Maybe?”
‘A Pedagogical Challenge’
“There are two aspects that we need to think about,” says Soroush Saghafian when asked about ChatGPT. “One is that, can we ban it? Second, should we ban it?” To Saghafian, an associate professor of public policy at the Kennedy School who is teaching a course on machine learning and big data analytics, the answer to both questions is no. In his view, people will always find ways around prohibitive measures. “It’s like trying to ban use of the internet,” he says.
Most educators at Harvard who we spoke with don’t share the sense of panic that permeates the headlines. Operating under the same assumption as Saghafian — that it is impossible to prevent students from using ChatGPT — educators have adopted diverse strategies to adapt their curricula.
In some language classes, for example, threats posed by intelligent technology are nothing new. “Ever since the internet, really, there have been increasingly large numbers of things that students can use to do their work for them,” says Amanda Gann, an instructor for French 50: “Advanced French II: Justice, Equity, Rights, and Language.”
Even before the rise of large language models like ChatGPT, French 50 used measures to limit students’ ability to use tools like Google Translate for assignments. “The first drafts of all of their major assessments are done in class,” Gann says.
Still, Gann and the other instructors made additional changes this semester in response to the release of ChatGPT. After writing first drafts in class, French 50 students last semester revised their papers at home. This spring, students will instead transform their draft composition into a conversational video. To ensure that students don’t write their remarks beforehand — or have ChatGPT write them — the assignment will be graded on how “spontaneous and like fluid conversation” their speech is.
Instructors were already considering an increased emphasis on oral assessments, Gann says, but she might not have implemented it without the pressure of ChatGPT.
Gann welcomes the change. She views the emergence of large language models like ChatGPT as a “pedagogical challenge.” This applies both to making her assignments less susceptible to AI — “Is this something only a human could do?” — and to reducing the incentive to use AI in the first place. In stark contrast to the projected panic about ChatGPT, Gann thinks the questions it has posed to her as an educator “make it kind of fun.”
Stewart thinks that ChatGPT will provide “a moment to reflect from the educator’s side.” If ChatGPT can do their assignments, perhaps their assignments are “uninspired, or they’re kind of boring, or they’re asking students to be repetitive,” he says.
Stewart also trusts that his students see the value in learning without cutting corners. In his view, very few of the students in his high-level German translation class would “think that it’s a good use of their time to take that class and then turn to the translating tool,” he says. “The reason they’re taking that class is because they also understand that there’s a way to get a similar toolbox in their own brain.” To Stewart, students must see that developing that toolbox for themselves is “far more powerful and far more useful” than copying text into Google Translate.
Computer Science professor Boaz Barak shares Stewart’s sentiment: “Generally, I trust students. I personally don't go super out of my way to try to detect student cheating,” he says. “And I am not going to start.”
Frankle, too, won’t be going out of his way to detect whether his students are cheating — instead, he assumes that students in his CS classes will be using tools like ChatGPT. Accordingly, he intends to make his assignments and exams significantly more demanding. In previous courses, Frankle says he might have asked students to code a simple neural network. With the arrival of language models that can code, he’ll ask that his students reproduce a much more complex version inspired by cutting-edge research. “Now you can get more accomplished, so I can ask more of you,” he says.
Other courses may soon follow suit. Just last week, the instructor for CS 181: “Machine Learning” offered students extra credit if they used ChatGPT as an “educational aid” to support them in tasks like debugging code.
Educators across disciplines are encouraging students to critically engage with ChatGPT in their classes.
Harvard College Fellow Maria Dikcis, who teaches English 195BD: “The Dark Side of Big Data,” assigned students a threefold exercise — first write a short analytical essay, then ask ChatGPT to produce a paper on the same topic, and finally compare their work and ChatGPT’s. “I sort of envisioned it as a human versus machine intelligence,” she says. She hopes the assignment will force students to reflect on the seeming brilliance of the model but also to ask, in her words, “What are its shortcomings, and why is that important?”
Saghafian also thinks it is imperative that students interact with this technology, both to understand its uses as well as to see its “cracks.” In the 2000s, teachers helped students learn the benefits and pitfalls of internet resources like Google search. Saghafian recommends that educators use a similar approach with ChatGPT.
And these cracks can be easy to miss. When she first started using ChatGPT to summarize her readings, Pazos recalls feeling “really impressed by how fast it happened.” To her, because ChatGPT displays its responses word by word, “it feels like it’s thinking.”
“One of the hypes about this technology is that people think, oh, it can do everything, it can think, it can reason,” Saghafian says. Through critical engagement with ChatGPT, students can learn that “none of those is correct.” Large language models, he explains, “don't have the ability to think.” Their writing process, in fact, can show students the difference between reasoning and outputting language.
A Troublesome Model
In the Okawa Elementary School essay written by ChatGPT, one of the later paragraphs stated: “The surviving students and teachers were quickly evacuated to safety.”
In fact, the students and teachers were not evacuated to safety. They were evacuated toward the tsunami — which was exactly why Okawa Elementary School became such a tragedy. ChatGPT could describe the tragedy, but since it did not understand what made it a tragedy, it spat out a fundamental falsehood with confidence.
This behavior is not out of the ordinary. ChatGPT consistently makes factual errors, even though OpenAI designed it not to and has repeatedly updated it. And that’s just the tip of the iceberg. Despite impressive capabilities, ChatGPT and other large language models come with fundamental limitations and potential for harm.
Many of these flaws are baked into the way ChatGPT works. ChatGPT is an example of what researchers call a large language model, or LLM. LLMs work primarily by processing huge amounts of data. This is called training, and in ChatGPT’s case, the training likely involved processing most of the text on the internet — an ocean of niche Wikipedia articles, angry YouTube comment threads, poorly written Harry Potter fan fiction, recipes for lemon poppy seed muffins, and everything in between.
Through that ocean of training data, LLMs become adept at recognizing and reproducing the complex statistical relationships between words in natural language. For ChatGPT, this might mean learning what types of words appear in a Wikipedia article as opposed to a chapter of fanfiction, or what lists of ingredients are most likely to follow the title “pistachio muffins.” So, when ChatGPT is given a prompt, like “how do I bake pistachio muffins,” it uses the statistical relationships it has learned to predict the most likely response to that prompt.
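For readers who want to see the mechanics, here is a minimal, hypothetical sketch in Python of that “predict the most likely next word” loop. The vocabulary, probabilities, and function names are invented for illustration; an actual model like ChatGPT learns its statistical tendencies across billions of parameters rather than reading them from a hand-written table. But the core loop is the same: pick a likely next word, append it, and repeat.

import random

# A toy "language model": for each word, a hand-made table of which words
# tend to follow it and how often. A real LLM learns these tendencies from
# its training data instead of a hard-coded dictionary.
NEXT_WORD_PROBS = {
    "pistachio": {"muffins": 0.7, "ice": 0.2, "shells": 0.1},
    "muffins": {"recipe": 0.6, "are": 0.4},
    "recipe": {"calls": 0.5, ".": 0.5},
}

def predict_next(word):
    # Sample the next word according to the (here, invented) probabilities.
    options = NEXT_WORD_PROBS.get(word, {".": 1.0})
    return random.choices(list(options), weights=list(options.values()))[0]

def generate(prompt_word, max_words=6):
    # The generation loop: predict one word at a time until a stop token.
    text = [prompt_word]
    for _ in range(max_words):
        nxt = predict_next(text[-1])
        if nxt == ".":
            break
        text.append(nxt)
    return " ".join(text)

print(generate("pistachio"))  # e.g. "pistachio muffins recipe calls"

Scale that hand-written table up to most of the written internet, and hundreds of billions of learned weights, and you get something that can continue “how do I bake pistachio muffins” with a plausible recipe.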
Occasionally, this means regurgitating material from its training set (like copying a muffin recipe) or adapting a specific source to the prompt (like summarizing a Wikipedia article). But more often, ChatGPT synthesizes its responses from the correlations it has learned between words. This synthesis is what gives it the uncanny yet hilarious ability to write the opening of George Orwell’s “Nineteen Eighty-Four” in the style of a SpongeBob episode, or explain the code for a Python program in the voice of a wiseguy from a 1940s gangster movie.
This explains the propensity of LLMs to produce false claims even when asked about real events. The algorithms behind ChatGPT have no conception of truth — only of correlations between words. Moreover, the distinction between truth and falsehood on the written internet is rarely clear from the words alone.
Take the Okawa Elementary School example. If you read a blog post about the effects of a disastrous earthquake on an elementary school, how would you determine whether it was true? You might consider the plausibility of the story, the reputability of the writer, or whether corroborating evidence, like photographs or external links, was available. Your decision, in other words, would not depend solely on the text of the post. Instead, it would be informed by digital literacy, fact-checking, and your knowledge of the outside world. Language models have none of that.
The difference between fact and fiction is not the only elementary concept left out of ChatGPT’s algorithm. Despite its ability to predict and reproduce complex patterns of writing, ChatGPT often cannot parse comparatively simple logic. The technology will output confident-sounding incorrect answers when asked to solve short word problems, add two large numbers, or write a sentence ending with the letter “c.” Questioning its answer to a math problem may lead it to admit a mistake, even if there wasn’t one. Given the list of words “ChatGPT,” “has,” “endless,” “limitations,” it told us that the third-to-last word on that list was: “ChatGPT.” (Narcissistic much?)
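For contrast, the same question is trivial for ordinary, deterministic code, which indexes the list rather than predicting a plausible-sounding answer. A quick sketch in Python, for illustration:

words = ["ChatGPT", "has", "endless", "limitations"]
print(words[-3])  # prints "has", the actual third-to-last word on the list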
When James C. Glaser ’25 asked ChatGPT to compose a sestina — a poetic form with six-line stanzas and other constraints — the program outputted stanzas with four lines, no matter how explicit he made the prompt. At some point during the back-and-forth, he says, “I just sort of gave up and realized that it was kind of ridiculous.”
Lack of sufficient training data in certain areas can also affect ChatGPT’s performance. Multiple faculty members who teach languages other than English told us ChatGPT performed noticeably worse in those languages.
The content of the training data also matters. The abundance of bias and hateful language on the internet filters into the written output of LLMs, as leading AI ethics researchers such as Timnit Gebru have shown. In English language data, “white supremacist and misogynistic, ageist, etc., views are overrepresented,” a 2021 study co-authored by Gebru found, “setting up models trained on these datasets to further amplify biases and harms.”
Indeed, OpenAI’s GPT-3, a predecessor of ChatGPT that powers hundreds of applications today, is quick to output paragraphs with racist, sexist, antisemitic, or otherwise harmful messages if prompted, as the MIT Technology Review and others have shown.
Because OpenAI has invested heavily in making these outputs harder to reproduce for ChatGPT, ChatGPT will often refuse to answer prompts deemed dangerous or harmful. These barriers, however, are easily sidestepped, leading some to point out that AI technology could be used to manufacture fake news and hateful, extremist content.
In order to reduce the likelihood of such outputs, OpenAI feeds explicitly labeled examples of harmful content into its LLMs. This might be effective, but it also requires humans to label thousands of examples, often by reading through nightmarish material to decide whether it qualifies as harmful.
As many other AI companies have done, OpenAI reportedly chose to outsource this essential labor. In January, Time reported that OpenAI had contracted out the labeling of harmful content to Kenyan workers paid less than $2 per hour. Multiple workers recalled encountering horrifying material in their work, Time reported.
“Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content,” an OpenAI spokesperson told Time.
Even with the viral popularity of ChatGPT and a new $10 billion investment from Microsoft, legal issues loom over OpenAI. If, in some sense, large language models merely synthesize text from across the internet, does that mean they are stealing copyrighted material?
Some argue that OpenAI’s so-called breakthrough might be illegal. A class action lawsuit filed just weeks before the release of ChatGPT alleges that OpenAI’s Codex, a language model optimized for writing code, violated the licenses of thousands of software developers whose code was used to train the model.
This lawsuit could open the gates for similar proceedings against other language models. Many believe that OpenAI and other tech giants train AI systems using massive datasets indiscriminately pulled from the internet, meaning that large language models might be stealing or repurposing copyrighted and potentially private material that appears in their datasets without licensing or attribution.
If OpenAI could be sued for Codex, the same logic would likely apply to ChatGPT. In the past year, OpenAI doubled the size of its legal team. “This might be the first case,” said Matthew Butterick, one of the attorneys representing the software developers, in an interview with Bloomberg Law, “but it will not be the last.”
OpenAI did not respond to a request for comment.
As ChatGPT and LLMs grow more popular, the question of what to do about these flaws only becomes more pressing.
‘A Life Without Limits’
When you’re watching a disembodied green icon spit out line after line of articulate, seemingly original content, it’s hard not to feel like you’re living in the future. It’s also hard not to worry that this technology’s capabilities will render your education obsolete.
So will ChatGPT transform learning as much as the hype would have us believe?
It’s undeniable that ChatGPT and other LLMs — through their ability to generate readable paragraphs and functioning programs — are revolutionary technologies. But in their own way, so were calculators, the internet, Google search, Wikipedia, and Google Translate.
Every professor we talked to cited at least one of these tools as having catalyzed a similar paradigm shift within education. German and Comparative Literature professor John T. Hamilton likens ChatGPT to an “interactive Wikipedia.” Saghafian, the HKS professor, views it as playing a similar role to Google.
People have been adapting to these technologies for decades. Children growing up in the 2000s and 2010s were told, “Don’t trust everything you see on the internet.” Gradually, they became digitally literate. They saw the value, for example, in using Wikipedia as a starting point for research, but knew never to cite it.
As they did with Google and Wikipedia in those tools’ earliest stages, people are currently using ChatGPT to cut corners. But as experts highlight its flaws, teachers are beginning to promote a kind of AI literacy. (This would prove essential if an LLM professes its love for you or says it will hack you, as the AI-powered Bing Chat did to Kevin Roose in his New York Times article.)
To Barak, the computer science professor, a liberal arts education can help prepare students for an uncertain future. “The main thing we are trying to teach students is tools for thinking and for adapting,” Barak says. “Not just for the jobs that exist today, but also for the jobs that will exist in 10 years.”
While ChatGPT currently can’t follow simple logic, tell true from false, or write complex, coherent arguments, what about in a year? A decade? The amount of computing power devoted to training and deploying machine learning applications has grown exponentially over the past few years. In 2018, OpenAI’s state-of-the-art GPT-1 model had 117 million parameters. By 2020, the number of parameters in GPT-3 had grown to 175 billion. With this pace of change, what new abilities might GPT-4 — OpenAI’s rumored next language model — have? And how will universities, not to mention society as a whole, adapt to this emerging technology?
Some instructors are already imagining future uses for AI that could benefit students and teachers alike.
“What I’d love to see is, for example, someone to make a French language chatbot that I could tell my students to talk to,” Gann, the French instructor, says. She says an app that could give students feedback on their accent or pronunciation would also be useful. Such technology, she explains, would allow students to improve their skills without the expensive attention of a teacher.
Saghafian believes that ChatGPT could act as “a sort of free colleague” that students could talk to.
Silicon Valley researchers and machine learning professors don’t know where the field is heading, but they are convinced that it’ll be big.
“I do believe there is going to be an AI revolution,” says Barak. In his view, AI-based tools will not make humans redundant, but rather change the nature of jobs on the scale of the industrial revolution.
As such, it’s impossible to predict exactly what the AI-powered future will look like. It would be as difficult as trying to predict what the internet would look like “in 1993,” says Frankle, the incoming CS professor.
Underlying these claims — and the perspectives of many professors we talked to — is an assumption that the cat is out of the bag, that AI’s future has already been set in motion and efforts to shape it will be futile.
Not everyone makes this assumption. In fact, some believe that shaping AI’s future is not only possible, but vital. “What’s needed is not something out of science fiction — it’s regulation, empowerment of ordinary people and empowerment of workers,” wrote University of Washington professor Emily M. Bender in a 2022 blog post.
Thus far, the AI industry has faced little regulation.
However, some fear any form of constraint could stifle progress. When asked for specific ideas about regulating AI, Saghafian, the public policy professor, muses that he wouldn’t want policymakers “to be too worried about the negative sides of these technologies, so that they end up blocking the future, positive side of it.”
In a regulation-free environment, Silicon Valley companies may not prioritize ethics or public knowledge. Frankle, who currently builds language models like ChatGPT as the chief scientist for an AI startup called MosaicML, explains how at startups, the incentive is not “to publish and share knowledge” — that’s a side bonus — but rather, “to build an awesome product.”
Hamilton, however, urges caution. Technology empowers us to live as easily and conveniently as possible, he explains, to live without limits: we can fly across the world, read any language just by pointing our smartphones at it, or learn any fact by tapping a few words into Google.
But limits, Hamilton says, are ultimately what allow us to ascribe meaning within our lives. We wouldn’t care about gold if it were plentiful, he points out, and accordingly, we wouldn’t care much about living if we lived forever. “We care because we’re so limited,” Hamilton says. “A life without limits is ultimately a life without value.”
As we continue to create more powerful technology, we may not only lose sight of our own limits, but also become dependent on our creations.
For instance, students might be tempted to rely on ChatGPT’s outputs for critical thinking. “That’s great,” Hamilton says. “But am I losing my ability to do precisely that for myself?”
We think back to the Okawa Elementary School essay. ChatGPT’s version wasn’t just worse than the student-written one because it repeated clichéd phrases, lacked variation in its sentence structure, or concluded by saying “in conclusion.”
ChatGPT’s draft was worse because ChatGPT did not understand why what transpired at Okawa Elementary School was a tragedy. It did not spend hours imagining such an unfathomable chain of events. It did not feel the frustration of its initial expressions falling short, nor did it painstakingly revise its prose to try to do it justice.
ChatGPT didn’t feel satisfied when, after such a process, it had produced a work approaching what it wanted. It did not feel fundamentally altered by its engagement with the cruel randomness of human suffering. It did not leave the assignment with a renewed gratitude for life.
ChatGPT, in other words, did not go through the human process of learning.
If we asked ChatGPT to write us a longform article about ChatGPT and the future of education, would it be worth reading? Would you learn anything?
— Associate Magazine Editor Hewson Duffy can be reached at hewson.duffy@thecrimson.com.
— Magazine writer Sam E. Weil can be reached at sam.weil@thecrimson.com.