

ChatGPT Has Been My Tutor for the Last Year. I Still Have Concerns.


About one year ago, I became friends (or at least classmates) with ChatGPT, and wrote about my concerns.

I worried my education was becoming less personal, as I typed questions into a chatbot rather than asking peers or trained teaching staff. I emphasized the need for more structure around how generative AI tools like ChatGPT are permitted to be used in the classroom.

Since then, both the technology and its presence at Harvard have rapidly grown.

We now have GPT-5, dozens of AI-integrated tools, and a campus where, for better or worse, AI is a normal part of life. My initial concerns about the technology remain. But I’m now just as worried about our hodgepodge campus response. Generative AI should not be banned outright, nor should it be embraced without thought. What we really need is guidance, resources, and practice in using it wisely.


In my personal life, AI has become ubiquitous. It answers questions about data analysis, helps me debug code, and serves as a sounding board for my late-night thoughts when I don’t want to burden friends or family.

Professors have come to understand this new reality. Those who once discouraged AI use seem to have given up, recognizing that students will use it whether or not it’s officially allowed. With no reliable way to determine whether work is AI-generated or AI-influenced (to the dismay of the forlorn em dash), many classes have shifted toward in-person exams and assignments to ensure we understand the material.

But the results have been inconsistent. In some classes, any AI use is considered dishonest. In others, it’s encouraged. Most fall somewhere in between, but no one has clearly defined where that “somewhere” is. What we’re missing in most classes isn’t rules; it’s education about how AI can enhance our learning rather than replace it. Without clear guidance, students who are resourceful with AI gain an advantage, while others either avoid it entirely or use it poorly. That gap isn’t about talent; it’s about whether we’ve been taught how to use a tool well.

Some professors include statements like, “ChatGPT should not write your essay.” Most of us happily agree. But is it okay if it helps rephrase a sentence? Or if it points out a logical flaw? What if its rephrasing sounds like something you’d never write? Are you still the author? These aren’t theoretical questions. They’re the kinds of dilemmas students navigate every day, often without guidance. And they matter, not just for academic integrity, but for how we learn to interact with a tool that’s shaping our future.

If AI is going to be as transformative as the internet, then our education should prepare us to live alongside it. Rather than banning AI completely or turning a blind eye, we ought to learn how to use it thoughtfully. Harvard, with initiatives like the Kempner Institute for the Study of Artificial and Natural Intelligence, is already a leader in AI research and development. It should also lead in AI literacy.

That means giving students a consistent framework to use AI responsibly, including the skills to recognize its blind spots and the judgment to know when to step away from it.

Courses should make room for trial and error. In the humanities, maybe that’s having AI draft essays or summarize readings for students to critique; in science classes, it might be generating and then refining AI-proposed code or solutions. At a school that trains us in everything from library research to community health, it’s surprising we don’t yet widely teach students how to engage with generative AI.

This transition will be messy, and no policy will be perfect. But if we don’t agree on how to talk about AI, when to use it, and when to avoid it, we risk creating fragmented and inequitable experiences instead of preparing students for an AI-driven future.

One year later, I still think about the “friend” I made in ChatGPT. I still use it, and I still worry about how it shapes my education. But like any friendship, it works best with boundaries. Now it’s time for Harvard and other institutions to teach us how to set those boundaries wisely.

Sandhya Kumar ’26, a Crimson Editorial editor, is a double concentrator in Molecular & Cellular Biology and Statistics in Winthrop House.

