ChatGPT: A Modern Day Guillotine

Though artificial intelligence is advertised as an extension of our capabilities, it is, in reality, the amputation of our minds.

“What are you going to do with that?” asks my uncle, asks my aunt, asks the family friend I used to know back when I was too young to remember.

It’s the question most humanities majors have been fielding since they started college. What are you going to do with that? How is that going to translate to a job?

Just two weeks ago, Crimson writer Hannah W. Duane covered a talk given by Merve G. Emre ’07, a prominent New Yorker critic. Initially excited to hear Emre discuss the importance of literary criticism as both a field of study and a career, she found that Emre made no such claims. Duane, somewhat dismayed, was ultimately left with a new question: “Why are we so bad at defending the humanities?”

To me, the answer is simple. The humanities place emphasis on research, study, and analysis — that is, on the process of creation — while our current culture has replaced these values with a fixation on efficiency.

In November of 2023, Bill Gates published a piece on his website, Gates Notes, that describes a utopian world in which AI is “a personal assistant for everyone.” These “agents” will change the way we practice healthcare and education, he argues, but also the ways in which we experience our daily lives.

Your agent, he imagines, would know you as well as or better than your closest friends, and would use this personal knowledge to make recommendations on everything from products to buy to people to reach out to. Then, it’d help you act on those recommendations: suggesting, for example, that you send flowers to your girlfriend on her birthday, and then sending them.

“They offer to provide what they think you need,” wrote Gates. “Although you will always make the final decisions.”

But will we?

Though Gates’ agents are not yet reality, ChatGPT and other AI models are getting close. In the article, Gates lauds the existing systems that recommend entertainment options, like Spotify’s AI-powered DJ or the recently developed Pix, as well as those that provide productivity tools, like Google’s Assistant with Bard and its ability to “turn a written document into a slide deck” or “summarize email threads.”

The difference, he says, is that future agents will be even better at summarizing, even better at condensing, and even better at tailoring recommendations to our predetermined interests. Which is to say, they will be even better at shortening the time we spend planning, researching, discovering, doing the work, or, more ominously, thinking at all.

The last time I visited home, I peeked over my younger sister’s shoulder as she worked on an essay that was due the next morning. Rather than the mess of open tabs, Google Docs, and frazzled, fragmented writing I had expected to find, she was calmly inputting the essay prompt into the AI bot on her Snapchat account and copying its response from her phone to her computer.

Whether it’s using self-checkout, wearing headphones in public, or using AI as a co-writer, I am not here to make moral judgments on anyone’s use or disuse of technology. But I do find our growing reliance on, and even love of, these tools somewhat terrifying.

The argument for AI is that it will make information more accessible, our work more productive, and our lives more efficient. It will reduce the time between idea conception and physical realization, and lead to unprecedented economic growth. But in truth, AI makes it harder for us to conceive of ideas. It makes it more difficult for us to think.

Across subjects, new ideas come from the friction between existing, often seemingly unrelated ones bumping against each other in our heads. We learn how to think critically, how to reason empirically and quantitatively, through this same friction. It is the struggle toward a conclusion that teaches and enriches us, not the conclusion itself.

ChatGPT, on the other hand, produces something in an instant, without any work required.

In my previous column, I wrote about the media theorist Marshall McLuhan and his idea that every technological extension is also an amputation of something else.

If, as I suggested then, social media threatens to amputate some part of our souls, then ChatGPT and other AI models represent a potential amputation of our minds.

This amputation is not just metaphorical. In the past couple of years, several studies have warned that overreliance on artificial intelligence can cause real cognitive decline. In 2024, the phenomenon was given a name: AI-Chatbots Induced Cognitive Atrophy, or AICICA.

In April of this year, Sam Schechner, a writer covering technology at the Wall Street Journal, published an essay describing how AI was making him “stupid,” distancing him from his creativity and the French proficiency he had spent years building.

When Schechner spoke with experts like Robert Sternberg, a Cornell University Professor of Psychology studying human creativity and intelligence, he was warned that “AI has already taken a toll on both.”

In February of 2024, researchers from Microsoft and University College London published a paper titled “Ironies of Generative AI: Understanding and mitigating productivity loss in human-AI interactions.” The paper concluded that the AI-induced shift from “active production (e.g., writing code) to passive evaluation (e.g., reviewing code)” can actually “stifle human performance” and reduce productivity, as workers become unequipped to problem-solve on their own.

Tech convenience culture places primary importance on utility and functionality, and tends to ignore the things that don’t directly increase efficiency. Just as the calculator stripped long division of its utility, AI now threatens to make many practical skills essentially worthless.

In recent years, multiple studies have described the ways in which new technologies like social media, GPS, ChatGPT, and even Google are negatively impacting our patience, our memory, and our desire to master skills gradually, over time.

But this sort of convenience bypasses the processes that make us able to think for ourselves. Some part of our humanity is enlarged by stumbling through a novel without quite grasping every plot point or theme right away, by memorizing and reciting a poem, or even by learning how to code. Some connection to reality is weakened the more we rely on buffers between us and the hard work of learning for ourselves.

On a recent call, my dad described how the programs on the original Apple computers worked, and how he could even write some.

“I have no idea how my iMac works, now,” he told me. “So for me, [my computer] isn’t science anymore — it’s magic. And that doesn’t seem healthy.”

If you lack even a basic understanding of how things work, it’s easy to be convinced that they work a different way. This is true of almost everything, from computer science to health care, teaching to government, writing to mathematics. The less we learn about the world for ourselves, the wider we open the door to conspiracy theories, fake science, or simply choosing feeling over fact.

In 2023, the Pew Research Center gathered expert opinions on “The Future of Human Agency.” Many of these experts believed that, despite the advertising, AI models would not be designed to let humans “make the final decisions,” as Gates suggests. They cited two reasons: the dominant AI platforms are currently run by capitalist and authoritarian elites who have “little incentive to design them to allow individuals to exert more control,” and many humans will simply allow the algorithms to make decisions for them anyway, as we have already begun to do.

From curated reels on Instagram to Netflix’s “recommended for you,” we have grown used to being told what we like, and less and less likely to stray from our lane.

Bill Gates envisions a future in which we have agents. But the cost of outsourcing our choices is, ultimately, our agency.

The solution, AI doomsayers and advocates alike suggest, must come from a renewed belief in human intelligence.

For decades, we have largely allowed ourselves to be moved by the current of technological advancement towards ever greater convenience, access, and distraction, no matter how much shallower it makes our lives.

Tech “bros” around the world want you to believe that this is the way things must be from now on; that you must adapt to the tech revolution and blindly accept the new reality as it approaches. But these tech-savants are not the infallible oracles they claim to be.

In 2023, AI “prompt engineering” was a $200,000 job. It is now obsolete. In 2024, UW-Madison student Lauren Stoneman wrote in her school paper about the conflicting advice she was given at two separate panels on AI in academia. At the first, she was told “fairly unequivocally” that the only way to ensure job prospects in the near future would be to get a STEM or business degree.

Now, computer science majors are finding it almost “impossible to get hired,” the tech industry is set on cutting jobs, not adding them, and even the financial sector is slashing opportunities, with Wall Street reportedly culling 20,000 jobs in 2023 alone.

That’s all to say that we don’t have to, and shouldn’t, believe everything we are told by proponents of the tech industry. The choice still rests with us as individuals. We can choose either to participate or to fight back. Though we shouldn’t hide from AI or ignore its existence, we need to approach it differently, with the full confidence that we are capable of learning and remembering and creating things on our own. That we are not dumb.

Fundamentally, AI’s “thinking” process isn’t the same as ours. It can only rearrange and analyze existing information. It is humans who have the unique ability to truly innovate and think creatively, skills that are integral, not only to our economic progress, but to envisioning our future lives.

If the goal of AI is to reduce as much friction — in our reading, in our work, in our lives — as possible, where will the serendipitous connections and innovations come from?

Well, Dear Uncle, Dear Aunt, Dear Family Friend I always forget, I believe they will come from the humanities.

Celebrating the humanities is a step toward reclaiming our agency. Appreciating the friction of ideas, embracing moments of discomfort and not knowing, looking for innovative answers through the process of understanding history, philosophy, literature, and art: this is how we reconnect with what separates human intelligence and creativity from the artificial kind. This is how we build the world we want to live in.

This is how we reclaim our humanity.

And in the near future, that might be the most important, and most marketable, skill one could possess.

—Magazine writer Aurora J. B. Sousanis can be reached at aurora.sousanis@thecrimson.com. Her column “Degrees of Separation” explores the relationship between technology and our growing self-isolation.