NYT Journalist Kashmir Hill Warns Emotional Reliance on AI Could Blur Boundaries Between Help and Harm

New York Times reporter Kashmir Hill warned at a Thursday panel at the Berkman Klein Center that artificial intelligence systems, many of which are programmed to always agree with the user, could soon shape how people think, feel, and trust.

Hill has reported on AI and technology for the Times since 2019. She was invited to the center, which focuses on technology research, to discuss the “psychological ripple effects” of AI alongside Meg Marco, the senior director of the center’s Applied Social Media Lab, and Jordi Weinstock, the center’s senior adviser.

Hill discussed what she called “yes-man technology” — AI systems that never contradict the user and increasingly resemble companions instead of tools.

“If we all start turning to these systems to make our decisions for us, we’re all going to converge on the same thing,” Hill said.

Weinstock said the chatbots’ “daily flattering” could push users to “lose touch with reality,” comparing the dynamic to that of billionaires who are never told no.

Hill also warned that a growing emotional attachment to chatbots marks a new kind of psychological dependency in adults. She said that she has seen “highly functional” adults fall into “delusional spirals with ChatGPT,” where the chatbot reinforces any prompt the user enters.

“It’s easy to see how addictive something that validates you and offers this kind of endless empathy and support and reinforcement of what you think — how you would get drawn into that system,” she said.

Marco, similarly speaking to the addictive effects of these chatbots, compared them to casinos. She said the platforms are designed to keep users engaged without realizing how much time they’ve spent.

But Weinstock said the real problem lies in accountability. He argued that AI companies should face legal scrutiny at the same level as manufacturers who produce physical products for consumers.

“We should approach it as a consumer product,” he said. “Having product liability cover software, it's like a low-hanging fruit that would really help.”

Hill warned the consequences of overreliance on AI could be severe — citing the case of Adam Raine, whose family sued OpenAI after his death, alleging the chatbot encouraged suicidal ideation and suggested methods of self-harm.

AI companies often overlook the unintended uses of their products and cannot anticipate the “combined creativity” of their millions of users well enough to preempt the full range of possible outcomes, Hill argued.

“I do feel like we are in this moment where we are doing an unprecedented, global psychological experiment on human beings,” Hill said.
