Seeing Your Future Self with Future You AI



How much can an AI chatbot tell you about your future based on the choices you make?




Is that discouraging midterm grade in your biochemistry class deterring you from your destined path as a future doctor? Do you wish that you could talk to your future self and ask them how they (well really, you) got there?

Harvard senior Peggy Y. Yin ’25 is part of a team that designed the AI chat platform Future You to explore answers to existential questions like these. According to its developers, Future You has already been used in 175 countries by more than 50,000 users.

“Future You is a tool that allows people to explore different possibilities of their future, to find one that feels most authentic to them and most inspiring to them,” Yin says. “It’s very much a grounded intervention, and we show users different possibilities of their future, while emphasizing that this is more of a possibility space and not a prophecy space.”

Alongside MIT researchers Pat Pataranutaporn and Pattie Maes, Yin worked with UCLA professor Hal Hershfield. Hershfield’s research focuses on the psychology behind Future You: a concept called future self-continuity, which describes how connected individuals feel their present self is with their future self. When individuals feel high future self-continuity, they feel that their present self is capable of becoming the future self they want to be — think long-term visualization and affirmation. Moreover, individuals with high future self-continuity likely pay more attention to their choices, as they understand that the choices they make today impact their tomorrow.

What Future You cannot — and is not meant to — do is answer questions like whether your hallway crush will become your future spouse. The platform is about possibilities to explore in your financial decisions, academic choices, health, and relationships. Think “choose your own adventure” but in chatbot form, with the chatbot portraying your future self.

“We don’t have one future that you’re gonna experience,” Yin says, but Future You is “presenting one of many possibilities, and we encourage people to go back and say, ‘Well, if I change this aspect, what would happen to the simulation?’”

She continues, “We want to make people think about the consequences of their actions, and be more mindful of the choices that they are making today.”

Future You is built on a large language model, the same kind of AI that powers ChatGPT. LLMs are AI models trained on vast data sets, which allows them to respond to user requests in a human-like manner. Future You operates in two stages: the pre-survey and the actual chats.

A user’s future self is generated through the information the user provides Future You in the pre-survey. The survey’s roughly 20 questions ask the user about their current life, including their environment, relationships, and mental and physical state, as well as about how they picture their future.

For example, one question asks, “How vividly can you imagine what your family relationships will be like when you are 60 years old?” Users answer such questions on a scale from “strongly disagree” to “agree.” There are also short answer questions prompting the user to describe significant low points and turning points in their lives. To proceed, users must answer all the questions.

Data gathered in this survey then helps Future You generate a conversation with the user. Pataranutaporn calls this creation process “future self memory.” A person’s “intermediate backstory,” he says, is necessary to bridge the gap between the user’s future self and their present reality.
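For a rough sense of what that persona-building step could look like in code, here is a minimal Python sketch that assembles pre-survey answers into a “future self” prompt for a language model. The field names, prompt wording, and overall structure are illustrative assumptions, not Future You’s actual implementation.

```python
# Hypothetical sketch: turn pre-survey answers into a "future self" persona
# prompt for an LLM. All names and wording here are assumptions for
# illustration, not Future You's real code.
from dataclasses import dataclass


@dataclass
class PreSurvey:
    age: int
    current_life: str      # environment, relationships, mental and physical state
    imagined_future: str   # how the user pictures their life at 60
    low_point: str         # short answer: a significant low point
    turning_point: str     # short answer: a significant turning point


def build_future_self_prompt(survey: PreSurvey) -> str:
    """Compose a system prompt that gives the chatbot a 'future self memory':
    an intermediate backstory bridging present reality and the imagined future."""
    return (
        f"You are the user's future self at age 60, speaking warmly in the first person. "
        f"Their present life (age {survey.age}): {survey.current_life}. "
        f"Their imagined future: {survey.imagined_future}. "
        f"A low point they lived through: {survey.low_point}. "
        f"A turning point: {survey.turning_point}. "
        f"Invent a plausible backstory connecting their present to this future, "
        f"and frame it as one possibility among many, not a prophecy."
    )


if __name__ == "__main__":
    survey = PreSurvey(
        age=21,
        current_life="pre-med junior, anxious about a biochemistry midterm",
        imagined_future="a physician mentoring younger students",
        low_point="failing an organic chemistry exam",
        turning_point="a summer shadowing a doctor at a rural clinic",
    )
    print(build_future_self_prompt(survey))
```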

Future You’s target demographic is users ages 18 to 30, but not everyone in that group is excited about letting a chatbot in on their personal future aspirations. Gabriel D. Wu ’25, Director of the AI Safety Student Team at Harvard, says that developers have a responsibility to protect users, especially when those users are vulnerable young adults.

“I definitely think it could be a cool tool to use, and I would enjoy playing around with it,” Wu says. “Whether or not it’s a good thing for society remains to be seen.”

Safety and data privacy are priorities at Future You, according to its creators. Several places on Future You’s website, including the FAQ section and the Terms of Participation, emphasize that all data is anonymized and used solely for research purposes.

Yin says that Future You held several workshops where experts in AI, data, philosophy, and ethics tried to break the system by purposefully prompting the model with explicit topics. Through that process, Yin says, Future You developed the safety mechanisms it has in place today. For instance, the system flags language indicative of suicidal ideation, self-harm, or sexual content in both the pre-survey and the chats. Flagged users are then referred to resources like mental health hotlines.
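As a rough illustration of the kind of safety gate described above, the sketch below flags messages containing language suggestive of self-harm or explicit content and routes the user toward support resources. The keyword patterns and responses are assumptions for illustration only; the team’s actual mechanisms are not public in this detail and are likely more sophisticated, for example classifier-based.

```python
# Hypothetical sketch of a content-safety gate: flag risky language in the
# pre-survey or chat and refer the user to support resources. Patterns and
# messages are illustrative assumptions, not Future You's real system.
import re

FLAG_PATTERNS = {
    "self_harm": re.compile(r"\b(kill myself|suicide|self[- ]harm|hurt myself)\b", re.IGNORECASE),
    "sexual_content": re.compile(r"\b(sexual|explicit)\b", re.IGNORECASE),
}

SUPPORT_MESSAGE = (
    "It sounds like you may be going through something difficult. "
    "Please consider reaching out to a mental health hotline, such as 988 in the US."
)


def screen_message(text: str) -> tuple[bool, str | None]:
    """Return (flagged, category) for a user message from the pre-survey or chat."""
    for category, pattern in FLAG_PATTERNS.items():
        if pattern.search(text):
            return True, category
    return False, None


def handle_user_message(text: str) -> str:
    flagged, category = screen_message(text)
    if flagged and category == "self_harm":
        return SUPPORT_MESSAGE  # refer to resources instead of role-playing
    if flagged:
        return "Let's keep this conversation focused on your goals and choices."
    return "OK to pass along to the future-self chatbot."
```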

As much as Future You plays up the benefits of talking to one’s future self for decision-making, its creators also acknowledge the threats that personalized AI-generated characters pose.

Pataranutaporn, who, beyond his involvement with Future You, works as a researcher specializing in cyborg psychology, is particularly concerned about a phenomenon he calls addictive intelligence: what occurs when AI designed for optimal human engagement becomes detrimental to human relationships. Users enraptured by AI companions might not only neglect real-life relationships with other humans, but also leave interactions with their AI companion unfulfilled, since AI inherently cannot fully replicate human empathy.

Every four months, the Future You team holds a special meeting aptly titled “The Future of Future You.”

This meeting, Yin and Pataranutaporn say, is where Future You’s team discusses the implications of what they have created. Could a future version of Future You be more persuasive? How could this be used to spread information, whether for beneficial or malicious purposes?

“In terms of research, I think the thing that we are excited about right now is looking at how this tool can help people imagine their career,” Pataranutaporn says. “People, especially from underrepresented groups, may not be able to see themselves doing certain types of high profile jobs, right? So we are thinking about how we can use these tools to sort of democratize imagination.”

Correction: November 22, 2024

A previous version of this article incorrectly referred to AISST as the Harvard AI Student Safety Team. In fact, it is the AI Safety Student Team at Harvard.