What is Going On With Effective Altruism?



“Most of us want to improve the world. We see suffering, injustice, and death and feel moved to do something about it,” the Harvard EA website says. “But figuring out what that ‘something’ is, let alone actually doing it, can be a difficult and disheartening challenge. Effective altruism is a response to this challenge.” Can it live up to that goal?




When we meet Nikola Jurkovic ’25 on Zoom, he’s sitting in front of a whiteboard covered with equations. We chat about his interest in folk punk music; his headphones push his hair back into a kind of emo swoop. Jurkovic comes across as friendly, but also guarded: He seems to want to make a good impression.

Jurkovic tells us he first learned about the philosophical movement called effective altruism from the comments section of a YouTube video about veganism during his gap year. When he moved from Croatia to Harvard, he joined the Harvard Undergraduate EA group and eventually became its president. “I think I’ve been looking for ways to make the world better for a really long time,” he says. “I think as far back as I can remember.”

EA, its proponents will tell you, is aimed at “doing good better.” It starts from the premise that the ways we approach charity, aid, and other kinds of good-doing are clouded by human biases, and it tries to find the best solutions using statistical tools. For example, an effective altruist might argue that donating money to foundations that provide malaria treatments maximizes the number of lives saved per dollar spent.

The movement began in the early 2010s as the brainchild of the philosopher Will MacAskill. In the years since, it has flourished both in in-person spaces, like conferences, and online, on blogs and forums. Users map out their worldviews in long, technical posts, debating what exactly different EA principles look like in practice, the merits of various criticisms of the movement, how to adapt to the more meritorious criticisms, and how to counter the less meritorious ones.

EA has also garnered support from a number of high-profile acolytes.

“This is a close match for my philosophy,” Elon Musk tweeted last August about MacAskill’s new book.

“Effective altruism — efforts that actually help people rather than making you feel good or helping you show off — is one of the great new ideas of the 21st century,” celebrity academic and Harvard psychology professor Steven Pinker wrote.

One of the highest-profile effective altruists was the former billionaire Sam Bankman-Fried, whose commitment to donate much of his wealth to EA causes turned out to be impossible after his cryptocurrency company, FTX, collapsed due to alleged fraud. His commitment also turned out to be disingenuous: In a postmortem interview with Vox, he described his apparent embrace of ethics as “this dumb game we woke westerners play where we say all the right shibboleths and so everyone likes us.”

Nevertheless, EA’s profile has continued to rise. Even at Harvard, some students find the pull inescapable: “Effective altruism is a huge trend on campus, seeping into everything,” Henry Haimo ’24 told the New Yorker in March, in an article ostensibly about the decline of the humanities.

When Harvard’s undergraduate EA group started on campus a decade ago, though, EA was not so widespread. At first, getting students to join was a challenge. “I realized that the idea of HEA seemed crazy: ‘join us and we’ll try to figure out how to maximize the good we do in the world!’” Ben S. Kuhn ’15, the co-president at the time, wrote in a 2014 blog post. “But my mistake was letting other people know that I knew HEA seemed crazy. As soon as they realized that I myself felt goofy, it was game over for convincing them to get involved.” He described expanding the group’s influence through a combination of savvy advertising tactics and a slate of famous speakers such as the philosopher Peter Singer, one of the most prominent intellectuals associated with the movement.


Today, there are several different EA groups across the University, including at the College, law school, and GSAS, as well as in the Boston area. (In this article, “Harvard EA” refers to the undergraduate group unless otherwise specified.) Jurkovic guesses there are “between 20 and 50” students involved in Harvard EA’s regular programming; there are dozens more in its introductory fellowships and hundreds more on the mailing lists.

Harvard markets itself as “developing leaders who make a difference globally,” and pop culture spins this principle into myth: Here is a university whose students will go on to save the world. EA is a movement that claims to know how to do it, from shifts in diet — a lot of the people we meet are vegan, in recognition of the statistically unmatched suffering that factory farming inflicts on chickens — to shifts in career, like the students who have devoted their futures to trying to prevent human extinction by AI.

“Most of us want to improve the world,” the Harvard EA website says. “We see suffering, injustice, and death and feel moved to do something about it. But figuring out what that ‘something’ is, let alone actually doing it, can be a difficult and disheartening challenge. Effective altruism is a response to this challenge.” Can it live up to that goal?

Counterfactual Impact

If you are not already a committed effective altruist — and if you are interested in a discussion group that zigzags from deworming methodology to Bayesian statistics to the number of animals killed for food in the U.S. in the last 30 seconds — your first involvement with EA at Harvard might come through the introductory Arete Fellowship, a seven-week program that explores a different effective altruism topic each week.

Daniela R. Shuman ’24, a Computer Science and Statistics concentrator interested in urban development, is one of the Arete chairs. She did the fellowship herself a year ago and “fell in love with the whole concept,” she says. She guesses there were around 40 fellows last fall.

We ask Will A. Nickols ’24, the other chair, if we can sit in on one of the sections. After some back-and-forth, Nickols suggests we attend the second week of the fellowship, which focuses on global health.

We get there before the meeting room opens. The discussion leaders, Nick C. Gabrieli ’24 and Jorge O. Guerra Jr. ’24, a former Crimson News editor, are sitting in near-darkness on the stairs outside the Adams House Upper Common Room, chatting: “What else are you up to this semester?” “What do you see as the limits of randomized control trials?” The vaulted blue ceiling looks like the night sky.

Claire Guo ’24, one of the Arete fellows and a former Crimson News editor, wanders up wearing two baseball caps; she wants to know if the leaders are participating in the junior class-wide game of Assassins.

No, Gabrieli tells her, adding, “I feel like I’m in the market for developing more hobbies.”

At 6 p.m., we file into the meeting room and assign ourselves to armchairs. There are three fellows in attendance, and another will show up halfway through. Gabrieli and Guerra walk us through the vocabulary terms from the readings — two blog posts, a lecture by an Oxford professor, a TED talk, and several charts about life expectancy in different countries — asking the group to define and react to them.

This week, the discussion involves identifying your “problem”: So you’ve already decided you want to do good. How do you figure out what to do?

EA takes cues from rationalism, a commitment to logic rather than feelings as a basis for decision-making. For the Arete fellows, this requires learning a lot of terminology. For example, the “counterfactual impact” of an action is the result of doing that action, relative to the result of not doing it. Say you want to volunteer in a soup kitchen — a lot of EA readings use this example — but there are plenty of other volunteers; if you don’t do it, someone else will. Unless you are extremely good at serving soup, the amount of soup served in a world where you are a server is probably not that different from the amount of soup served in a world where you are not, and you might want to focus your altruistic intentions elsewhere.
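In rough symbols (our gloss on the term, not notation taken from the fellowship readings), the idea is just a comparison between two worlds:

\[
\text{counterfactual impact}(a) = \text{value}\big(\text{world where you do } a\big) - \text{value}\big(\text{world where you don't}\big).
\]

In the soup kitchen case, the two terms are nearly identical, so the difference is close to zero.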


At the discussion leaders’ prompting, the fellows — Guo; Nathanael Tjandra ’26; and Kai C. Hostin ’25 — talk among themselves, trying to recall concepts from the readings. More than anything, it feels like a class section.

Gabrieli asks, “What is ‘importance?’” Guo and Hostin look at each other with uncertainty. “It should be able to be inferred, I think, from the word itself,” he continues. “What is the absolute magnitude of the thing we’re interested in?”

There are a lot of math terms, too. One of the readings constructs a complicated-looking product of derivatives to make the conceptual point that importance, tractability, and neglectedness are all vital considerations. Importance is the scale of a problem, or how much good solving it entirely would do; tractability is how much progress can be made for a given amount of effort; neglectedness is how few other people are already working in that area.
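That reading is most likely a version of the “importance, tractability, neglectedness” framework popularized by 80,000 Hours, which factors the value of working on a problem into a chain of ratios. A simplified rendering of that product (our reconstruction, not a formula copied from the syllabus) looks like this:

\[
\underbrace{\frac{\text{good done}}{\text{extra resources}}}_{\text{cost-effectiveness}}
= \underbrace{\frac{\text{good done}}{\%\text{ of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\%\text{ of problem solved}}{\%\text{ increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\%\text{ increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
\]

The intermediate terms cancel, which is the point: each factor can be estimated on its own and the three estimates multiplied to compare causes.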

Still, the fellows have questions, and Gabrieli and Guerra are happy to discuss them. Should your personal interests play a role in what you decide to focus on? What about problems that require immediate responses, rather than careful calculation of impact? They talk but don’t come to any singular answer.

Near the end of the hour, Guerra brings up GiveWell, a well-known EA organization that maintains a ranked list of recommended charities. He asks the group: Of the top four causes on the website, which one would you donate to, and why?

First on the list is Malaria Consortium, which provides a kind of medicine called seasonal malaria chemoprevention. Malaria treatments are high-importance, and tractable because they are relatively cheap; GiveWell estimates that it costs $7 to protect a child from malaria. Second on the list is the Against Malaria Foundation, which provides malaria nets for about $5 each.

Tjandra, a Crimson Multimedia editor, ultimately chooses the fourth-ranked charity, which provides cash incentives to caregivers in Nigeria who vaccinate their babies. Unlike malaria treatments, he reasons, vaccines are “more generalizable to poor people everywhere.”

Guo asks if instead of providing Vitamin A supplements to areas in sub-Saharan Africa, as the third-ranked charity does, we could try to integrate vegetables rich in Vitamin A into those communities. “‘Eat some yams,’” she says, laughing.

Hostin concurs: “I don’t want to be like ‘Oh, here’s a fish’ rather than teach them how to fish, in a way.”

A One in Ten Risk of Extinction

At the end of the Arete fellowship, you are given a $10 gift card to the charity donation site Every.org. If you donate $10 of your own money, you receive another gift card. But what happens next is up to you: Since Harvard EA has no centralized meetings or agenda, continued membership means “joining the various side groups that we have, depending on what you’re interested in,” Shuman says.

Donating large parts of your income to charity — or earning to give, in the style of Bankman-Fried — is largely out of reach for college students, Jurkovic tells us. Instead, we find out, undergrad EA groups tend to focus on research and recruitment.

Still, the issues Harvard EA focuses on don’t always line up with the picture of EA suggested by the Arete fellowship. Zazie Huml ’25, Harvard EA’s Global Health Programming Lead and one of four people on its board, says that when they joined Harvard EA there was “no major initiative” in global health or international development — two of the five major topics covered in the Arete fellowship — and there were only “a couple people” involved with animal rights, a third focus area.

“In Harvard EA we try not to present unfairly biased opinions towards any particular world problem,” Jurkovic says. “We aim to present the facts about the world problems and also give people useful decision-making tools so that they can examine the facts themselves.”

“My entire experience with EA at Harvard last semester was, ‘Oh, this is not for me, this is not my community. They’re not interested in the same things as me,’” Huml says, until someone they met in the organization encouraged them to take another look. “If I was to take initiative within the system, there would be resources to support me,” they recall the person saying. Huml has since led several global health initiatives under the umbrella of Harvard EA, including a “comprehensive study on source apportionment of lead exposure” in lower-income countries in partnership with the Lead Exposure Elimination Project.

So what was Harvard EA doing? “It was only focused on longtermism and AI,” Huml remembers.

Longtermism — week five on the Arete syllabus — is the view that people who will be alive in the future warrant the same moral consideration as people alive today. “If we want to do the most good, that means we want to help the most people,” Shuman says, “and the most people is not at the specific time that we’re living in.” Since future people will significantly outnumber today's people, barring a mass extinction event, longtermists argue that we should devote more resources to preventing “existential risks” like nuclear warfare or engineered pandemics.

This is something of a departure from EA principles in other areas, which our interviewees explain are a refinement of natural human instincts: You already want to do good, and EA just teaches you how to be smart about it. For a longtermist, though, what is at stake is the future of human existence.

In recent years, longtermists have turned their attention to the field of AI safety. As AI models grow increasingly powerful, EA researchers have argued, the existential threat they pose may become insurmountable. They call this the problem of “AI alignment”: ensuring that if or when a superhuman AI comes into existence, its values align with our own, so that it does not kill everyone on Earth.

Some AI safety researchers have been sounding this alarm for decades — but this past year, thanks to the rise of shockingly powerful, publicly available AI models like ChatGPT, the issue has made it into the mainstream. ChatGPT is already prone to spreading misinformation. A “sufficiently powerful” AI could be much worse, researcher Eliezer Yudkowsky argued in a recent Time magazine op-ed calling for an indefinite moratorium on AI development. “In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing,” he wrote.

How do you quantify incalculable destruction? Toby Ord’s book “The Precipice: Existential Risk and the Future of Humanity,” for which Harvard EA runs a reading group, puts the risk of extinction in the next century from unaligned AI at 1 in 10, higher than any other source of risk the book considers. Last July, Jurkovic wrote in a comment on the EA forum that “existential risks are high up on the list of most probable causes of death for college-aged people”: Assume that the probability of achieving superhuman AI by 2045 is 50 percent, and assume that the probability of death given superhuman AI is at least 10 percent. Then the probability of death by AI in the next few years might be comparable to around 1 in 6,000, he wrote, explaining that this probability is similar to the risk from the two largest causes of death for “college-aged people in the US,” suicide and vehicle accidents, although he did not write out the calculations leading to this conclusion.

Jurkovic guesses that there are more people in Harvard EA working on AI than other problems, but points out the existence of organizations with other EA-related focuses, including Harvard College Animal Advocates.

Harvard EA doesn’t have an AI safety program itself, but there are several related organizations that do. One of these is the Harvard AI Safety Team, which was founded in 2022 by Xander L. Davies ’23 and Fiona E. Pollack ’25. Though HAIST is not an EA organization, many members of Harvard EA work with HAIST, and HAIST also receives funding from national EA philanthropy organizations. HAIST and Harvard EA also share members with other local AI groups, including Cambridge Boston Alignment Initiative and MIT AI Alignment.

HAIST hosts reading groups and talks by professors, with a focus on machine learning research, Davies tells us on Zoom. He has curly hair and a black hoodie; the “L” in his name stands for Laser. Although he doesn’t believe that AI right now is poised to destroy humanity, he also doesn’t believe we have the tools to stop it. “I think how I look at it is, it’s currently impossible to get our AIs to not do things,” Davies says, referring to the ease with which users have bypassed ChatGPT-like models’ built-in filters against violent speech and misinformation.

“Rapid progress in AI is becoming more and more economically useful, becoming more and more trusted, while at the same time this stark lack of progress on actually understanding how these systems work, on getting confidence that we actually know how to make these systems do what we want, is very startling to me,” Davies says. “And I think it should be a core priority on the global stage.”

‘The Warm Fuzzy Feeling Just Doesn’t Matter as Much’

In some respects, EA seems fairly intuitive: Who doesn’t want to minimize suffering as much as possible? In other respects, it pushes you to rethink your intuitions.

Take, for instance, a thought experiment effective altruists often use to illustrate the unique way this philosophy navigates moral quandaries. It’s called “the drowning child scenario,” originally formulated by Peter Singer. Imagine you’re on your way to an important event, and you notice a child is drowning in a nearby pond. Do you jump in to save the child?

Barring circumstances like an inability to swim, most people answer yes. But then, you’re asked the question again and again. Each time, the stakes are higher: What if you will ruin your clothes and waterlog your phone by jumping in? What if you already saved a drowning child last week? What if this child was an undocumented immigrant? What if the pond was far enough away that you would have to spend gas money to get there?

As the hypotheticals escalate, participants generally continue to decide to save the drowning child each time. But in the real world, when facing situations like determining healthcare access for immigrants or people in other countries, the increased distance leads some people to make what is in effect the opposite choice. EA wants to know: Why should you value those people’s lives less than the lives of people closer to you?

When we spoke to Marka F. X. Ellertson ’23, then the president of Harvard EA, last September, she told us that with EA efforts, “the warm fuzzy feeling just doesn’t matter as much to me as the rational thought that I know that I’ve had a bigger impact.”

“And I actually still do want that warm fuzzy feeling,” Ellertson added, explaining that she donates to local causes that are particularly meaningful to her.

Joshua D. Greene ’97, Harvard EA’s faculty advisor, disputes the idea that EA strips away the warmth of charity work.

Utilitarianism might make you think of “the things that kind of serve a function but don’t nurture our souls or to speak to our heart’s greatest desires, right? And utilitarianism is not just about cold functionality,” he says. “It’s about everything that makes life good or bad, everything that makes life worth living, everything that makes life meaningful.”

Harold H. Klapper ’25, who participated in a Harvard EA fellowship last year, tells us that some EA dialogues about utilitarianism “get really wild.”

At a Boston-area EA event, for instance, “I’ve had conversations arguing about whether we should kill all wild animals, because they have negative lives,” Klapper says. “An ant colony must just have negative utility in the sense that they’re just not enjoying life, and so it’d be better if we just eliminated them.”

“When things are a movement, you kind of have to buy into the whole thing, and when you buy into the whole thing, you get really wacky and fucked up answers to problems,” Klapper adds.

Effective altruists seek to apply EA principles to personal decisions: what to study, where to work. If you are a college student interested in building EA communities, you might “consider not going to Harvard, as there are a bunch of people there doing great things,” Jurkovic wrote on the EA forum in December, suggesting that going to other colleges without strong EA movements could be better. (Was this something Jurkovic himself considered when applying to Harvard? “No,” he says, laughing.)

A lot of EA discourse revolves around career choice: You will probably work for around 80,000 hours in your lifetime — several of the people we talk to cite this estimate — and you should spend them doing things that count, even if they may not be things you enjoy.
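The 80,000 figure is the standard back-of-the-envelope estimate popularized by the EA career-advice organization 80,000 Hours, roughly:

\[
40\ \frac{\text{hours}}{\text{week}} \times 50\ \frac{\text{weeks}}{\text{year}} \times 40\ \text{years} = 80{,}000\ \text{hours}.
\]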

Harvard EA, Shuman says, focuses mainly on “getting high-potential individuals into careers where they can spend their 80,000 hours of their career on solving these issues.”


To get them to that point, EA might also influence Harvard students’ concentration decisions. One such student is Jōsh P. Mysoré ’26; when we meet him outside of Blackbird, he’s reading Giovanni Boccaccio’s “The Decameron” for class. Mysoré completed the Arete fellowship last fall and is considering becoming a discussion leader at some point in the future.

“I love poetry. I loooove poetry,” he tells us. “Will I be going into poetry? No. Because I don’t think it will actually do good for people.” At the moment, Mysoré plans to concentrate in Computer Science and Linguistics.

Computer Science, Statistics, and Applied Mathematics are fairly common concentration choices among the people we meet. Klapper tells us he knows someone in Harvard EA who studies Computer Science and dislikes it, but continues in the field because they believe it’s the most effective use of their time.

Mysoré “was given a certain amount of privilege in my life to even get to this point,” he tells us. “I do think I owe something to the greater good of humanity to do something that impacts more people in a tangible way.”

Does he think anyone should go into poetry, we ask.

“I don't think it’s a contradiction to say that I can hold two opposing viewpoints at the same time,” Mysoré says. “Like in my heart, I’m a humanist, and I’m very romantic.” He tells us that he joined EA specifically to challenge these humanist viewpoints, but his perspective might flip again. “Honestly, I do think there should be poets,” he says.

Mysoré tells us he still believes that EA has a noble mission, even if he disagrees with some of its particular approaches. “I think at the baseline, EA is creating dialogue,” he says. “That is really what counts.”

‘Not To Create a Club, But Rather to Create a World’

On Saturdays, Harvard EA throws socials in a house near campus where Jurkovic lives with four of his friends. Half the time, the socials are just for Harvard affiliates; every other week, they are open to students from other Boston-area schools.

Of course, the socials are designed to be fun, but they have a functional purpose as well. “One important part of having a community is that the people talk to each other and have time spent together, so that they can collaborate and talk about their projects,” Jurkovic says.

Though Jurkovic declined our request to attend a social on the record, we can try to reconstruct the vibe from a guide that he posted on the EA forum called “How to Organize a Social.” Indeed, in the post, he records every step of preparing for a social in granular detail, providing recommendations for everything from grocery lists — CLIF Bars, Diet Coke, several varieties of Vitaminwater — to music, such as the Spotify-curated playlist “my life is a movie.” Jurkovic suggests you make it easy for guests to find answers to anticipated questions: “The shoes on/off policy? Where the bathroom is? Where one can get water? What the wi-fi password is?”

Last year, Trevor J. Levin ’19, who is currently on leave as the co-president of the university-wide EA group, also created a list of recommendations for effective retreats: They should happen in the beginning of the semester, when people are less busy; include lots of time for one-on-one interactions and a “structured vulnerable/emotional thing”; and include a healthy mix of new recruits and “moderately charismatic people for whom EA is a major consideration in how they make decisions.” These suggestions were embedded in a long post, which, citing feedback from Ellertson, Davies, Jurkovic, and others, argues that college EA groups should focus more on retreats as a method of bonding.

For Levin, a former Crimson editor, this kind of immersive social situation is vital to capturing those who might be interested in EA but don’t prioritize it.

“While most of the important cognition that happens is social/emotional, this is not the same thing as tricking or manipulating people into being EAs,” he wrote on the forum. Instead, retreats are meant to appeal to those who may agree with EA on some level but have not yet acted on it, and to give them time to “move closer to the values they previously already wanted to live by.”

Since EA was born, it has been very deliberate about the image it projects. The name “effective altruism” was itself the product of a long debate: “This has been such an obstacle in the utilitarianesque community — ‘do-gooder’ is the current term, and it sucks,” MacAskill, the philosopher, wrote in a 2011 email chain. What followed was a period of brainstorming — fusing terms like “utilitarian” and “philanthropist” with “alliance” and “institute” — and a series of votes to establish a name for both “the type of person we wanted to refer to, and for the name of the organization we were setting up.”

Now, countless blogs and forum posts are dedicated to determining how best to recruit new members to the EA community. In December 2021, for instance, Jurkovic wrote a post on the EA forum describing an “organic” way to pitch EA to students.

“Person: What do you want to study? Me: Not sure, I’m trying to find what to study so I can have as good of an impact as possible,” he wrote in an example dialogue. “If their level of enthusiasm stays high or grows, pitch an intro fellowship or a reading group to them.”


Even if some people choose not to become effective altruists, Shuman tells us, they could still take away valuable ideas from the movement.

“The point is not to create a club, but rather to create a world of people that want to do the most good, and EA just has a set of tools that it thinks are probably the most good,” Shuman says. “We want everybody to think in these terms.”

Daedalus House

“I’m gonna talk from a removed, omniscient perspective,” Mysoré says, kicking his chair back and folding his arms behind his head. EA spends a lot of money on space, food, and socials, he tells us. “At a certain point you have to ask yourself: What is effective about that?”

Most of Harvard EA’s money comes from larger EA organizations like Open Philanthropy, a grantmaking foundation largely financed by Cari Tuna and Dustin Moskovitz, the latter of whom co-founded Facebook. Open Philanthropy distributes money to a range of EA-related causes. Put simply, it is an organization that “cares about making the world better,” Jurkovic says.

We ask him how Harvard EA uses its grant money.

“It’s not my area of expertise,” Jurkovic says. “But ...” He pauses for 15 seconds. “Yeah, just sometimes we get funding for club activities.”

In 2022, we later find out, part of an Open Philanthropy grant was used to send Arete fellows and the University-wide EA group on a weekend trip to Essex Woods, a serene, Thoreauesque venue an hour north of campus that charges about $5,000 per night. According to GiveWell, donating $10,000 to the nonprofit Malaria Consortium could save the lives of five people.

The schedule was similar to that of a corporate retreat: workshops, games, dinner, hot tub, Hamming circles. Well, maybe not the last one. Hamming circles are an activity where three to five participants sit down together and talk through one problem facing each member in 20-minute chunks. It’s “similar to what happens in a pair debug,” a post on an EA-related forum explains. These problems might vary, the post says, from “Is it possible for me, specifically, to have a meaningful impact on existential risk” to “I need to secure $250,000 in seed funding for my startup” to “I’m expected to speak at my father’s funeral and I have nothing but scathing, bitter, angry things to say.”

Open Philanthropy also issued a $250,000 grant for the Centre for Effective Altruism to “rent and refurbish” an office for the Harvard AI Safety Team in Harvard Square for one year.

Davies, the HAIST co-founder, tells us that the HAIST office is “pretty research-y.”

“People are often at whiteboards, talking about problems with each other,” he says. “I think it feels like people are really trying to make progress on this technical problem, which I find exciting. It’s maybe a little startup-y in vibe.” In a post on the EA forum in December, Davies wrote that “investing effort into making the space fun and convenient to use helped improve programming, social events, and sense of community.”

In August, Open Philanthropy recommended an $8.9 million grant for the Center for Effective Altruism to lease an EA office space for five years in Harvard Square. While the space would have been unaffiliated with Harvard EA, a forum post announcing the office promised that part of it would contain “meeting spaces for students at Harvard and other Boston-area schools,” and thanked Levin and Jurkovic for their help in developing the project.

Forum members, including Levin and Jurkovic, threw out potential names for the space. Some of the suggestions were mythological — “Daedalus,” who advised Icarus not to fly too close to the sun — some cosmological — “Supercluster,” “Earthrise” — and some silly, like “Aardvark,” from a user who argued the name sounded similar to “Harvard” and would show up first in alphabetical lists.

“Don’t like including the actual words EA in the name of the space,” Levin (who, for his part, liked “Apollo”) wrote in the comments. “It increases the chances of hypocrisy charges (from people who haven’t thought much about the effects of nice offices on productivity) for getting a nice central office space while ostensibly being altruistic.”

But the Apollo House — or Aardvark House, or Supercluster — never materialized. According to Levin, after CEA signed the lease and began preparing the space, Open Philanthropy notified them that the grant was under review. As a result, Levin tells us, CEA is now trying to sublease the space to get the money back.

CEA and Open Philanthropy did not respond to questions about the current status of the Harvard Square office grant.

In addition to money for spaces and retreats, Open Philanthropy has an open request form for university group funding, and regularly recommends grants to undergraduate organizers. Other EA-affiliated organizations also fund events and projects. Huml, the Global Health Programming Lead, tells us that this is part of what makes EA a valuable community in which to pursue global health work.

“To be totally transparent, I don’t 100 percent align with the values,” Huml tells us. “I think that they are an incredible platform and have a lot of resources — and those resources are financial, they are access to experts in very specific fields.”

Three Harvard students, including Davies and Gabrieli, were recipients of Open Philanthropy’s fall 2022 University Organizer Fellowship, for which the organization recommended a total of $3.1 million across 116 recipients. Gabrieli declined to be interviewed for this article. Davies says he doesn’t know if he’s allowed to disclose how much money he actually got, but that he considers the grant to be “an hourly wage,” since he quit previous jobs to focus on developing HAIST.

In February 2022, Open Philanthropy recommended a $75,000 grant to Pollack, the other HAIST co-founder, “to support her work organizing the effective altruism community at Harvard University.”

When we reach out to Pollack, she tells us over email that she is “no longer organizing for the Harvard Effective Altruism group,” but has spent about $14,000 of the grant on HAIST expenses with Open Philanthropy’s approval: $7,200 on monthly software costs like Airtable and Squarespace, and most of the rest on accommodations for a workshop that HAIST hosted with the MIT AI safety group in Arlington, Virginia.

Harvard EA is aware that this allocation of money can appear at odds with its stated mission. After the Essex Woods retreat, organizers sent out a feedback form. “How much did the spending of money at this retreat make you feel uncomfortable?” one question asked.

We talk to Levin, the University EA co-president, and he likens it to the way that companies spend money on recruitment. “The idea is that there are problems that are much more talent-constrained than money-constrained,” he tells us. AI safety, a problem that relatively few people are working on, is an extreme example of this, he says. “The question then becomes, ‘Okay, well, if we have money and not people, how do we convert between the two?’”

Levin pauses and corrects himself: “My train of thought there sounded kind of like I was saying, well, if you have a bunch of money, what do you do with it, right? That is not what I think.” What he does believe is that physical environments like retreats can rapidly accelerate the rate — by up to 100 times, he writes on the forums — at which people get on board with EA principles.

Several people in EA, Levin guesses, joined because of their experiences on a retreat. “That is absolutely something that we would have paid this money for,” he says.

Shuman believes that this outreach is particularly effective in Cambridge because of its highly motivated, change-driven student body. “Harvard and MIT have done the vast majority of vetting for people who are highly ambitious,” she says.

Shuman tells us that the international EA community places a lot of importance on this kind of community building. “If they can invest $1,000 in getting five high-potential individuals to, instead of doing AI research, do AI safety research, that’s a pretty good use of money,” she says. “It could save a lot of lives, potentially.”

‘A Skewed Pipeline’

In a 2020 survey of EA Forum members, 76 percent of respondents were white and 71 percent were male. The Centre for Effective Altruism acknowledges this imbalance, arguing that diversity is important for several reasons.

“If an EA-aligned newcomer concludes that effective altruism is not for ‘people like me,’ they may not get involved, and the EA community may be less effective,” its website reads. “We don’t want to miss important perspectives.”

We ask Jurkovic if he’s aware of demographic imbalances within EA groups at Harvard. He pauses. “I think it is quite important to have a community which is welcoming to everyone,” he says. “EA sometimes shares a problem with the cause areas that it tackles” — meaning STEM fields — “which is that many of them have more males in them than average.”

Shuman — “the only she/her” leading an Arete section, she tells us — echoes this sentiment, saying that these numbers reflect existing disparities in STEM and philanthropic fields.

“It’s just a skewed pipeline,” she says, “which is a problem.”

“I can say from personal experience that we’ve had quite diverse groups,” Nickols says. “In terms of gender, it does tend to be more male-skewed, and that’s something that we’re continually working on.” He acknowledges that Harvard EA “is probably predominantly white and Asian, but not more so to Harvard’s general population.” (The organization does not keep demographic records of its members, so we can’t verify this.)

Nickols says that the applicants for Harvard EA’s fellowships also tend to skew male. “Given that word of mouth is our biggest kind of spreader, it might just be possible that guys who have done it in the past are friends with more guys and tell them about it,” he adds.

In recent months, the EA movement has been embroiled in controversies related to race and gender in its communities.

One of these controversies revolved around Nick Bostrom, a philosopher whose ideas led to the development of longtermism; four of his works are cited in the syllabus for Harvard EA’s Precipice Fellowship.

In January, Bostrom posted a letter to his website apologizing for a comment he wrote on a forum in the mid-90s, which claims that Black people “are more stupid than whites” and contains the n-word. In the letter, Bostrom castigates his past self for using the slur and writes that the comment “does not accurately represent my views, then or now,” but does not reject the possibility of genetic cognitive differences between races. He leaves this question to “others, who have more relevant knowledge.” The letter continues with a section about bioethics that opens: “What about eugenics? Do I support eugenics? No, not as the term is commonly understood.”

In March, Time magazine interviewed seven women who said they had been sexually harassed, coerced, or assaulted within EA spaces, particularly in the Bay Area. The scene’s overwhelming maleness, tech-bro culture, and impulse to quantify and rationalize messy real-world dynamics created a deeply unsafe environment, the women said. One described having dinner with a prominent researcher nearly twice her age who told her that “pedophilic relationships between very young women and older men was a good way to transfer knowledge.”

“We were of course upset by both of these issues,” Jurkovic wrote in an email to us about the Bostrom letter and Time investigation, “and have spent time figuring out how we can improve our diversity and make sure we're a welcoming community to women and people of color.”


Although some of EA’s focus areas deal with global health and economic growth in underdeveloped countries, its frameworks generally do not foreground race or gender. A version of the spring 2023 Arete syllabus posted on the Harvard EA website only mentions race in the overview of Week Four: Animal Welfare.

“One of the most important ways we can fail to identify the most important moral issues of our time is by unfairly shrinking our moral circle: the set of beings we deem worthy of our moral concern,” the syllabus reads. “For example, many whites in the US failed to identify that slavery was the moral issue of their age by excluding Blacks from their moral circle. To truly make the world better, we must look beyond the traditional moral horizon for those who are unfairly neglected by mainstream society. This week, we discuss one such group of beings: nonhuman animals.”

We ask Nickols, the Arete co-chair, about this framing. He tells us that it is important to keep the quote “in the context of where it was originally formulated.”

“Obviously the idea here is not to equate certain racial groups with animals or anything like that,” Nickols says. “Over time, though, the expanding moral circle idea is that white people who, before, held these extremely racist and terrible views — as the generations went on, and as culture shifted — began to see people, regardless of their race, as all morally equal.”

“We have not reached a point where racism is totally gone,” Nickols says, “but there is definitely a shift in the right direction here. And more generally, the idea is that as time goes on, it is quite possible that the circle will continue to increase.”

Paperclips

One thought experiment designed to demonstrate the danger of misaligned AI goes like this: Say the owner of a paperclip factory obtains an ultrapowerful AI and instructs it to maximize paperclip output. Although the AI is programmed to pursue a seemingly harmless goal, it might — if its understanding of values is not quite the same as ours — turn everything in the world into paperclips. That this scenario seems kind of silly is part of the point. Researchers are not concerned that AI will be “evil” per se, but that its pursuit of any objective, including “good” ones, might have unintended consequences.

“AI systems don’t always do what their developers intend,” the Arete syllabus reads. “They replicate human biases, achieve their goals in surprising and destructive ways, and are vulnerable to external manipulation.” As a call to action, this is a compelling place to start. As a taxonomy, though, it is less an observation about AI than it is about systems. Existential risks like climate change might first destroy the people who did the least to create them; any movement created by people is, in a sense, only human.

This point came up in the wake of the FTX collapse — what did it mean that a group seeking to fundamentally change the world relied so heavily on existing distributions of power? — and it has come up again in the months since, in the course of writing this article. Can you optimize your life? What if the thing we construct in our idealized image turns out not to look so different from us after all?

For Andrew N. Garber ’23, a former Arete leader, considering questions is the point of EA. There is a common misconception that effective altruism is a destination, when really it’s more of a framework, he tells us: “It is more concerned about the question than any one specific answer.”

In any event, when we ask Jurkovic what he hopes EA will look like in the future, his response is straightforward. “The goal is to help people make the world better,” he says, half smiling. “As much as possible.”

— Associate Magazine Editor Bea Wall-Feng can be reached at bea.wall-feng@thecrimson.com. Follow them on Twitter @wallfeng.

— Magazine writer Sophia C. Scott can be reached at sophia.scott@thecrimson.com. Follow her on Twitter @ScottSophia_.