How effective altruism let Sam Bankman-Fried happen


Profound philosophical errors enabled the FTX collapse.

I have a lot of reasons to be furious at Sam Bankman-Fried. His extreme mismanagement of FTX (which his successor John J. Ray III, who previously helped clean up the Enron debacle, described as the worst he’s ever seen) led to the sudden collapse of a $32 billion financial company. He lost at least $1 billion in client funds after surreptitiously transferring them to a hedge fund he also owned, potentially in an effort to make up for huge losses there. His historic management failures pulled the rug out from under his users, his staff, and the many charities he promised to fund. He hurt many, many, many people.

But for me, the most disturbing aspect of the Bankman-Fried saga, the one that kept me up at night, is how much of myself I see in him.

Like me, Bankman-Fried (“SBF” to aficionados) grew up in a college town surrounded by left-leaning intellectuals, including both of his parents. So did his business partner and Alameda Research CEO Caroline Ellison, the child of MIT professors. Like me, they were both drawn to utilitarian philosophy at a young age. Like me, they seemed fascinated by what their privileged position on this planet would enable them to do to help others, and embraced the effective altruism movement as a result. And the choices they made as a result of that deliberation would prove disastrous.

Something went badly wrong here, and my fellow journalists in the take mines have been producing a small library of theories as to why. Maybe it was SBF and Ellison’s choice to earn to give, to try to make as much money as possible so they could give it away. Maybe the problem was that they averted their gaze from global poverty to more “longtermist” causes. Maybe the issue is that they were not giving away their money sufficiently democratically. Maybe the problem was a theory of change that involved billionaires at all.

It took me a while to think through what happened. I thought Bankman-Fried was going to commit billions toward tremendously beneficial causes, a development I chronicled in a long piece earlier this year on how EA was coping with its sudden influx of billions. The revelation that his empire was a house of cards was shattering, and for weeks I was too angry, bitter, and deeply depressed to say much of anything about it (much to the impatience of my editor).

There’s still plenty we don’t know, but based on what we do know, I don’t think the problem was earning to give, or billionaire money, or longtermism per se. But the problem does lie in the culture of effective altruism. SBF was an inexperienced 25-year-old when he founded his hedge fund, and when that fund grew into something enormous, he wound up, unsurprisingly, hurting millions of people through profound failures of judgment — failures that can be laid in part at the feet of EA.


For as much good as I see in that movement, it’s also become apparent that it is deeply immature and myopic, in a way that enabled Bankman-Fried and Ellison, and that it desperately needs to grow up. That means emulating the kinds of practices that more mature philanthropic institutions and movements have used for centuries, and becoming much more risk-averse. EA needs much stronger guardrails to prevent another figure like Bankman-Fried from emerging — and to prevent its tenets from becoming little more than justifications for malfeasance.

Despite everything that’s happened, this isn’t a time to give up on effective altruism. EA has quite literally saved lives, and its critique of mainstream philanthropy and politics is still compelling. But it needs to change itself to keep changing the world for the better.

How crypto bucks swept up EA — and us

First, a disclosure: This August, Future Perfect — the section of Vox you’re currently reading — was awarded a $200,000 grant from Bankman-Fried’s family foundation. The grant was for a reporting project in 2023, which is now on pause. (I should be clear that, under the terms of the grant from SBF’s foundation, Future Perfect has ownership of its content and retains editorial independence, as is standard practice for all of our grants.)

We’re currently having internal discussions about the future of the grant, mainly around the core question: What’s the best way to do good with it? It’s more complicated than just giving it back, not least because it’s hard to be sure where the money will go — will it go toward making victims whole, for instance?

Obviously, knowing what we know now, I wish we hadn’t taken the money. It proved the worst of both worlds: It didn’t actually help our reporting at all, and it put our reputation at risk.

But ask whether I regret taking the money knowing only what we knew then, and the honest answer is no. Journalism, as an industry, is struggling badly. Employment in US newsrooms fell by 26 percent from 2008 to 2020, and this fall has seen another end-of-year wave of media layoffs. Digital advertising has not made up for the collapse of print ads and subscriptions, and digital subscription models have proven hit or miss. Vox is no different from other news organizations in our need to find sources of revenue. Based on what we knew at the time, there was also little reason to believe Bankman-Fried’s money was ill-gotten.

(This is also as good a place as any to clear the air about Future Perfect’s mission. We have always described Future Perfect as “inspired by” effective altruism — meaning that it’s not part of the movement but informed by its underlying philosophy. I’m an EA, but my editor is not; indeed, the majority of our staff aren’t EAs at all. What unites us is the mission of using EA as a lens, prizing importance, tractability, and neglectedness, to cover the world — something that leads to a set of coverage priorities and ideas that we believe are woefully underrepresented in the media.)

In the aftermath of the FTX crash, a common criticism I’ve gotten via email and Twitter is that I, and other EAs, should have known this guy was sketchy. And in one sense (the sense in which crypto as a whole is a kind of royal scam without much of a use case beyond paying for drugs), we all knew he was. I said as much on this website.


But while I think crypto is stupid, millions apparently disagreed, and wanted places to trade it, which is why the stated business activities of Alameda and FTX could plausibly have been immensely profitable in a normal, legal way. Certain aspects of FTX’s operations did seem a bit noxious, particularly as its advertising and publicity campaigns ramped up. “I’m in on crypto because I want to make the biggest global impact for good,” read an ad FTX placed in magazines like the New Yorker and Vogue, featuring photos of Bankman-Fried (other ads in the same campaign featured model Gisele Bündchen, one of many celebrities who endorsed the platform). As I said in August, “buying up Super Bowl ads and Vogue spreads with Gisele Bündchen to encourage ordinary people to put their money into this pile of mathematically complex garbage is … actually morally questionable.”

I stand by that. I also stand by the idea that what the money was meant to do matters. In the case of the Bankman-Fried foundations, it was for funding coverage and political action around improving the long-term trajectory of humanity. It seemed like a worthwhile topic before FTX’s collapse — and it still is.

The problem isn’t longtermism …

Ah, yes: the long-term trajectory of humanity, the trillions upon trillions of beings who could one day exist, dependent on our actions today. It’s an impossible concept to express without sounding unbelievably pretentious, but it’s become a growing focus of effective altruism in recent years.

Many of the movement’s leaders, most notably Oxford moral philosopher Will MacAskill, have embraced an argument that because so many more humans and other intelligent beings could live in the future than live today, the most important thing for altruistic people to do in the present is to promote the welfare of those unborn beings: ensuring that their future comes to be by preventing existential risks, and that it is as good as possible.

MacAskill’s book on this topic, What We Owe the Future, received one of the biggest receptions of any philosophy monograph in recent memory, and both it and his more technical work with fellow Oxford philosopher Hilary Greaves make pointed, highly contestable claims about how to weigh future people against people alive today.

But the theoretical debate obscures what funding “longtermist” causes means in practice. One of the biggest shortcomings of MacAskill’s book, in my view, is that it failed to lay out what “making the future go as well as possible” involves in practice and policy. The most specific it got was in advocating measures to prevent human extinction or a catastrophic collapse in human society.

Unless you are a member of the Voluntary Human Extinction movement, you’ll probably agree that human extinction is indeed bad. And you don’t need to rely on the moral math of longtermism at all to think so.

If one goes through the “longtermist” causes funded by Bankman-Fried’s now-defunct charitable enterprises and by the Open Philanthropy Project (the EA-aligned charitable group funded by billionaires Cari Tuna and Dustin Moskovitz), the money is overwhelmingly dedicated to efforts to prevent specific threats that could theoretically kill billions of humans. Before the collapse of FTX, Bankman-Fried put millions into scientists, companies, and nonprofits working on pandemic and bioterror prevention and risks from artificial intelligence.


It’s fair and necessary to dispute the empirical assumptions behind those investments. But the core theory, that we are in an unprecedented age of existential risk and that humans must responsibly regulate technologies powerful enough to destroy us, is very reasonable. While critics often charge that longtermism takes resources away from more pressing present problems like climate change, the reality is that pandemic prevention is bafflingly underfunded, both relative to climate change and especially relative to the seriousness of the threat, and longtermists were trying to do something about it.

Sam’s brother and main political deputy, Gabe Bankman-Fried, was investing serious capital in a strategy to force an evidently unwilling Congress to appropriate the tens of billions of dollars annually needed to make sure nothing like Covid happens again. Mainstream funders like the MacArthur Foundation had pulled out of nuclear security programs, even as the war in Ukraine made a nuclear exchange likelier than it had been in decades, but Bankman-Fried and groups he supported were eager to fill the gap.

I have a hard time looking at those funding decisions and concluding that’s where things went wrong.

… the problem is the dominance of philosophy

Even before the fall of FTX, longtermism was creating a notable backlash as the “parlor philosophy of choice among the Silicon Valley jet-pack set,” in the words of the New Republic’s Alexander Zaitchik. Some EAs like to harp on mischaracterizations by longtermism’s critics, blaming them for making the concept seem bizarre.

That might be comforting, but it’s mistaken. Longtermism seems weird not because of its critics but because of its proponents: it’s expressed mainly by philosophers, and there are strong incentives in academic philosophy to carry out thought experiments to increasingly bizarre (and thus more interesting) conclusions.

This means that longtermism as a concept has been defined not by run-of-the-mill stuff like donating to nuclear nonproliferation groups, but by the philosophical writings of Nick Bostrom, MacAskill, Greaves, and Nick Beckstead, thinkers who have risen to prominence in part because of their willingness to expound on extreme ideas.

These are all smart people, but they are philosophers, which means their entire job is to test out theories and frameworks for understanding the world, and try to sort through what those theories and frameworks imply. There are professional incentives to defend surprising or counterintuitive positions, to poke at widely held pieties and components of “common sense morality,” and to develop thought experiments that are memorable and powerful (and because of that, pretty weird).

This isn’t a knock on philosophy; it’s what I studied in college and a field from which I have learned a tremendous amount. It’s good for society to have a space for people to test out strange and surprising concepts. But whatever boundary-pushing concepts are being explored, it’s important not to mistake that exploration for practical decision-making.

When Bostrom writes a philosophy article for a philosophy journal arguing that total utilitarians (who think one should maximize the total sum of happiness in the world) should prioritize colonizing the galaxy, that should not, and cannot, be read as a real policy proposal, not least because “colonizing the galaxy” probably is not even a thing humans can do in the next thousand years. The value in that paper is exploring the implications of a particular philosophical system, one that very well might be badly wrong. It sounds science fictional because it is, in fact, science fiction, in the ways that thought experiments in philosophy are often science fiction.


The dominance of academic philosophers in EA, and those philosophers’ increasing attempts to apply these kinds of thought experiments to real life — aided and abetted by the sudden burst of billions into EA, due in large part to figures like Bankman-Fried — has eroded the boundary between this kind of philosophizing and real-world decision-making. Poets, as Percy Shelley wrote, may be the unacknowledged legislators of the world, but EA made the mistake of trying to turn philosophers into the actual legislators of the future. A good start would be more clearly stating that funding priorities, for now, are less “longtermist” in this galaxy-brained Bostrom sense and more about fighting specific existential risks — which is exactly what EA funders are doing in most cases. The philosophers can tread the cosmos, but the funders and advocates should be tethered closer to Earth.

The problem isn’t billionaires’ billions …

Second only to complaints about longtermism in the corpus of anti-effective altruist writing are complaints that EA is inherently plutocratic. Effective altruism began with the group Giving What We Can, which asked members (including me) to promise to give 10 percent of their income to effective charities for the rest of their lives.

This, to critics, equates “doing good” with “giving money to charity.” The problem only grew when the donor base was no longer individuals making five or six figures and donating 10 percent, but literal billionaires. Not only that, but those billionaires (including Bankman-Fried but also Tuna and Moskovitz) became increasingly interested in investing in political change through advocacy and campaigns.

Longtermist goals, even less cosmic ones like preventing pandemics, require political action. You can’t stop the next Covid or prevent the rise of the robots with all the donated anti-malaria bednets in the world. You need policy. But is that not anti-democratic, to allow a few rich people to try to influence the whole political system with their fortunes?

It’s definitely anti-democratic, but not unlike democracy itself, it’s also the best of a few rotten options. The fact of the matter is that, in the United States in the 21st century, the alternative to a politics that largely relies on benevolent billionaires and millionaires is not a surge in working-class power. The alternative is a total victory for the status quo.

Suppose you live in the US and would like to change something about the way our society is organized. This is your first mistake: You want change. The US political system is organized in such a way as to produce enormous status quo bias. But maybe you’re lucky and the change you want is in the interest of a powerful corporate lobby, like easing the rules around oil drilling. Then corporations who would benefit might give you money — and quite a lot of it — to lobby for it.

What if you want to pass a law that doesn’t help any major corporate constituency? Which is, y’know, most good ideas for laws? Then your options are very limited. You can try to start a major membership association like the AARP, where small contributions from members fund the bulk of the group’s activities. This is much easier said than done. Groups like this have been on the decline for decades, and major new membership groups like Indivisible tend to get most of their money from sources other than their members.


What sources, then? There’s unions — or perhaps more accurately, there were unions. In 1983, 20.1 percent of American workers were in a union. In 2021, the number was 10.3 percent. A measly 6.1 percent of private sector workers were unionized. The share just keeps falling and falling, and while some smart people have ideas to reverse it, those ideas depend on government actions that would themselves require plenty of lobbying to reach fruition, and who exactly is going to fund that? Unions can barely keep themselves afloat, much less fund extensive advocacy outside their core functions. The Economic Policy Institute, long the most influential union-aligned think tank in the US, took only 14 percent of its funding from unions in 2021.

So the answer to “who funds you” if you are doing advocacy or lobbying and do not work for a major corporation is usually “foundations.” And by “foundations,” I mean “millionaires and billionaires.” There’s no small irony in the fact that causes from expanded social safety net programs to increased access to health insurance to higher taxes on rich people are primarily funded these days by rich people and their estates.

It’s one of history’s strangest twists that Henry Ford, possibly the second most influential antisemite of the 20th century, wound up endowing a foundation that funded the creation of progressive groups like the Natural Resources Defense Council and the Mexican American Legal Defense and Educational Fund. But it happened, and it happens much more than you’d think. US history is littered with progressive social movements that depended on wealthy benefactors: Abolitionists depended on donors like Gerrit Smith, one of the richest men in New York, who bankrolled the Liberty and Republican parties as well as John Brown’s raid on Harpers Ferry; Brown v. Board of Education was the result of a decades-long strategy of the NAACP Legal Defense Fund, a fund created due to the intervention of the Garland Fund, a philanthropy bankrolled by an heir of a senior executive of what’s now Citibank.

Is this arrangement ideal? Of course not. Scholar Megan Ming Francis has recently argued that even the Garland Fund provides an example of wealthy donors perverting the goals of social movements. She contends it pushed the NAACP away from a strategy focused on fighting lynching toward one focused on school desegregation. That strategy won Brown, but it also undercut goals that were, at the time, more important to Black activists.

These are important limitations to keep in mind. At the same time, would I have preferred the Garland Fund not invest in Black liberation at all? Of course not.

This, essentially, is why I find the SBF saga unpersuasive as an argument against billionaire philanthropy in general. It is completely intellectually consistent to decide that accepting funding from wealthy, potentially corrupt sources is unacceptable, and that it is okay, as would inevitably follow, if this kind of unilateral disarmament materially hurts the causes you care about. It’s intellectually consistent, but it means accepting defeat on everything from higher taxes on the rich to civil rights to pandemic prevention.

… it’s the porous boundaries between the billionaires and their giving

There’s a basic difference between Bankman-Fried’s charitable efforts and august ones like the Rockefeller and Ford foundations: the latter are, fundamentally, professional. They’re well-staffed, normally run institutions. They have HR departments and comms teams and accountants and all the other stuff you have when you’re a grown-up running a grown-up organization.

There are disadvantages to being normal (groupthink, excessive conformity) but profound advantages, too. All these normal practices emerged for a reason: They were added to institutions over time to solve problems that reliably come up when you don’t have them.


The Bankman-Fried empire was not normal in any way. For one thing, it had already sprawled into a bevy of different institutions in the very short time it existed. The most public-facing group was the FTX Future Fund, but there was also Building a Stronger Future, a funder sometimes described as a “family foundation” for the Bankman-Frieds. (That’s the one that awarded the grant to Future Perfect.) There was also Guarding Against Pandemics, a lobbying group run by Gabe Bankman-Fried and funded by Sam.

The deeper problem, behind these operational hiccups, is that there was no clear, hierarchical structure for deciding where Bankman-Fried’s fortune went: nothing separated charitable decision-making from Bankman-Fried the individual. I never met SBF in person or talked to him one on one — but on a couple of occasions, members of his charity or political networks pitched me ideas and CC’d Sam. This is not, I promise you, how most foundations operate.

Bankman-Fried’s operations were deeply incestuous, in a way that has had profoundly negative consequences for the causes that he professed to care about. If Bankman-Fried had given his fortune to an outside foundation with which he and his family had limited involvement, his downfall would not have tainted, say, pandemic prevention groups doing valuable work. But because he put so little distance between himself and the causes he supported, dozens of worthwhile organizations with no involvement in his crimes find themselves not only deprived of funding but with serious reputational damage.

The good news for EAs is that Open Philanthropy, the remaining major EA-aligned funding group, is a much more normal organization. Its form of professionalization is something for the rest of the movement to emulate.

The problem is utilitarianism free from any guardrails …

Sam Bankman-Fried is a hardcore, pure, uncut Benthamite utilitarian. His mother, Barbara Fried, is an influential philosopher known for arguing that consequentialist moral theories like utilitarianism, which focus on the actual results of individual actions, are better suited to the difficult real-world trade-offs one faces in a complex society. Her son apparently took that insight very, very seriously.

Effective altruists aren’t all utilitarians, but the core idea of EA — that you should attempt to act in such a way as to promote the greatest human and animal happiness and flourishing achievable — is shot through with consequentialist reasoning. The whole project of trying to do the most good you can implies maximizing, specifically maximizing “the good,” and that is the literal definition of consequentialism.

It’s not hard to see the problem here: If you’re intent on maximizing the good, you better know what the good is — and that isn’t easy. “EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA,” Holden Karnofsky, the co-CEO of Open Philanthropy and a leading figure in the development of effective altruism, wrote in September. “By default, that seems like a recipe for trouble.”

Indeed it was. It looks increasingly likely that Sam Bankman-Fried engaged in extreme misconduct precisely because he believed in utilitarianism and effective altruism, and that his mostly EA-affiliated colleagues at FTX and Alameda Research went along with the plan for the same reasons.


When he was an undergrad at MIT, Bankman-Fried was reportedly planning to work on animal welfare issues until a pivotal conversation with Will MacAskill, who told him that because of his mathematical prowess, he might be able to do more good by working as a “quant” in the finance sector and donating his healthy earnings to effective charities than he ever could giving out flyers promoting veganism.

This idea, known as “earning to give,” was one of the first distinctive contributions of effective altruism as a movement, specifically of the group 80,000 Hours, and I think taking a high-earning job with the explicit aim of donating the money still makes a lot of sense for most people with big-money options.

But what SBF did was not just quantitatively but qualitatively different from classic “earn to give.” You can make seven figures a year as a trader in a hedge fund, but unless you manage the whole fund, you probably won’t become a billionaire. Bankman-Fried very much wanted to be a billionaire — so he could have more resources to devote to EA giving, if we take him at his word — and to do that, he set up whole new corporations that never would’ve existed without him. Those corporations then engaged in incredibly risky business practices that never would’ve occurred if he and his team hadn’t entered the field. He was not one-for-one replacing another finance bro who would have used the earnings on sushi and strippers rather than altruistic causes. He was building a whole new financial world, with consequences that would be much grander in scale.

And in building this world, he acted like a vulgar utilitarian. Philosophers like to talk about “biting the bullet”: accepting an unsavory implication of a theory you’ve adopted, and arguing that this implication really isn’t that bad. Every moral theory has bullets to bite; Kant, who believed morality was less about good consequences than about treating humans as ends in themselves, famously argued that it is never acceptable to lie. That leads to freshman seminar-level questions about whether it’s okay to lie to the Gestapo about the Jewish family you’re hiding in your attic. Biting the bullet in this case — being true to your ethics — means the family dies.

Utilitarianism has ugly implications, too. Would you kill one healthy person to redistribute their organs to multiple people who need them to live? The reality is that if a conclusion is ugly enough, the correct approach isn’t to bite the bullet but to think about how your moral theory could accommodate a more reasonable conclusion. In the real world, we should never harvest hearts and lungs from healthy, unconsenting adults, because a world where hospitals would do that is a world where no one ever goes to the hospital. If the conclusions are ugly enough, you should just junk the theory, or temper it. Maybe the right theory isn’t utilitarianism, but utilitarianism with a side constraint forbidding ever actively killing people. That theory has problems, too (what about self-defense? a defensive war like Ukraine’s?), but thinking through these problems is what moral philosophers spend all day doing. It’s a full-time job because it’s really hard.

… and a utilitarianism full of hubris …

Bankman-Fried’s error was an extreme hubris that led him to bite bullets he never should have bitten. He famously told economist Tyler Cowen in a podcast interview that if faced with a game where “51 percent [of the time], you double the Earth out somewhere else; 49 percent, it all disappears,” he’d keep playing the game continually.

This game is a version of the St. Petersburg paradox, a confounding problem in probability theory: it’s true that playing the game creates more happy human lives in expectation (that is, adjusting for probabilities) than not playing, but if you keep playing, you’ll almost certainly wipe out humankind. It’s an example of where the normal rules of rationality seem to break down.

But Bankman-Fried was not interested in playing by the normal rules of rationality. Cowen notes that if Bankman-Fried kept this up, he’d almost certainly wipe out the Earth eventually. Bankman-Fried replied, “Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.”
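
To make the expected-value logic concrete, here is a minimal simulation (my own illustration, not anything from the interview) of the 51/49 doubling game, with the world's starting value normalized to 1. Each round pays 0.51 × 2 = 1.02 in expectation, so the expected value keeps growing the longer you play, even as the chance that anything survives shrinks toward zero:

```python
import random

def play_doubling_game(rounds: int, trials: int = 100_000) -> tuple[float, float]:
    """Play the 51/49 game repeatedly, many times over.

    Each round: with probability 0.51 the world's value doubles;
    with probability 0.49 everything is wiped out (value goes to 0).
    Returns (average final value, fraction of trials that survived).
    """
    total = 0.0
    survivors = 0
    for _ in range(trials):
        value = 1.0
        for _ in range(rounds):
            if random.random() < 0.51:
                value *= 2
            else:
                value = 0.0
                break
        total += value
        if value > 0:
            survivors += 1
    return total / trials, survivors / trials

for n in (1, 5, 10, 20):
    mean, survival = play_doubling_game(n)
    # In expectation the game is a winner: the mean tracks 1.02**n.
    # But the chance that anything is left is 0.51**n, which collapses
    # toward zero -- the expectation rides on vanishingly rare branches
    # where the world survived and doubled every single round.
    print(f"{n:>2} rounds: mean {mean:10.3f} (theory {1.02**n:.3f}), "
          f"survival {survival:.6f} (theory {0.51**n:.6f})")
```

At 20 rounds, the theoretical expectation is about 1.49, but the simulated average will usually come out as exactly zero, because the expectation is carried entirely by branches with roughly a 1-in-700,000 chance of occurring. That gap between "positive in expectation" and "almost certain ruin" is exactly what Cowen was pressing on.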


These are fun dorm room arguments. They should not guide the decision-making of an actual financial company, yet there is some evidence they did. An as-yet-unconfirmed account of an Alameda all-hands meeting describes CEO Caroline Ellison explaining to staff that she and Bankman-Fried faced a choice in early summer 2022: either to let Alameda default after some catastrophic losses, or to raid consumer funds at FTX to bolster Alameda. As the researcher David Dalrymple has noted, this was basically her and Bankman-Fried making a “double or nothing” coin flip: By taking this step, they reasoned they could either save Alameda and FTX or lose both (as wound up happening), rather than keep just FTX, as in a scenario where the consumer funds were not raided.

This is not, I should say, the first time a consequentialist movement has made this kind of error. While Karl Marx denied having any moral views at all (he was a “scientific” socialist, not a moralist), many Marx scholars have described his outlook as essentially consequentialist: followers should act in whatever ways further the long-run revolution. More importantly, Marx’s most talented followers understood him in this way. Leon Trotsky defined Marxist ethics as the belief that “the end is justified if it leads to increasing the power of man over nature and to the abolition of the power of man over man.” In service of this end, all sorts of means (“if necessary, by an armed rising: if required, by terrorism,” as he wrote in an earlier book) are justified.

Trotsky, like Bankman-Fried, was wrong. He was wrong in using a consequentialist moral theory in which he deeply believed to justify all manner of actions — actions that in turn corrupted the project he had joined beyond measure. By winning power through terror, with a secret police and the crushing of dissenting factions, he helped create a state that operated similarly and would eventually murder him.

Bankman-Fried, luckily, has yet to kill anyone. But he’s done a huge amount of harm, due to a similar sense that he was entitled to engage in grand consequentialist moral reasoning when he knew there was a high probability that many other people could get hurt.

… but the utilitarian spirit of effective altruism still matters

Since the FTX empire collapsed, it has been open season on effective altruism, as well there should be. EAs messed up. To some degree, we’ve got to just take the shots, update our priors, and keep going.

The only criticism that really gets under my skin is this: that the basic premises of EA are trite, or universally held. As Freddie deBoer, the raconteur and essayist, put it: “the correct ideas of EA are great, but some of them are so obvious that they shouldn’t be ascribed to the movement at all, while the interesting, provocative ideas are fucking insane and bad.”

This impression is largely the fault of EA’s public messaging. The philosophy-based contrarian culture means participants are incentivized to produce “fucking insane and bad” ideas, which in turn become what many commentators latch onto when trying to grasp what’s distinctive about EA. Meanwhile, the definition the Centre for Effective Altruism uses (“a project that aims to find the best ways to help others, and put them into practice”) really does seem kind of trite in isolation. Isn’t that what everyone’s doing?

No, they are not. I used to regularly post about major donations from American billionaires, and you’d be amazed at the kind of bullshit they fund. David Geffen spent $100 million on a new private school for children of UCLA professors (faculty brats: famously the wretched of the earth). John Paulson gave $400 million to the famously underfunded Harvard University and its particularly underfunded engineering division (the fact that Harvard’s computer science building is named after the mothers of Bill Gates and Steve Ballmer should tell you something about its financial condition). Stephen Schwarzman gave Yale $150 million for a new performing arts center; why not an international airport?


You don’t need to be an effective altruist to look at these donations and wonder what the hell the donors were thinking. But EA gives you the best framework I know with which to do so, one that can help you sift through the detritus and decide what moral quandaries deserve our attention. Its answers won’t always be right, and they will always be contestable. But even asking the questions EA asks — how many people does this affect? Is it at least millions if not billions? Is this a life-or-death matter? A wealth or destitution matter? How far can a dollar actually go in solving this problem? — is to take many steps beyond where most of our moral discourse goes.

One of the most fundamentally decent people I’ve met through EA is an ex-lawyer named Josh Morrison. After donating his kidney to a stranger, Morrison left his firm to start a group promoting live organ donation. We met at an EA Global conference in 2015, and he proceeded to walk me through my own kidney donation process, taking a huge amount of time to help someone he barely knew. These days he runs a group that advocates for challenge trials, in which altruistic volunteers are willingly infected with diseases so that vaccines and treatments can be tested more quickly and effectively.

Years later, we were getting lunch when he gave me, for no occasion other than he felt like it, a gift: a copy of Hilary Mantel’s historical novel A Place of Greater Safety, which tells the story of French revolutionaries Camille Desmoulins, Georges Danton, and Maximilien Robespierre. All of them began as youthful, idealistic opponents of the French monarchy, and all would be guillotined before the age of 37. Robespierre and Desmoulins were school chums, but the former still ordered the latter’s execution.

It reminded Josh a bit of the fervent 20- and 30-something idealists of EA. “I hope this book doesn’t turn out to be about us,” he told me. Even then, I could tell he was only half-joking.

Bankman-Fried has more than a whiff of this crew about him (probably Danton; he lacks Robespierre’s extreme humorlessness). But if EA has just been through its Terror, there’s a silver lining. The Jacobins were wrong about many things, but they were right about democracy. They were right about liberty. They were right about the evils of the ancien regime, and right to demand something better. The France of today looks much more like that of their vision than that of their enemies.

That doesn’t retroactively justify their actions. But it does justify the actions of the thousands of French men and women who learned from their example and worked, in peace, for two centuries to build a still-imperfect republic. They didn’t give up the faith because their ideological ancestors went too far.

EAs can help the world by keeping the faith, too. Last year, GiveWell, one of the earliest and still one of the best EA institutions, directed over $518 million toward its top global health and development charities. It chose those charities because they had a high probability of saving lives or making lives dramatically better through higher earnings or lessened illness. By the group’s metrics, the donations it drove to four specific groups (the Against Malaria Foundation, Malaria Consortium, New Incentives, and Helen Keller International) saved 57,000 lives in 2021. The group’s recommendations to those charities from 2009 to the present have saved some 159,000 lives. That’s about as many people as live in Alexandria, Virginia, or Charleston, South Carolina.

GiveWell should be proud of that. As someone who’s donated tens of thousands of dollars to GiveWell top charities over the years, I’m personally very proud of that. EA, done well, lets people put their financial privilege to good use, to literally save lives, and in the process to give their own lives meaning. That’s something worth fighting for.

