Making a Better World and the Dangers of Effective Altruism
- Jake Browning
When I first discussed effective altruism with an economist years back, I was shocked by her hostility. How could anyone oppose efforts to make the world a better place? So much aid money is wasted on upper management, so many charities are big on theory and short on results, and so often interventions fail. It strikes me, even now, that it is a good thing for people to give more, and to give wisely. And, as such, effective altruism seems fundamentally a Good Thing--one that should be unobjectionable.
Some of my friend's criticisms came from the tradition that regards charity as part of the imperialist project. But, as I pointed out, effective altruism took this criticism seriously and was an attempt to avoid the "white man's burden" paternalism. My friend, though, highlighted a number of problems with evaluating aid, as well as with general efforts at using NGOs to do the work of government. These critiques were known a decade ago, but they are only now becoming mainstream. I acknowledged they were problems, but they didn't strike me as deal-breakers. Who shouldn't give more to charity?
But my friend's biggest beef was philosophical: the uncritical utilitarianism at the heart of the program. At the time, I assumed it was the standard rejection of the cold-heartedness at the core of the theory, one Mill comments on at length. But I was being naive, and I couldn't really see it until Silicon Valley, AI Safety, and Longtermism got in on the game. In those cases, it was clear that the general idea of encouraging people to give to charity effectively had morphed into something weird--something creepier, more insular, and kinda cultish. It had become Effective Altruism.
Return to Ethics Class
Personal note: I don't love teaching ethics classes. The classes revolve around two main "algorithmic" theories. The first (Kantianism) says we should live our lives according to strict, exceptionless rules, or "duties." The second (utilitarianism) says we should do whatever will maximize the pleasure and minimize the pain for the most sentient beings. The former offers a simple algorithm for generating duties (the cumbersomely named "categorical imperative"). The latter algorithm is much more complicated, because we need to start by gathering up a bunch of data points on different aspects of pleasure and pain--how "fecund" is a pleasure? what about its "propinquity"?--and then adjudicate their relative worth--is a short but intense pain better than a long but mild one?
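To make the "plug-and-play" flavor of that algorithm concrete, here is a deliberately crude sketch of my own; the dimensions, weights, and scoring are invented for illustration, not drawn from Bentham or from anything EA organizations actually compute:

```python
# A toy sketch, entirely my own construction (not anything Bentham, Mill, or the
# EA movement actually uses): score each action by summing weighted pleasure/pain
# "dimensions" for everyone affected, then pick whichever action scores highest.

# Hypothetical weights for Bentham-style dimensions of a pleasure or pain.
WEIGHTS = {"intensity": 1.0, "duration": 1.0, "certainty": 0.8,
           "propinquity": 0.5, "fecundity": 0.7}

def utility(experience):
    """Weighted sum for one experience; sign is +1 for a pleasure, -1 for a pain."""
    return experience["sign"] * sum(WEIGHTS[d] * experience[d] for d in WEIGHTS)

def score(action):
    """Total utility of an action across every affected person's experiences."""
    return sum(utility(e) for e in action["experiences"])

def best_action(actions):
    """The 'algorithm': whichever action maximizes total utility wins."""
    return max(actions, key=score)
```

Every hard question--what the weights should be, whose experiences count, how a short sharp pain compares to a long dull one--has to be smuggled into the inputs, which is exactly where the algorithm's apparent simplicity breaks down.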
But the problem--which makes ethics classes frustrating--is that there are obvious fatal problems with both algorithms. Kant famously suggests that, if you were sheltering someone from a killer, it would still be wrong to lie to the killer about it. But of course we should lie in that case. Protecting the innocent from murderers is far more important than honesty. This might lead you to think utilitarianism is a better theory. But it's also flawed: should a doctor sacrifice a perfectly healthy drifter to donate their organs to 10 people in need? After all, only one person dies so that 10 others can live! But, of course, that is also abhorrent; we should not be killing people to help others, even if we could get away with it without consequence. There are attempts to "save" both theories: Kantians allow exceptions in some circumscribed cases; utilitarians simplify the algorithm or maximize something else, like replacing pleasure with "preferences." A lot of articles are churned out about it. I don't find much of it interesting, but it is all available if anyone wants to look.
What I highlight to my students, though, is something else. Take the case of the doctor sacrificing a drifter. When I ask students if it is ok, someone blurts out a hunch, someone else rejects it, and then everyone starts arguing about what's best. The algorithms are solid intuition pumps, effective means to get us started on how to wrestle with ethical problems and to recognize options, possible problems, justifications, and so on. And students tend to be perfectly good at questioning the algorithm in the same kinds of ways professional philosophers do. "Won't people stop trusting doctors if they are killing people?" "It's wrong to use someone like that." "The people receiving organs wouldn't want them if that is how they got them." And so on. Students can engage in pretty good critical thinking once the problems are in play.
The take-away I try to push on my students in ethics classes ends up being: "the hard part of ethics is that you need good judgment." There is no simple solution; you just have to weigh a bunch of different factors, like balancing "non-maleficence," "beneficence," "justice," and "autonomy" in medical ethics. In other words, just combine Kantian and utilitarian considerations in a complex, context-specific, empathetic way, producing the best outcome you can: drawing on expertise without being overly paternalistic, avoiding unnecessary harms without expending resources that could be put to better use. Good luck, students!
Uncritical Altruism
This is where my economist friend's criticisms came in: a lot of EA people simply deferred to utilitarianism in a really uncritical, mindless way. The obvious example, at present, is Sam Bankman-Fried (SBF). A major proponent of EA, William MacAskill, encouraged SBF to go into finance in order to donate more money to charity. MacAskill revealed really bad judgment here. Finance is a cut-throat industry that often makes money by encouraging companies to pursue profits in ways that are deeply harmful, such as polluting the environment, underpaying workers, or lying to the public.
SBF decided to go into cryptocurrency finance. Another sign of bad judgment. Cryptocurrencies--especially Bitcoin--are horribly polluting. Pollution may be a necessary cost of a good product or service--like building electric cars or providing public transportation. But cryptocurrency, at present, is useless (except for cybercrime). It is also a bad investment and rife with scams and scammers. On the whole, not something net ethically good.
But SBF went further and engaged in widespread fraud, gambling with other people's money to invest in more cryptocurrency. Again, bad judgment. When caught, he did a bunch of other stupid things: trying to erase evidence, violating court orders, engaging in bribery, and eventually (against his lawyers' counsel) taking the stand at his own trial. It all went horribly. Bad, terrible, awful judgment.
And a bunch of EA institutes had taken SBF's money for years, even while knowing he was acting unethically. Bad judgment again. As a result, some of them folded, lots of projects were scuttled, grants rescinded, and names besmirched. Pretty much everyone looked awful and foolish.
There are a lot of errors happening here. Maybe SBF is delusionally megalomaniacal? This is possible. Look at his comments on Shakespeare. But why didn't other people grasp how unethical his actions were? Looking over it, it was clear SBF and MacAskill and the other EA folks thought what they were doing was for The Greater Good. They had taken the utilitarian idea of maximizing good and then started making ethically dubious choices around the logic of "noble ends justify ignoble means." Maybe some even still think it worked out: SBF didn't end up losing much money, and lots of aid was distributed. A couple of broken eggs come with the omelet, or so the argument goes.
The Perils of Theory
But this excuse misses the point. The real issue is the reverse: one of the main reasons for teaching the big theories is to highlight how difficult real-world ethics is, how many different possible answers there are, and the dangers of simple solutions or deference to mindless rule-following. If students internalize the message, they will grasp the need to acquire good judgment. The general way of acquiring it, of course, is to start small and slowly build up the practical reasoning abilities to balance all the different factors involved. This is why ethics classes typically come with a bunch of case studies, hypotheticals, and other intuition pumps: students gain experience with how hard even simple cases are. Teachers can also use them to trip up confident students, showing them the harms of looking for easy answers to difficult problems. Students hopefully end up humbled by it.
But it's hard to overcome youth's ignorance. Young people like Kantian and utilitarian theories because they suggest it is possible to be wise without experience: just plug and play, and out comes moral judgment. It's why any theory--in nearly any field--is so exhilarating for college students. You provide them with this incredibly high-powered framework that turns a bunch of disconnected data points into an entire worldview: here are a bunch of disconnected facts and events but, if you see them through this historical lens, out comes "systemic racism." Here is a set of numbers and here is this model and, bam, out comes global warming. Or an explanation for inflation connected to the money supply. Or an account of why productivity is flat. Or whatever.
This is intoxicating for students--but, sadly, it also leads them to be pretty stupid. They rarely learn to apply the theory, instead adopting a rough-and-ready mental model as a proxy. The mental model is typically too simple to do much good outside easy cases, and it tends to go haywire when extended in novel directions. The result is that students end up confident in their own understanding of the world in ways that strike outsiders as patently absurd (again, SBF on Shakespeare).
Simple Solutions to Wicked Problems
The curse of EA is thus not unique to it: plenty of theories with a good basis become silly when they are uncritically embraced and overextended. But EA has proven time and again to lure people towards simple solutions to complex problems. In doing so, it often depends on people behaving unethically in various ways: overconfidence tends to lead to paternalism and, with it, a contempt for the agency of others. In extreme cases, it simply ceases to value some lives altogether, effectively writing them off in advance because, statistically, few of them will stack up on whatever metric is of interest. The many people who donate extensively to solving improbable existential risks, while ignoring charities addressing the harms of climate change or the needs of the most vulnerable, show how obsession with theory over reality causes actual harm.
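The arithmetic driving that obsession is easy to reconstruct. A toy expected-value comparison (the numbers below are invented purely for illustration, not taken from any actual EA analysis) shows how a vanishingly small probability multiplied by an astronomical payoff can swamp any concrete, near-term good:

```python
# Invented numbers, purely to illustrate the expected-value logic.

# A concrete, near-certain intervention: save 10,000 lives.
near_term = 1.0 * 10_000            # expected lives saved: 10,000

# A speculative existential-risk bet: a one-in-a-billion chance of
# "saving" ten trillion hypothetical future lives.
speculative = 1e-9 * 1e13           # expected lives saved: 10,000 -- already a tie

# Imagine a slightly larger future population and the speculative bet "wins"
# every time, no matter how shaky the probability estimate is.
speculative_bigger = 1e-9 * 1e16    # expected lives saved: 10,000,000

print(near_term, speculative, speculative_bigger)
```

On paper the bet looks unbeatable; in practice both numbers in the product are guesses, and deciding how much weight to give them is exactly the judgment the theory quietly replaces.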
And this is just why it is dangerous for students to embrace theory over "good judgment." Good judgment is an experience thing, and experience is at least partially an age thing. Aging provides numerous lived counterexamples to the clean, ideal theories learned in college. The effect is to replace the all-powerful, theory-of-everything mindset with the more modest acknowledgment that all models are wrong, but some are still useful. Good judgment allows us to refrain from overextending theory, acknowledging, as Oliver Wendell Holmes put it, that "the [theoretical] form and method flatter that longing for certainty and for repose which is in every human mind. But certainty generally is illusion, and repose is not the destiny of man." Theory can help, but it can also enfeeble us, flattening and simplifying the world too much--preventing us from seeing alternatives or appreciating nuance. While theory transforms the richness of the world into something more intelligible, we need to keep in mind that it does so by obscuring and distorting things. Those things may not matter for the theory, but experience tells us never to assume they won't matter--especially when it comes to ethics.
Abandoning the Theory but Retaining the Aspiration
The difference between effective altruism and Effective Altruism, then, can be specified in terms of judgment: the former is good judgment, an acknowledgment that charitable giving is a good trait to be cultivated--a recognition of our privilege and how it can be used to address suffering in the world. EA, by contrast, is theory-driven foolishness: maximizing charitable giving by maximizing one's income; maximizing future pleasure by investing in climate-harming AI that can "save the world" rather than working on reducing emissions today; providing fewer resources to "low-value" current humans who will live short, painful lives in favor of "high-value" future humans who will live longer and be markedly happier; pushing money into speculative, high-risk ventures because the potential rewards are worthwhile; and so on. The whole model is "optimization"--cranking some specific variable up to the max.
Any theory that focuses on maximizing a single value to the exclusion of others poses a gross danger of fanaticism. A paternalistic goal of limiting suffering often ends up as a license to devalue human life. The various parts of EA know this abstractly; the fears of a superintelligent being destroying the world tend to be based on a system focused on maximizing a single value, like building paperclips or maximizing profit. They acknowledge that, in this "ends justify the means" system, it is likely the system will destroy everything to maximize its single variable. But that their own behavior often imitates this is less clearly perceived.
The lack of critical thinking about the matter ends up turning something that should be good into something perverse and sometimes harmful. It is a helpful reminder that there is no shortcut to making the right decision: you need good judgment, and there's no plug-and-play algorithm for that.

