243 Comments
Bill Anschell:

You’re one of my very favorite writers, and I feel like I learn a lot from your column. I don’t know how you’re able to read so much, listen to so much, and write so much; your productivity is astounding, and you do it all with unique insights.

Today’s column is the first time I’ve felt you were unfair, and not just a little.

My son works for an Effective Altruist (EA) organization founded by a multi-billionaire to research how he can spend his billions of charitable dollars to do the most good. My son lives and breathes EA, and I have never heard him talk, not even once, about "maximizing pleasure." He’s currently in Ethiopia leading a delegation of South Korean parliamentarians, showing them vaccination facilities and other cost-effective ways to save lives in a country that needs support; the hope is that the parliamentarians will lobby their government to increase funding for such projects. EAs tend to focus on countries where the most good can be done at the least cost, so developing countries are at the top of their list.

It’s true that many EAs are consequentialists, but not in the name of having a good time. They deal with dicey equations like how much sacrifice today is justifiable to achieve a better tomorrow. Similarly, they’ll make the difficult suggestion that resources spent to do good on the local level in this country would be better spent in another country where the dollars—and the good that can be done—go much further. That part of EA can make a lot of people uncomfortable, and understandably so, but that doesn’t make it wrong.

Aspiring EAs have traditionally had two primary career choices: working within EA to identify and promote charitable causes that give the most bang for the buck (i.e., lives saved or substantially improved per dollar spent), or “earning to give”—following the path that maximizes the amount of money they can make and thereby eventually donate.

Samuel Bankman-Fried gave the EA movement a big black eye by twisting “earn to give” to allow ripping off investors and shareholders. Effective Altruists would not support any such unethical activities, and they’ve dialed down the whole “earn to give” side of the equation as a result of what he’s done. Samuel Bankman-Fried may have started as—or claimed to be—an Effective Altruist, but in no way does he represent the movement. The day his criminal activities were revealed was absolutely brutal for Effective Altruists; not only did he damage the movement in the public eye, but billions of dollars that were expected to go to charitable good vanished. To be very clear: The EA movement would have endorsed his plan to make as much money as possible to donate to worthy causes (if he ever really meant that), but they would never have endorsed the way he went about it. He can call himself an Effective Altruist, but I challenge you to find an Effective Altruist who would want anything to do with him.
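To make “bang for the buck” concrete, here is a minimal sketch in Python; the charities and figures are purely hypothetical, and only the per-dollar comparison matters:

```python
# Minimal sketch of an EA-style cost-effectiveness comparison.
# All charity names and figures below are hypothetical.

charities = {
    "overseas malaria nets (hypothetical)": {"cost": 1_000_000, "lives_saved": 200},
    "local program (hypothetical)": {"cost": 1_000_000, "lives_saved": 2},
}

for name, c in charities.items():
    # Dollars needed per life saved: lower means more cost-effective.
    print(f"{name}: ${c['cost'] / c['lives_saved']:,.0f} per life saved")

# The heuristic: direct the marginal dollar to the option with the
# lowest cost per life saved (or substantially improved).
best = min(charities, key=lambda n: charities[n]["cost"] / charities[n]["lives_saved"])
print("fund next:", best)
```

This is why developing countries end up at the top of the list: the same dollar simply buys far more lives saved there.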

I’m curious where you came up with the idea that Effective Altruism is about maximizing pleasure. Is it in writing somewhere? If not, I think you’re being grossly unfair, and I honestly don’t understand why; it doesn’t seem at all consistent with all the well-researched and unerringly fair columns you’ve posted in the past. The whole device about EAs supporting the idea of Granny being sold to sex traffickers to maximize human pleasure in the long run seems—and I hate to say this to someone as deep and thoughtful as you—completely disingenuous and terribly misleading. I challenge you to find a single Effective Altruist who would support it.

And, yes, my progeny is an Effective Altruist, and I’m very proud of him. He lives his life to achieve the most benefit for mankind (and animals as well—animal welfare is a major EA concern); he puts me to shame. "Maximizing pleasure" is part of the equation only insofar as it makes him feel good to have a positive influence on the world.

My son is based in the Bay Area, and I would love for you to get to know him to see what kind of “hate monger” he is. I’m sure he’d welcome the opportunity to talk with you about it.

Ted Gioia:

It sounds like your son is doing good deeds. I commend him, and I congratulate you as a parent.

But I commend charitable works of this sort even if they are done just out of compassion and good will. They certainly don't need a vague consequentialist philosophy to validate them. People did good deeds of this sort long before Effective Altruism even existed, and didn't require elaborate justifications.

The most important thing is that your son is working to help others in a very concrete way, and this is absolutely praiseworthy. Well done.

Thomas del Vasto:

Many people in the EA movement are kindhearted and good. I was a part of it for a long time.

That doesn't mean, however, that the philosophy is good overall or that it promotes the right things. There are absolutely bright spots in EA, but the blind spots are so large, and can lead to so much harm, that I don't think the confidence most EAs have is warranted.

That confidence is a double-edged sword: it is what allows so many EAs to devote their lives to the cause, but it also leads them down really dark paths.

Chris Grasso:

Thank you so much for your comment. I'm an agnostic about EA, but, as I said in a comment above, Bankman-Fried's use of it could easily have been a perversion of the philosophy in the same way people pervert the principles of different religions for nefarious ends. You seem to come to a similar conclusion based on a much deeper knowledge of, and personal experience with, EA. Again, thank you.

AJKamper:

Yes, our host is collapsing all of Effective Altruism into utilitarian consequentialism, a philosophy that is simply bankrupt.

My hope is that we are going to have a lot of EA types who simply want to do good cast off the utilitarian roots of their philosophy after seeing how easily it can be misused. I think we’re already seeing some of the thought leaders constrict their utilitarianism so tightly to avoid such abuses that it effectively turns into Kantianism. If that happens, then EA will be just fine, and we’re going to have a bunch of rationalist-cult people with the bad parts of the philosophy who are not part of the movement but try to use the branding as if they were.

Thomas del Vasto:

Yes, but if you have any idea who pulls the strings in EA, an extremely hierarchical movement, you'll realize that utilitarian consequentialism rules the day.

AJKamper:

Oh, that's fascinating! I had not heard this. I of course know of the loud thought leaders, but I hadn't ever thought of it as hierarchical. Care to provide background/examples?

Thomas del Vasto:

Open Philanthropy controls the vast majority of the money/leadership in the EA movement. Overall, the movement is highly elitist and keeps power in the hands of a loose organization of folks at the top.

https://forum.effectivealtruism.org/posts/54vAiSFkYszTWWWv4/doing-ea-better-1

https://forum.effectivealtruism.org/posts/dsCTSCbfHWxmAr2ZT/open-ea-global

Jochen Weber:

Thanks for posting those links! I only started reading the first... My overall problem (also maybe one major pushback against the Oxford school of philosophy) is the hubristic nature of the thought that "if I am careful enough in my mental modeling, I can predict the future."

I suspect that this kind of attitude (maybe born out of a desire to control the future, out of a fear of death and demise) is at the core of all philosophies that "go wrong" (lead to more harm than expected). No matter how carefully we study reality, I don't think we will ever be in a position to *completely* foresee the complex feedback loops that push on reality once we implement a change. As such, the best I can think of is always to (1) acknowledge that information and modeling are imperfect, (2) implement changes gradually, using biologically informed intuitions of "good and bad" (or evil) to avoid otherwise rationally justifiable pitfalls, (3) humbly observe the results, and (4) accept that results will likely differ from predictions, and then equally carefully course-correct.
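A rough way to picture steps (1) through (4) is as a feedback loop rather than a one-shot forecast. Here is a minimal sketch in Python, where measure_outcome and every quantity are invented stand-ins for observing reality:

```python
# Sketch of gradual change with observation and course-correction.
# measure_outcome and all numbers are invented for illustration.

def measure_outcome(level):
    # Stand-in for observing reality after an intervention; a real
    # outcome would be noisy and never fully predictable in advance.
    return 10 * level - level ** 2

level = 1.0            # (2) implement changes gradually, starting small
step = 0.5
previous = measure_outcome(level)

for _ in range(10):
    level += step
    outcome = measure_outcome(level)   # (3) humbly observe the results
    if outcome < previous:             # (4) results differ from predictions...
        step = -step / 2               # ...so course-correct rather than push on
    previous = outcome

print(f"settled near intervention level {level:.2f}")
```

The loop never claims to know the optimum in advance; it only commits to small steps and honest measurement.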

This whole notion of being able to use some sort of crystal ball of rationality to predict how things will turn out a year (let alone a century) from now seems so debunked, it is strange that any serious philosophy would still entertain that concept...

Thomas del Vasto:

Yeah, this nails a big problem with utilitarianism in my estimation too. It's easy to get sucked in when our models are so large and powerful nowadays, but on the other hand the results are much worse when we get things wrong BECAUSE our models and technologies are so powerful.

AJKamper:

Thank you! That's very helpful, and I'll scratch away at those as I have time (20K words for the first one! Goodness.).

That said, this post kind of illustrates my point: that there are undercurrents in EA that are trying to get it right, especially in light of the collapse of FTX. I _hope_ they will be successful; I would not blame anyone for being doubtful that it will come to pass.

Thomas del Vasto:

It seems that after the FTX fiasco there was indeed a large upswell of people trying to move away from the strict utilitarian control of those at the top. Unfortunately from my perspective that upswell failed, and most of the people trying to change the movement have either left or been pushed out.

Controlling the purse strings means a lot in this sort of nascent movement.

AJKamper:

Well, s***.

Thomas del Vasto:

I'm a bit cynical because I personally struggled through it, but yeah. If you go through the Doing EA Better post I linked upthread it represents the major push pretty well.

As far as I have seen pretty much none of the critiques in that post have been implemented or seriously considered.

Lynn Edwards:

I agree with you that this is what people, and I, have trouble with:

"They deal with dicey equations like how much sacrifice today is justifiable to achieve a better tomorrow. Similarly, they’ll make the difficult suggestion that resources spent to do good on the local level in this country would be better spent in another country where the dollars—and the good that can be done—go much further." I think that Thanos from the Marvel movies is a great example of an Effective Altruist, along with most Hollywood villains. I also think that if you really want to make an impact, support your local nonprofits.

bio terry:

Nothing altruistic about vaccination, ha ha... You mentioned he showed people around vaccination facilities... not mentioning the harm these vaccines are doing and did. And not to mention the word "philanthropist" and how much the media used this specific word for the last 3 years when they mention Bill Hates, all to the purpose of global citizens accepting this man as being the most altruistic man alive today...

Becoming Human:

If you take away the formulas for predicting the future and the clearly sociopathic "learn, earn, return" part of EA, then you are left with altruism, which it appears your son is engaged in, and you should be proud.

EA is now part of a story that includes artificial general intelligence and interplanetary travel. Just think about that. Both of these will consume unbelievable resources, and at least one could kill us. That is not altruism, that is just the same old delusion that held every tyrant through history in its thrall.

The dirty secret is that the most effective form of altruism in the world is fairness. When labor is not exploited through extraction, both happiness and most quality-of-life indicators rise. As a result, any "altruistic" approach that demands extraction as its starting point is predetermined to fail. EA has been a strong proponent of hedge funds, for instance, which are just machines for extracting wealth from ordinary folks. If the proponents were intellectually honest, they would agree that minimizing wealth disparity is the single most effective form of altruism, proven by everything from the failure of the poorhouses of England to the success of the social democracies of northern Europe.

EA is not a coherent philosophy for maximizing the quality of life of humanity; it is yet another complex rationalization for the accumulation of wealth and power without external accountability.

Kenneth Morena:

Excellent points, and well said.

EA's mandate can and should be far simpler, were it to be sincere: namely, to increase altruistic activities that take as their priority the reduction of current suffering, rather than the increase of pleasure (and especially the pleasure of future, potential life; looking at you, Long Termism).

Liam:

I could talk about the capture of some parts of EA by utility monsters, such as AGI or "Long Termism", which may stem from being too close to wealth and power, but I would rather consider the paths not taken.

One: Could wealthy donors renounce tax minimization and set fair transfer prices within the companies they control? Could they promote improved wages and working conditions within the supply chains they have influence over? Could they promote more lenient intellectual property rights and technology transfer, at both a corporate and state level?

Two: Is the whole problem addressed? If mosquito nets, for example, are the cheapest form of poverty alleviation, why are people lacking them? Is it that mosquito nets are unobtainable, or is it the result of other priorities that might see them used as fishing nets?

Three: What is the cost of flexibility? The medium- or long-term benefits of a program? Changes to the priorities of community, provincial, or national governments due to public pressure and interest? Intangibles such as the trust of communities toward NGOs? Are these considered before reallocating funds based on greater marginal benefit?

One strange tendency I saw in the movement was "thinking like a machine": relying on Bayesian inference and monetary valuations, and taking decisions as spot transactions. Another was conflating data with observation. Data is a symbolic representation of an observation and depends on sampling decisions taken during the conversion between observation and symbol.
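To picture the "spot transaction" style concretely, here is a minimal sketch in Python with hypothetical figures; note how much it leaves out:

```python
# Sketch of the "thinking like a machine" pattern described above:
# a one-shot expected-value comparison. All figures are hypothetical.

options = {
    "reallocate to overseas program": {"p_success": 0.90, "lives_if_success": 120},
    "keep funding local partner": {"p_success": 0.95, "lives_if_success": 80},
}

for name, o in options.items():
    expected = o["p_success"] * o["lives_if_success"]
    print(f"{name}: {expected:.1f} expected lives saved")

# The spot-transaction rule picks the bigger number and stops.
# Community trust, program continuity, and how "p_success" was
# sampled never enter the calculation; that omission is the critique.
choice = max(options, key=lambda n: options[n]["p_success"] * options[n]["lives_if_success"])
print("naive choice:", choice)
```

And "p_success" here is already a symbol standing in for some observation; the sampling decisions behind it are invisible by the time it reaches the model.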

The Ancient Geek:

Mosquito nets are supposed to be a form of life prolongation.

forumposter123@protonmail.com:

People were trying to use metrics to enhance charitable giving long before anyone thought to call themselves an effective altruist. Many charities spend a lot of time on such metrics.

So I don't think the idea of having metrics is new.

And I'm not sure EAs necessarily have a better bead on which metrics to use, or how to measure them, than other charities trying to do the same.

So what's new? What groundbreaking new idea in charity does EA represent? "We can save more lives in the third world" is not an earth-shattering new idea.

I'm not sure there is one.

So what is EA but dressed-up consequentialism? Why does it need a special name? Didn't the Gates Foundation already use metrics to steer African charity?

If we dig into EA, I think we would find the same old debates about assumptions we've always had. What's more effective for human flourishing: increasing the African population through mosquito nets, or giving R&D funding to the world's most capable individuals? Are political donations cost-effective? Are for-profit business ventures more useful to society than non-profit work? Which ones?

I'm going to be harsh for a minute, but "my son is trying to convince governments to give away more third-world aid to Africa, which has a miserable track record of doing anything useful long term" could easily be worse than him just getting a regular job that makes products and services people want.

I mean if he thinks otherwise that's fine, but I don't need a philosophy lecture on his rationalization for doing so (justified or not).
