Elon Musk's Charity Philosophy: Let People Starve and, Instead, Donate to Go to Mars.
How the artificially elevated "Effective Altruism (EA) Philosophy" justifies anything billionaire philanthropists want to do.
In 2021, Elon Musk let 42 million people starve because that cause wasn’t deemed “effective” enough by Musk.
Instead, he donated the requested $6 billion to himself and now directs the money towards space exploration and colonisation, as well as other EA-approved causes, because he believes that we “waste” trillions of human lives for every second we delay colonising space.
Welcome to the bizarre world of “Effective Altruism (EA)”.
If you thought that “women can be men” is weird, fasten your seatbelt.
Elon Musk, the richest and most significant EA supporter on the planet, applied EA philosophy in real life and at scale:
In 2021, the director of the United Nations World Food Program mentioned Musk’s wealth in an interview, calling on him and fellow billionaire Jeff Bezos to donate US$6 billion. Musk’s net worth is currently estimated to be $180 billion.
The CEO of Tesla, SpaceX and Twitter tweeted that he would donate the money if the U.N. could provide proof that that sum would end world hunger. The head of the World Food Program clarified that $6 billion would not solve the problem entirely, but save an estimated 42 million people from starvation, and provided the organization’s plan.
Musk did not, the public record suggests, donate to the World Food Program, but he did soon give a similar amount to his own foundation – a move some critics dismissed as a tax dodge, […] Source
Tech-billionaires have a problem.
They constantly get hassled by charities, and they don’t want to look like the greedy, heartless, half-autistic nerds they are. They need to pretend they care about the rest of the world when they don’t. They need to pretend they aren’t attached to their riches and would happily give their wealth away to charity.
And EA is the perfect philosophy to justify anything they want to do and mask it as “effective altruism” instead.
Tech billionaires, often secretly, fund a wide range of non-transparent NGOs that run so-called EA “research” projects and “surveys”. These make outlandish claims: that “AI is sentient” and is believed to be so by up to 90 million Americans, that digital minds “suffer” and need a bill of rights, that the welfare of carp, tilapia and shrimp is a pressing cause, and that we need “adversarial robustness research.”
They also still support the evergreen causes of “global health”, vaccine development and “biosecurity”, which have served humanity so well over the past five years. Or not?
Just giving food, clean water, dignified housing and basic medical care is too old-fashioned and ineffective for our new class of tech billionaires.
Nick Bostrom, nicknamed “the Swedish superbrain”.
Nick Bostrom, philosopher and another Oxford-trained EA intellectual, is all-in on everything that suffers - from animals to digital minds to future people - except real, living people. He writes bizarre transhuman essays, like the “Astronomical Waste” paper:
He [Bostrom] is also a fan of space expansion, claiming in his Astronomical Waste paper, retweeted by Musk, that we waste 100 trillion human lives for each second that we do not colonize space. Source
This “futuristic charity”, called “Longtermism”, has been made popular by EA’s top philosopher William MacAskill in his 2022 book.
His new book, What We Owe the Future, argues we should expand the moral circle even further: if we care about people thousands of miles away, we should care about people thousands or even millions of years in the future.
The book, which has been praised by the likes of Stephen Fry and Elon Musk, makes the case for “longtermism,” the view that positively influencing the long-term future—not just this generation or the next, but the potentially trillions of people still to come—is a key moral priority of our time.
William MacAskill
I don’t think we owe the future anything. That’s just intellectual guilt-inducing garbage. There is no way of knowing the future, so how can we owe anything to it?
Life only happens in the here and now. Everything else, especially the past and the future, is purely mental, transient and not significant for true happiness.
Everything affects everything, and the whole universe, and whatever happened since the beginning of time, contributes to the next moment. The universe is incredibly complex and unpredictable.
Only people with a technocratic God-complex would claim that they can assess how current actions will affect human flourishing in the future, especially several generations into the future. This is madness and propaganda designed to advance the transhuman cause and personal pet programs rather than to solve current problems with wisdom and empathy. It is rich people’s escapism to mask their greed and stinginess.
In 2021 alone, almost $300 million was allocated to longtermism. And this doesn’t include Musk’s secret investments.
The vast majority of EA’s newfound wealth comes from two tech billionaires: Dustin Moskovitz and Sam Bankman-Fried. Open Philanthropy, which is primarily funded by the Facebook and Asana co-founder Moskovitz and his wife Cari Tuna, distributed more than $440 million in grants in 2021, a third of which went to global health and development, 28% to longtermist interventions (such as biosecurity and EA community growth), 18% to animal welfare, and the rest to research in areas such as economic policy and criminal-justice reform. Four months after launching in February this year, the FTX Future Fund had committed more than $130 million in grants, mostly to longtermist causes. Source
How much money Musk himself poured into longtermism is unknown.
What we do know, however, is how much Musk did to slingshot EA into the public consciousness.
Almost four months before its official release, Musk promoted MacAskill’s new book to his 100 million followers on Twitter and outed himself as an effective altruism ideologue.
This tweet in April 2022 launched EA into the world’s consciousness and established a previously weird fringe philosophy of a few thousand people as the new transhuman technocratic dogma. Don’t care about what happens now, care about the future.
MacAskill must have given him the manuscript, and I wonder how much Musk influenced this book, infiltrating it with his ideas. Why else would the Scottish moral philosopher give it to Musk long before its official release?
After going almost unnoticed for 13 years, EA suddenly was everywhere.
In the weeks after Musk’s tweet, many high-profile glowing articles in significant magazines appeared (here, here and here).
And the new EA narrative was fully established and approved by the scientific pseudo-intelligentsia elite when podcaster Sam Harris offered MacAskill his huge platform in August 2022. The other EA philosopher, Peter Singer, was invited in 2023, and MacAskill again in 2024.
However, this was just the endgame of what Musk's buddy Joe Rogan started in 2017. Back then, EA was a tiny, eccentric, hardcore little group of emotionally challenged radical vegan do-gooders, formed in 2009, that had reported hardly any growth since.
In the 2015 book Strangers Drowning, journalist Larissa MacFarquhar describes a frustrated and isolated Julia Wise [who is the Centre for Effective Altruism’s longest-serving employee], then in her 20s, as believing she was not entitled to care more for herself than for others. In one memorable episode, her boyfriend buys her a $4 candy apple and she weeps bitterly, feeling immense guilt that she might have deprived a child of a lifesaving anti-malarial bed net.
[…]
“Young adults want to be hardcore about something, and I decided to be hardcore about sacrifice,” she tells me, with a soft laugh.
And this is MacAskill talking:
On top of that, some 80 billion land animals are killed for food every year. “That’s just a f-cked up place to be,” he says.
As he sees it, pure moral philosophy leads to the conclusion that the correct thing—at least for someone with his privileges in a wealthy country—is to sacrifice everything you could for the greater good.
“The question is,” he says, “how do I manage my life such that even though I believe that at the fundamental level, I don’t go completely insane?”
There is natural, normal empathy, and then there is this pathological empathy, a kind of self-sacrificing ego grandiosity that feels personally responsible for all the suffering in the world.
While there is great value in realising all the suffering in the world, it needs to be combined with the spiritual wisdom that suffering is unavoidable and ultimately mind-based. This realisation should lead us on a journey to address and fully realise the root cause of all suffering: desire and fear.
From there, we will automatically and spontaneously mitigate suffering wherever we practically can, in the here and now, without getting overwhelmed by it.
That’s what many wise and compassionate mystics have taught us over the millennia.
Many people don’t realise that trying to be a super-do-gooder and great sacrificer is the same ego trip as being an egocentric, selfish person.
I recommend he works on himself and gets his mind and emotions in balance before he unleashes his crazy, heartless and frankly stupid philosophy on humanity and does a lot of harm with it.
If his doomish mindset affects billions of people, the infamous “by 2030 you will own nothing and be happy” might come true voluntarily. This is just another example of how warfare has shifted to conquering our minds with subtle, guilt-inducing propaganda. No force is needed.
While I am all for better treatment for farm animals, this can be achieved through traditional means. I don’t have any problems eating meat. Humans and many other animals always did when it was available. Life feeds on life. This is natural.
And make no mistake: positively named projects like “global health”, “animal welfare”, “AI Safety”, “Biosecurity”, etc. are just code words that make it look like they serve humanity, when the deeper plan is to depopulate through them.
The EA philosophy was always controversial, even amongst other philosophers, let alone the common-sense public.
Wikipedia:
The drowning child analogy in Singer's essay provoked philosophical debate. In response to a version of Singer's drowning child analogy,[48] philosopher Kwame Anthony Appiah in 2006 asked whether the most effective action of a man in an expensive suit, confronted with a drowning child, would not be to save the child and ruin his suit—but rather, sell the suit and donate the proceeds to charity.[49][50] Appiah believed that he "should save the drowning child and ruin his suit".[49]
In a 2015 debate, when presented with a similar scenario of either saving a child from a burning building or saving a Picasso painting to sell and donate the proceeds to charity, MacAskill responded that the effective altruist should save and sell the Picasso.[51] Psychologist Alan Jern called MacAskill's choice "unnatural, even distasteful, to many people"
Imagine the horrors this mindset could justify when scaled up significantly by the money of the richest and most powerful people in the world.
As in, trillions of future lives in 500 years are worth my support, but 42 million starving people are not.
No wonder such nonsense didn’t get any traction with the public at all. But since when did that ever stop the God-complex tech billionaires who have the power and means to “shape” narratives and culture and the beliefs and attitudes of the people?
The three main EA books by Peter Singer and William MacAskill, published between 2009 and 2015, ranked low on Amazon, gathered few verified reviews per year and were hardly read. I couldn’t find any sales numbers, and I am sure I would have if they had sold well.
After the release of the book [in 2009], Peter Singer founded the organization The Life You Can Save.[11] […]
The organization also encourages people to publicly pledge a percentage of their income to highly effective aid organizations and gives recommendations for about a dozen of such charities. In 2014 the number of people who had pledged publicly reached 17,000. Source
That’s 17,000 in five years, a bit more than 3,000 worldwide per year. Not exactly a success story.
In an interview on 14 September 2022, MacAskill
estimates that the inner circle of EA supporters now totals about 10,000, up from 100 in 2009.
A growth from 100 to 10,000 EA followers in 13 years is hardly spectacular and doesn’t hint at any grassroots movement. EA followers made up approximately 0.000125% of the world’s population in September 2022.
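For what it’s worth, these numbers are easy to sanity-check. Here is a minimal back-of-the-envelope sketch in Python; the world population figure of roughly 8 billion in September 2022 is my assumption, not something stated in the quoted sources:

```python
# Back-of-the-envelope check of the growth figures quoted above.
# Assumption (mine, not from the quoted sources): world population of ~8 billion in September 2022.

pledgers_by_2014 = 17_000             # public pledges to The Life You Can Save reported by 2014
years_since_2009 = 5                  # 2009 to 2014
print(pledgers_by_2014 / years_since_2009)    # 3400.0 -> "a bit more than 3,000 per year"

ea_inner_circle_2022 = 10_000         # MacAskill's September 2022 estimate
world_population_2022 = 8_000_000_000
share_percent = ea_inner_circle_2022 / world_population_2022 * 100
print(f"{share_percent:.6f}%")        # 0.000125% of the world's population
```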
But they feel entitled to tell the world which longtermist charity causes it should invest in.
Keep in mind, they reached 10,000 in 2022,
After MacAskill was mysteriously invited to Joe Rogan in 2017 (Episode #930)
After Peter Singer was also invited mysteriously to Joe Rogan in 2018 (Episode #1230)
After MacAskill was mysteriously invited to a TED talk in 2018
After two tech billionaires mysteriously poured around $600 million into EA in 2021 alone, quadrupling its funding, creating 200 EA chapters around the world and flying 6,000 EA activists to numerous conferences
After MacAskill was promoted in April 2022 to Elon Musk’s 100 million Twitter followers
After many glowing and endorsing articles were written about EA in the biggest mainstream magazines between May and September 2022, and
After MacAskill was invited to the Sam Harris podcast
In 2017, EA was a tiny group of hardcore, pathological sufferers. To compensate, they created a heartless, rational, half-autistic transhuman philosophy based on three unsuccessful, hardly read books.
And then Joe Rogan invites each of the two thought leaders on the biggest podcast on earth. What are the odds?
Elon Musk met MacAskill in San Francisco in 2015.
In 2017, Igor Kurganov, a professional poker player, founded the EA NGO “Raising for Effective Giving.”
Musk employed Igor Kurganov to help him with the Musk Foundation. It is unclear exactly when, but it was before 2022.
Kurganov’s girlfriend, also a professional poker player, used to be MacAskill’s flatmate.
Elon Musk is a good friend of Joe Rogan.
Joe Rogan invites a virtually unknown moral philosopher with a huge guilt complex who had written one virtually unread book two years earlier.
Connect the dots.
The Joe Rogan Experience kicks off the making of another Greta Thunberg-like character.
Reading about his early life, it seems MacAskill is genuinely and authentically into all this guilt-ridden garbage.
It appears he is just another useful idiot, used as a believable and attractive poster boy for a fake grassroots movement until his purpose is served and he gets disposed of by Musk. Pretty much like Bill Gates, and pretty much like Greta, who disappeared from the news cycle when her purpose was fulfilled.
Let that sink in:
After being funded with probably a billion dollars from 2017 to 2022 and exposed to hundreds of millions of people through free media and podcast coverage, all EA managed was 10,000 supporters?
That tells you everything about how “catchy” their crazy philosophy is to most people.
But since when did this ever stop the superrich elites with a God-complex who think that money and exposure can birth any narrative that suits them?
While Elon Musk sits like a spider in the middle of this EA publicity launch and very likely had a huge influence on the new longtermism addition to the EA philosophy in 2022, his own money trail to EA is well hidden.
After refusing to donate to save 42 million starving people, he instead donated the requested $6 billion to himself.
Despite the fact that Musk gave the money to himself, saving two million in taxes in the process, the captured mainstream media uncritically declared him one of the largest philanthropists of 2021, when in reality nobody knows where most of the money was spent.
Good luck consulting the barebones Musk Foundation webpage for any clues. It is a complete joke. All you get, I repeat, all you get, is this:
This is it. The whole webpage.
No links, no tabs, no contact details. No way to apply for grants. Literally, that screen above.
According to the biographer Walter Isaacson, Musk has little interest in philanthropy. He believes that he can do more for humanity by leaving his money in his companies and pursuing the goals of sustainable energy, space exploration and AI safety with them.
And no real obligation to tell anyone what he really funds with his money.
In 2021 and 2022, the Musk Foundation awarded less than 5% of its assets in donations, after its assets grew to several billion dollars.
So how much Musk gave to EA is unknown.
We only know that other big tech billionaires are giving generously to EA.
This hints at a wider conspiracy and broader support for adding “Future Panic”, “AI Panic”, “End-of-the-world Panic” and “we have to escape into space” doomsday panic to the “Climate Panic”.
The message is clear - give away everything, stop doing anything, eat nothing natural and substantial, be completely guilt-ridden, worry more about farm animals than world hunger, worry more about future unborn suffering than existing suffering, sacrifice your life, well-being and enjoyment of life for the lives of farm animals, future generations and the wellbeing of “digital minds” and… Tilapia fish.
The war over our minds is in full swing, especially over the less critical and more penetrable minds of the younger generation.
What really stands out with EA is that the most known big donors are nerdy, billionaire tech bros or professional poker players.
I wonder what they have in common?
Bluffing maybe?
So why do these nerdy tech bros, also disproportionately linked to AI bots, love effective altruism so much?
Two main reasons.
Firstly, it appeals to their nerdy, heartless nature.
Traditionally, charities are driven by heart emotions like compassion and empathy.
But EA doesn’t do heartfelt compassion and empathy. It strictly does “reason and evidence.”
Benjamin Soskis, senior research associate in the Center on Nonprofits and Philanthropy at the Urban Institute, agrees:
[But] a common concern is that the movement’s [EA] rational assessment of causes removes emotion from giving—that it has an “unfeeling, robotic, utilitarian calculus […]
No wonder it specifically attracts highly intellectual philosophers and technocratic nerds who live in their heads 24/7 and are often incapable of feeling real compassion and empathy. Quantifying “helping” and putting it into a mathematical formula works for them.
While the EA mantra of using solely evidence and reason to choose a charity sounds reasonable in theory, it produces extremely grotesque and unnatural solutions when its lofty, purely logical philosophy is applied to practical cases, as pointed out above.
All this comes down to mind vs. heart, logic vs. compassion, calculated considerations vs spontaneous heartfelt action, and rational technocratic materialism vs spirituality.
A key component of effective altruism is "cause prioritization". Cause prioritization is based on the principle of cause neutrality, the idea that resources should be distributed to causes based on what will do the most good […] Wikipedia
This is insane.
What is “Good” is a relative and individual value. How can this ever be objectively and mathematically quantified?
What does “the most good” mean? And good for whom?
The donor? The technocrats? The animals? Digital minds? The planet? The future?
It turns out, without any rhyme or reason, that most EA money goes to “the most good” of biosecurity (global health), farmed-animal welfare and AI safety and, since Musk took charge, longtermism.
Not to starving children or homeless people, or global poverty, or countless other obvious causes of suffering. Like the drowning girl, these people don’t suffer enough to make the EA cut.
In other words, the EA NGOs that advise where to invest charity money play God by deciding who should live and who should die, and they brainwash the world with completely unproven concepts of who suffers the most.
The second reason these tech billionaires love EA is its flexibility.
EA is tremendously flexible and adaptable, and can justify just about any transhuman pet project, as the Astronomical Waste paper and its endorsement by Musk prove.
If your hobby and passion is to colonise space, which isn’t cheap, EA will find you the right justification, and eventually even public money, to turn it into a transhuman project and philanthropic cause, and nobody will know what kind of research you spend your billions on.
If EA considers the welfare of shrimp and Tilapia fish more effective than 42 million starving people, Tilapia fish it is.
And if you wondered, Tilapia are mainly freshwater fish native to Africa and the Middle East, inhabiting shallow streams, ponds, rivers, and lakes, and less commonly found living in brackish water. (Wikipedia)
I care about the welfare of all living beings.
I am mindful not to kill thoughtlessly and unnecessarily.
But other humans that live now come first for me.
This is in line with all other species. No species would put another species first. No frogs, no elephants and no Tilapia fish.
There is definitely room for improvement in how we treat other species, especially farmed animals, just as there is definitely room for improvement in how we treat our environment. But propaganda-induced panic solutions won’t achieve that. They are scams that enrich the superrich.
EA is a charade.
EA is a weaponization of genuine concerns about other species and about how we create a better, more humane world.
The same weaponization that happened with the genuine and needed concerns about our environment.
EA is an artificially elevated madness to justify the transhuman madness.
EA is another scam by the elites to get richer and more powerful.
Thank you for reading.
If you like my writing, please support it.
You can either make a one-off donation of your choice at
or take up a monthly (US$5) or yearly (US$30) paid subscription below.
Thank you.