For about five years, I was swept up into a cult-like ideology called “Rationalism”, via the writings of cult leader Eliezer Yudkowsky on his website Less Wrong. I’m ready to talk about it.

Tweet by Geoffrey Miller @primalpoly on 2017 October 22: “Canada decides that ignorance of the law makes it OK to inflict multiple felony sexual assaults on a woman.” Quote-reply by Eliezer Yudkowsky @ESYudkowsky on the same date: “#UnpopularOpinion: Unavoidable ignorance of a law, because your society has too many laws to learn, is a defense.”
Eliezer Yudkowsky, the greatest genius of our time (sarcastic).
Nota bene: this Yudkowsky quote is cribbed from something very similar that Ayn Rand said about the rule of law. As usual, he does not cite her or acknowledge her for it.

I don’t remember exactly when I first encountered Eliezer Yudkowsky’s writings about “Rationalism”, but I remember some of the context. I was a frequent reader of the tech news site Slashdot, and someone in the comments linked to one or more of Yudkowsky’s posts as a guest blogger on “Overcoming Bias”, a blog owned by Dr. Robin Hanson, PhD, associate professor of economics at George Mason University.

For some of you, the very fact that I just said “Robin Hanson” and/or “George Mason University” raised your hackles. Good. This is the correct reaction.

At the time I had fallen fairly deep into the libertarian rabbit-hole. In the years immediately prior to moving out to the San Francisco Bay Area in 2008 and starting my career in the tech industry, I had been frequenting websites devoted to Austrian School economics, including the forums of the Ludwig von Mises Institute. Very Objectivist-adjacent stuff. It was the timeframe of the 2007 crash, people were aching for some framework that explained it, and the Austrian School folks were using the crash as fantastic advertising, even though their explanation was more bogus than not.

But I was starting to get disillusioned with it: I am the sort of person who wants to understand everything about everything, and so I ask questions when something doesn’t make sense to me, on the frequently naïve assumption that I am the one who has missed something. And a lot of things didn’t quite add up, especially when I actually bought a copy of Human Action by Ludwig von Mises and found him basically saying that, uh, empirical science is impossible in economics? And everyone in the community was very against government intervention… right up until I pointed out that companies like Walmart and Amazon could not exist as-is without the US federal highway system, and that trucking is the primary driver of highway maintenance costs because road wear is proportional to weight per axle raised to the fourth power. Suddenly the libertarians were all crickets when I pointed out that their Captains of Industry were leeches, too.
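
If you want to see what that fourth-power relationship means in practice, here is a quick back-of-the-envelope sketch. The axle loads are round numbers I am assuming for illustration, not figures from any particular study.

```python
# Rough illustration of the "fourth power law" of road wear: damage scales
# with roughly the fourth power of axle load. Axle loads below are assumed
# round numbers, purely for illustration.

def relative_wear(axle_load_kg: float, reference_kg: float = 1000.0) -> float:
    """Road wear relative to a reference axle, assuming damage ~ load**4."""
    return (axle_load_kg / reference_kg) ** 4

car_axle = 1_000    # roughly a two-tonne car split across two axles
truck_axle = 8_000  # a heavily loaded truck axle (assumed)

print(relative_wear(car_axle))    # 1.0
print(relative_wear(truck_axle))  # 4096.0 -- one truck axle ~ thousands of car axles
```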

Anyways, in terms of intellectual food for my brain, this is what I had surrounded myself with.

(It didn’t help that my paternal grandmother was an Objectivist, a eugenicist, and a card-carrying Mensa member who bragged about her good genetics and high IQ at every opportunity, and whose likely reason for having six kids with a highly intelligent violent alcoholic was that she was doing her part to propagate the “master race”.)

“Overcoming Bias”, and Yudkowsky’s own blog “Less Wrong” that came later, were both devoted to identifying the ways in which the human brain contains unconscious biases. The idea was that, if we just tell people about all the ways that they are being irrational, then they will magically know how to compensate for those biases and behave in a perfectly “rational” manner. That one becomes unbiased and rational merely by choosing to be unbiased and rational, and by reading lots of random psychology and sociology studies with no effort toward understanding the field well enough to interpret them correctly.

The fact that an economist and his dropout autodidact protégé were talking about neuroscience and psychology did not set off any alarm bells, for me or for many others in Silicon Valley. What actually happened is that the Overcoming Bias / Less Wrong audience got better and better at spotting biases in other people, but more and more blind to their own, because that blind spot is itself one of the brain’s biases, and you can’t just switch your biases off: they’re how your brain actually works. The human brain is not “irrational” for no good reason: it is bounded by physical constraints of time, energy, and entropy, and it is literally impossible to be rational in real time, at all times, about all things.

Anyways. Even though there were warning signs, a few of which I noticed early, it all seemed like pretty useful stuff to me.

And then Yudkowsky started talking about Bayesian statistics. At first, I thought it was great. Sitting down and reading both Probability Theory: The Logic of Science by E. T. Jaynes and Causality by Judea Pearl on Yudkowsky’s recommendation was a lovely experience for me: I’m a huge graph-theory math nerd anyway, so probabilistic graphical models (PGMs) are right up my alley. At a certain point, though, I started to notice that I seemed to understand those books better than Yudkowsky did, especially when it came to the computer science end of things, such as how the time complexity of the algorithms involved makes it impossible to do Bayesian reasoning in real time. (In particular, he didn’t understand the difference between an online algorithm and an offline algorithm, and treated all algorithms as potential online algorithms no matter how complicated.)
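
To make that complexity point concrete: exact Bayesian updating by brute-force enumeration over n binary variables has to visit all 2^n joint assignments, which stops being feasible long before a model gets interesting. Here is a minimal sketch; the toy model of independent coins is my own assumption, purely for illustration.

```python
from itertools import product

def joint(assignment):
    """Toy joint distribution: independent coins with P(x_i = 1) = 0.3 (assumed)."""
    p = 1.0
    for x in assignment:
        p *= 0.3 if x == 1 else 0.7
    return p

def posterior(n, evidence):
    """P(X_0 = 1 | evidence) by brute-force enumeration over all 2**n assignments."""
    num = den = 0.0
    for assignment in product([0, 1], repeat=n):  # 2**n iterations
        if any(assignment[i] != v for i, v in evidence.items()):
            continue
        p = joint(assignment)
        den += p
        if assignment[0] == 1:
            num += p
    return num / den

print(posterior(12, {3: 1, 7: 0}))  # 4096 assignments: instant
# At n = 50 the same loop would need 2**50 (about 10**15) assignments.
# Exact Bayesian updating is not something a brain, or a laptop, does in real time.
```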

But I still felt like I was learning.

And then Yudkowsky started talking about quantum physics. I already knew about the double-slit experiment, but he did teach me a little about quantum mechanics that I didn’t know before. He sure seemed to have a soft spot for Richard Feynman, however, even on topics where Feynman was not the best expert he could have picked to learn from.

And then Yudkowsky started talking about philosophy, badmouthing some well-regarded philosophers while praising others. But even though he always cited sources when he trashed them, he never cited sources for the ideas he presented as true — an awful habit I unfortunately picked up from reading his work. It eventually turned out that his good ideas were not original and his original ideas were not good. Big chunks of his work were cribbed from Ayn Rand’s Objectivism, a complaint that he tried to head off at the pass by making it clear that he thought Ayn Rand had failed… but while he criticized the fact that Objectivism devolved into a cult of personality around Rand that operated on groupthink, all he could offer about himself and the Less Wrong community was the hollow assurance that “Less Wrong is different, you guys!”.

And then there was the fiction writing.

He wrote his stories as a way of showcasing specific ideas that he was trying to teach his audience in The Sequences. That’s not an inherently bad idea, because coming at a new idea from multiple angles is often an effective way to communicate it. But with a lot of his short stories, I found myself liking them on first read and getting more and more uncomfortable with them on re-reads, and it slowly dawned on me that the reason I liked the stories at all was that they were exposing me to new ideas, not that those ideas actually made sense with what I knew, or even with what Yudkowsky himself had said at previous points in The Sequences. In particular, the idea that “rape is no big deal” came up in quite a few of them, as did the ideas that women and men are so different that they should be treated as different species, and that genetic diversity leads to disagreement and conflict so we should get rid of it via eugenics and/or apartheid.

This, of course, eventually led to The Fanfic That Shall Not Be Named. His stated goal was to “raise the sanity waterline” by sneaking the ideas from The Sequences into a Harry Potter fanfic that would make people more amenable to what he called “Rationality”, rehabilitating Voldemort’s side by recasting them as people who were just really concerned about genetic purity but going about it in the wrong way.

Oh, and I haven’t even mentioned cryonics or artificial intelligence yet.

Yudkowsky is an “extropian” and “singularitarian”. Almost everything he did and does has ultimately been in service to those closely-related ideologies.

Extropians believe that entropy, the physically unavoidable loss of information to the environment as random noise, one of the deepest laws of physics, is the enemy of mankind. In particular, extropians are big supporters of cryonics, which is freezing your brain at the moment of death in the hopes that someone in the future will resurrect you into an immortal being living in a society where all needs are met.

(Which is a weird position for a cyber-libertarian. Some individual or organization in the future is going to build me a new body and wake me up in it, out of the sheer goodwill in their hearts? Not so they can profit from me by enslaving me?)

Singularitarians believe that the human level of technology will grow exponentially into the far future, and that the rate of growth will itself grow, until our technology level goes from doubling every 2 years, to doubling every month, to doubling every second, to doubling every instant, until we shoot off to a technology level of positive infinity.
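
For what it’s worth, the arithmetic behind that picture is just a doubling time that keeps shrinking: if each doubling is assumed to take half as long as the one before, infinitely many doublings fit into a finite span of time. A toy sketch, with starting numbers I made up:

```python
# Illustration of the "doubling times that themselves shrink" arithmetic.
# Starting values are assumed, purely for illustration.

level = 1.0          # some abstract "technology level"
elapsed = 0.0        # years
doubling_time = 2.0  # first doubling takes 2 years (assumed)

for _ in range(60):
    elapsed += doubling_time
    level *= 2
    doubling_time /= 2  # each doubling assumed twice as fast as the last

print(elapsed)  # -> approaches 4.0: infinitely many doublings pile up before year four
print(level)    # -> about 1.15e18 already, and still climbing every "instant"
```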

(Techno-libertarians love the idea that technology has well-defined levels, e.g. the Kardashev scale.)

This will be accomplished in practice by building God. God, usually named “Omega” by the crowd Yudkowsky hangs out in, will be an AI that is as far beyond us in intelligence as we are beyond ants, because intelligence is also a thing that comes in levels. God will be self-modifying, which means that God’s AI intelligence level will zoom off to infinity, allowing God to complete the Singularity in seconds and transform the world into His domain via an Earthly Rapture that spreads over the surface of our planet at the speed of light. He will then resurrect everyone who ever lived, by collecting all the tiny random bits of entropy and unwinding time so that He can know how our brains were wired at the moment of our deaths, and then He will create new bodies for us, either physical or in virtual reality, and we will live in Utopia forever after, with God as our benevolent dictator.

Yudkowsky thinks that there is a very large risk that we will create God by accident, in which case He might be Evil: of all the value systems we might accidentally build Him with, far more are ones we would come to see as “evil” than ones we would come to see as “good”. And his reason for posting those original blog posts on Overcoming Bias, and later founding the Less Wrong website, was to get a bunch of people together to figure out how to Build God so that He would be Good instead.

He wrote Harry Potter fanfiction so that more people would help him build God.

The opposite of how I found out about Yudkowsky: I know when I woke up from the cult, but I can’t pinpoint the circumstances. I was pretty much fully immersed in Less Wrong ideology from 2007-ish to around 2012, and I had completed most of the self-deprogramming by around 2014.

  • One of the big reasons for it was probably that my mental health had tanked, as increasing work stress caused my unhealed PTSD wounds to re-open. The unethical behavior of his fanfic protagonist was really starting to grate on me, as was the fact that Yudkowsky hadn’t realized that, contrary to the in-story denials, his Harry had been traumatized by emotional abuse at home. A lot of the behavior that Yudkowsky considered reasonable in his Harry, but that I found upsetting, really reflected the fact that Yudkowsky himself probably had untreated PTSD, too, and was writing too much of himself into his Harry.
  • There were the ongoing references to rape as being no big deal, which appeared yet again in the Harry Potter fic, and were fairly upsetting to me as someone who was raped as a child by an adult family member.
  • There was the fact that I had started reading Homestuck, which in many ways was a much healthier story about characters dealing with childhood trauma, neglect, and abuse.
  • There was the fact that I had started to realize that his non-profit organization, the Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), was a scam he was using to receive a paycheck for writing Less Wrong and his fanfic rather than doing any real AI research. (I wish I had realized that fact about $7,000 earlier.)
  • There was the fact that he kept saying things about what intelligent beings are capable of that were patently ridiculous, based if nothing else on my own education in computer science and on having a deeper understanding than he did of computability and of classical information theory.

Leaving Less Wrong was fairly easy for me, because my PTSD actually came in handy for once: due to social avoidance, I had never participated in any of the in-person Bay Area meet-ups, and so I had no personal relationships with Yudkowsky or with anyone else who frequented the site.

I got lucky. A lot of people like me, people who got swept up in it but were more vulnerable, ended up getting treated rather badly. In addition to a surprisingly deep undercurrent of homophobia and especially transphobia, the movement had connections to Jeffrey Epstein, several of the people involved in the Rationalist leadership have raped underage teenagers, and the non-profits at the center of the community screened potential employees and volunteers for whether or not they would be willing to defend the community against true accusations. (Source)

The worst part is just how influential the Rationalism / extropianism / singularitarianism ideology has been in the tech industry. It’s everywhere. A big chunk of the reason why Bitcoin, Ethereum, and NFTs got so popular? A lot of Bay Area venture capitalists are connected to Rationalism, often via PayPal alums like Peter Thiel and Elon Musk. The reason why Mark Zuckerberg was pushing virtual reality, to the point of rebranding his company “Meta”? He believed that brain uploading was going to happen within his lifetime, as an alternative path to immortality versus cryonics, and he wanted to have a world ready where he could upload himself at the death of his physical body, while keeping his wealth. The reason why OpenAI and chatbots are so big right now? Because CEO Sam Altman is a true believer who thinks he’s building Omega, the future Robot God-Emperor of Mankind.