Against Utilitarianism
Since it was first formalized in the mid-1800s, Utilitarianism has dominated discussions of human behavior in philosophical ethics and in economics. In particular, it acts as the foundation of “rationality” that defines Homo economicus, the idealized human being of economics who serves as the benchmark against which actual human behaviors are compared, because it is (purported to be) the only ethical framework which is “rational”. Specifically, Utilitarianism is “VNM-rational”, which means that it is the best possible system satisfying the axioms of rational behavior defined by the Von Neumann–Morgenstern utility theorem.
In this article, I set out to demonstrate that Utilitarianism does not actually make sense in relation to human values and is not actually a description of rational behavior, and I formally propose an alternative consequentialist theory that escapes the VNM theorem’s erroneous axioms.
NB: This post benefits from an understanding of as many of the following fields as you can manage: philosophy (emphasis on normative ethics), economics (emphasis on Game Theory), and computer science. Unfortunately, these three fields do not come into close contact very often, which is (I believe) why the ideas I’m introducing in this post are arriving about 100 years later than they should’ve.
It takes crossdisciplinary understanding to truly make sense of the world.
Remember: the idea of sorting a dictionary in “alphabetical order” — by comparing the first letters of a pair of words, then the second letters, and so on — had to be invented, in 1604 by Robert Cawdrey, and then taught to people. It was originally a type of unique knowledge specific to a single discipline of study, not known outside of its esoteric specialty, but you can hardly imagine an adult today who doesn’t know how to sort a list of words, or how to look up a word in a dictionary, now can you?
This post is part 2 of an ongoing series.

Previous post: Human Morality and AI

Series: Building AGI
What is Utilitarianism?
Utilitarianism is a family of normative ethical theories that prescribe actions that maximize happiness and wellbeing for all affected individuals.
Utilitarianism developed out of the proto-economist ethical philosophers of England and Scotland during the late 18th and early-to-mid 19th centuries, most famously Bentham and Mill. It was closely related to the Enlightenment movement, which was an enormous influence on the founding ideologies of the United States of America and on that country’s current system of government, and relied strongly on the concept of eudaimonia (“good/true spirit”) from Classical Greek philosophy: the idea that there is a single thing called “goodness” and that all you have to do is choose it.
To use modern vocabulary and concepts, Utilitarianism holds that there is some quantity, called utility, which represents the sum of all moral good produced by the outcome of an action, and that these utility quantities can be ordered relative to each other, so that there is a best possible outcome.
(Or technically a set of best possible outcomes, if some of the outcomes are equal under the utility relation.)
Mathematically, this means we can assign to each outcome a member of ℝ (the set of real numbers) and create a function out of it, named U, so that we get either U(x) < U(y), U(x) = U(y), or U(x) > U(y) for each and every pair of possible outcomes x and y.
This comes with a caveat, however: since we’ve only defined U in terms of the order of the resulting numbers, U itself is subject to two isomorphisms: uniform translation (adding a constant) and uniform positive scaling (multiplying by a positive constant). Basically, if someone hands you a card with a formula for U(x) written on it, and another card with a formula for a*U(x)+b written on it, you can’t tell which one is which. This means that absolute values of U are undefined, and it also means that the scale of U is similarly undefined.
(The VNM theorem’s Wikipedia page calls out this often-ignored limitation.)
That’s worth reiterating: it’s total nonsense to ask if U(x) ≥ 0, or to say that U(x) = U(y) + 1. It’s like trying to ask how many kilograms of the color blue can be extracted from the abstract concept of “love”. It’s not that you can just pick an arbitrary answer to those questions and see if it works out. It’s that, if you try, you get paradoxes — you can prove that some things are both true and false at the same time — because U(x) and a*U(x)+b are exactly the same thing. If one thing is isomorphic to another, it isn’t just “hard to tell them apart”; it means there is only one thing but there are multiple, equivalent ways to write it down, and you can use whichever one you like and even change your mind in the middle of doing so.
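The order-only nature of U can be made concrete with a quick sketch (the outcomes and numbers are invented, and a = 2.5, b = 40 are arbitrary):

```python
# Two "different" utility functions that are secretly the same thing:
# U2 is a positive affine transform of U1 (a = 2.5, b = 40).
U1 = {"picnic": 3.0, "flood": -7.0, "nap": 1.5}
U2 = {k: 2.5 * v + 40.0 for k, v in U1.items()}

# Both induce exactly the same ordering over outcomes, which is the
# only question our definition of U is allowed to answer...
print(sorted(U1, key=U1.get) == sorted(U2, key=U2.get))  # True

# ...while "absolute" questions like "is U(x) >= 0?" get contradictory
# answers depending on which card you happened to be handed.
print(U1["flood"] >= 0, U2["flood"] >= 0)  # False True
```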
A lot of this blog post is going to be about those two isomorphisms, and why they mean that most of what gets called “Utilitarianism” is mathematical nonsense, and what remains is definitionally inapplicable to humans.
Beyond the preceding, Utilitarianism also holds that there are no additional moral considerations for any action except those of the action’s outcome. This assumption is called “consequentialism”, and I agree with it, so I won’t delve any further into it. My proposed replacement for Utility, to be introduced shortly, is also a consequentialist theory.
In philosophical ethics, this numerical quantification of “utility” is most
often defined to be directly proportional to the total human happiness in a
given configuration of the world or summed up along a given timeline, with
higher utility indicating higher total human happiness. The most rational
policy of action, then, is to take those actions which lead to the outcomes
with the highest possible utilities: that is, to label outcomes by the actions
that lead to them, to cast utility as a true mathematical function U(action)
with a domain consisting of “the set of all possible actions” and a codomain
of ℝ, and then to maximize that function using the methods of calculus.
In economics, the numerical quantification of “utility” is traditionally defined in a slightly different way: “utility” is held to be directly proportional to the happiness of the individual making the decision in a given outcome, while taking into account that the individual’s personal happiness may depend in some way on the happiness of others. The formulation is otherwise identical, with each individual person casting their own utility as U_person(action) and choosing the action or sequence of actions which leads to the outcomes with maximum U_person(action).
What are some existing critiques of Utilitarianism?
The formulation of Utilitarianism that is used in philosophical ethics has some nightmarish thought experiments that make it clear that it breaks down under certain limits. These have often been used by critics of Utilitarianism (primarily deontologists) as supposed examples of why consequentialism itself is monstrous.
Suppose, as seems to be the case, that there is an upper limit on how happy a person can be in a given moment. (We will show shortly that it gets worse for Utilitarianism if the contrary is true.)
Furthermore, suppose that each person in the world has a happiness function U_person(action) — with alternative spelling U(action|person), pronounced “utility of action given person” — that captures that person’s happiness over a worldline as a real number. (If the person is unsure as to which worldline will result from an action, the utilities of the worldlines which can result from the action are each weighted by the probability that said worldline will actually come into existence due to the action, and then summed.)
Then the most obvious formulation of utility follows as:
U_sum(x) = k · Σ [p ∈ all living people] U(x|p)
That is, we might then define the total happiness of all people in the worldline stemming from an action to be the arithmetic sum of the happiness of each individual person in that worldline.
Under U_sum, the optimal world is one where there are infinitely many people… even if those people are living lives with barely any happiness in them, so long as the expected value of U(world|person) is any arbitrarily small positive real value. According to U_sum, overpopulation to the edge of universal starvation is a good thing, so long as U_sum > 0.
But wait! We said earlier that U(x) and a*U(x)+b are the same thing (“isomorphic”)! This demonstrates that U_sum has violated our requirement that utility functions have translation isomorphism, since we don’t have any meaningful way to say what the “zero point” of the utility function is; it can only be used for relative ordering.
Well, the other obvious formulation of utility is:
U_avg(x) = k · (1 / # living people) · Σ [p ∈ all living people] U(x|p)
That is, we might instead define total happiness to be the arithmetic mean of each individual’s happiness.
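The divergence between the two aggregations is easy to see with a toy computation (the per-person happiness values are invented, and k = 1):

```python
# Per-person happiness values for two candidate worlds (invented numbers).
thriving = [9.0, 8.5, 9.5]   # a few very happy people
crowded = [0.1] * 1000       # many people with lives barely worth living

def u_sum(world, k=1.0):
    # U_sum(x) = k * sum of U(x|p) over living people
    return k * sum(world)

def u_avg(world, k=1.0):
    # U_avg(x) = k * mean of U(x|p) over living people
    return k * sum(world) / len(world)

print(u_sum(crowded) > u_sum(thriving))  # True: U_sum rewards overpopulation
print(u_avg(thriving) > u_avg(crowded))  # True: U_avg rewards culling the barely-happy
```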
Under U_avg, it follows that we should kill those people who bring the average happiness down, or otherwise remove them from consideration as part of the world such that their utility no longer contributes to the function output, so long as we can do so without any additional suffering. It also follows that if we obtain the ability to modify people to increase their happiness artificially, e.g. by giving them drugs or by performing surgery on their brains (a practice called “wireheading”), then we should do so until all people are maximally happy at every moment of their lives.
But all we’ve really done is normalize U_sum so that the utility numbers fall into the range [−H, +H] for maximum happiness-per-person +H and minimum happiness-per-person −H; we didn’t actually get rid of that pesky violation of translation isomorphism, because zero is still special.
The economic formulation suffers both of the above problems, and more besides, depending on how each individual defines their personal utility function U_person. Some individuals might gain happiness from seeing another person’s happiness go up or down, and the economic formulation provides no particular explanation for why the resulting feedback loop, which turns utility into a nonlinear differential equation, doesn’t invalidate all of our conclusions about Utilitarianism, since those conclusions presuppose the linearity of utility. (This post doesn’t cover that scenario, but I’m planning to address agents with preferences about the happiness of other agents in my third post.)
All three formulations also admit utility monsters: agents who define their U_person with a different scaling factor, such that their own happiness is objectively more important than the happiness of others. Thus we see that both isomorphisms, uniform translation and uniform scaling, are being violated by our attempts to aggregate utility between people, despite those isomorphisms being a direct consequence of how weak our starting assumptions were.
Some authors, motivated by the economic formulation and attempting to fix the “kill ’em all” conclusion of U_avg, have also attempted to include dead people in the calculation, but that leads to additional violations of common-sense morality and other absurdities. We will explore this in greater depth momentarily.
But there’s something else that’s wrong with Utilitarianism
The deepest flaw of Utilitarianism is this: it presupposes that there is some function U_person from actions to real numbers in the first place. Or rather, that there is one such function that can be privileged above all others as capturing a person’s “utility” in some absolute sense.
A mathematical function is, at its core, a mapping from elements of an input set (the “domain”) to elements of an output set (the “codomain”). Traditionally, the input and output sets are both ℝ, the set of real numbers, but you can do calculus on any function that is continuous, and we would like to do calculus, because it’s very hard to figure out how to maximize or minimize a function if calculus is off-limits.
To be “continuous”, you need the following properties for your function:

- The function must be total: the function must be well-defined for each and every possible input x, such that there is exactly one output f(x).
- Both the input and output sets must come with an attached distance measurement, called a metric. A set with a defined metric is a “metric space”. There are some rules that the metric must obey, most importantly the triangle inequality (A + B ≥ C for all distances A, B, C), but the metric does not need to be Euclidean.
- The distance between two output points f(x) and f(x + Δx) must approach 0 as Δx approaches 0. This implies that, for any two set elements, there must be an infinite number of set elements between them, and this must be true of both the domain x and the codomain f(x). We’ll call any metric space with an infinite number of elements between any two elements a “continuous space”.
With regards to U_person, we can safely assume that the set of all actions is a continuous space, because it doesn’t help Utilitarianism to assume otherwise. And we know that ℝ is a continuous space, because the real numbers are the simplest possible continuous metric space.
However, that is not enough to make U_person a continuous function! U_person is not total, because there are discontinuities in it.

In particular, some outcome inputs do not define a value for U_person because the person under consideration is dead. It does not make sense to ask U_Dave — “How happy is Dave?” — in outcomes where Dave is dead. If Dave is dead, then Dave is neither happy nor unhappy, because his corpse is not an agent and therefore is not capable of happiness or unhappiness. We might be happy or sad about Future Dave’s death, and Present Dave might be happy or sad about Future Dave’s death, but Future Dave himself does not care either way, because corpses do not care about anything. Only agents care, but corpses have no agency.
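The totality failure can be sketched as a partial function (Dave and the outcome encoding here are, of course, hypothetical):

```python
from typing import Optional

# A hypothetical happiness function for Dave. It is partial, not total:
# in outcomes where Dave is dead it has no value at all (not zero, not
# negative infinity), so it cannot serve as a total function into ℝ.
def u_dave(outcome: dict) -> Optional[float]:
    if not outcome["dave_alive"]:
        return None  # undefined: a corpse is not an agent
    return outcome["dave_happiness"]

print(u_dave({"dave_alive": True, "dave_happiness": 4.2}))   # 4.2
print(u_dave({"dave_alive": False, "dave_happiness": 0.0}))  # None
```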
Sometimes you can fix a discontinuous function to make a continuous function, but most such “fixes” break analytic properties like smoothness that are important for performing calculus.
(Remember: our long-term goal is to maximize U_person or U using calculus! We care about smoothness and other such analytic properties, because otherwise we will have a very hard time figuring out which actions to recommend. If we fix U in a way that breaks its analytic properties, we might have a working set of equations, but it won’t be very useful in the real world because we won’t be able to ask it questions about what to do next.)
Sometimes, the discontinuity exists at only a single point, called a “singularity”. A classic example of such a function is sinc(x) = sin(x) ÷ x, which is continuous and smooth everywhere except at x = 0, where it is undefined. But the limit as you approach x = 0 is 1, no matter whether you approach the singularity from above or from below, so simply declaring by fiat that:

sinc(0) = 1

is sufficient to create a sinc-like function that is both continuous and smooth.
U_person is not quite like that, however. Adjacent to each outcome where Dave is dead are an infinity of other outcomes where Dave is also dead; Dave’s death is a cutoff line — or plane, or hyperplane, depending on the structure of the outcome space — beyond which U_Dave is undefined no matter how far into the future you integrate along the worldline. Depending on the series of events that leads to any one particular death, Dave might assign to that cutoff a limit of positive infinity, negative infinity, some finite real value, or no value at all, because (again!) our definition of “utility” was so minimal that we agreed to never ask questions that could distinguish U(x) from a*U(x)+b (our isomorphism requirements). Our utility functions are only identified by the order in which they place outcomes, not by the numerical value of any one outcome.
The fact that we are being forced to assign a numerical value at all means that our formalism has already broken down, possibly beyond repair. There are now situations where Utilitarianism can no longer uniquely order two arbitrary actions, because there is no one unique way to remove the discontinuity from the U_Dave function and repair the formalism.
Fundamentally, what’s happening can be traced back to the very origins of Utilitarianism:
Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do.
— Jeremy Bentham, “An Introduction to the Principles of Morals and Legislation”
Why should we expect that a single continuous mathematical function U should be able to capture both pain and pleasure, weighing them against each other and measuring them on the same scale, and yet somehow not depending on which arbitrary scale we use to quantify either one?
And what if there are two or more types of pleasure, or two or more types of pain, such that no amount of Pleasure1 is worth even a tiny amount of Pleasure2, or avoiding any amount of Pain1 is worth enduring any amount of Pain2?
A brief detour into computer science
Computer science is a branch of mathematics that deals with what is computable. That is, computer science is about discovering which questions are exactly solvable by mathematicians using pen and paper. The concept of computability actually predates the existence of the first working computers: it was described independently by Alonzo Church and Alan Turing, using different but equivalent constructions, in the mid-1930s, roughly a decade before the construction of ENIAC, one of the first general-purpose electronic computers.
Computers are a practical result of computer science, not the motivation of it.
It is intimately related to the mathematical philosophy known as Formalism, David Hilbert’s pet project, whose ambitions were carried to their peak by Bertrand Russell and Alfred North Whitehead in Principia Mathematica. That project ultimately ended in failure when Kurt Gödel proved, in a famous pair of theorems, that any sufficiently powerful consistent formal system contains truths that it can never prove. A duality was later established between the proving of theorems and the computation of algorithms, firmly connecting Gödel’s work to that of Church and Turing in both directions.
Algorithms are theorems, and theorems are algorithms.
In the century or so since its origins, computer science has taken the concept of a function into realms that were previously considered hopeless for mathematicians to understand, giving mathematicians a language for describing very sophisticated functions that are neither continuous nor smooth, yet are straightforward to study using discrete mathematics, a field that was mostly ignored prior to the 20th century except for the sub-branch known as Number Theory. We have already seen a very primitive version of such a function above, with our surgery on the sinc function: we split the input space into two subsets, and then computed the overall function as one of two subordinate functions depending on which subset contained the current input element.
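In the language of computer science, that surgery on sinc might be written as the following sketch:

```python
import math

def sinc(x: float) -> float:
    # Split the input space into two subsets and dispatch to a
    # subordinate function for each: the singular point x = 0 gets
    # its limit value by fiat, and everything else gets sin(x)/x.
    if x == 0.0:
        return 1.0
    return math.sin(x) / x

print(sinc(0.0))                     # 1.0, by declaration
print(abs(sinc(1e-6) - 1.0) < 1e-9)  # True: continuous at the repaired point
```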
A different kind of function
I previously mentioned that Utilitarianism is undone by conflating pain and pleasure, all kinds of pain and all kinds of pleasure, onto the same scale: the Greek mistake of eudaimonia. In this section I will explain how to correct that.
We will presently derive a new formalism — a new objective ethical framework for comparing two outcomes and selecting the ideal, with each person responsible for placing their own subjective moral values into the calculation — that remains consequentialist but that does not suffer from the failure modes caused by reducing outcomes to a single real number.
For the moment, I will reduce the number of moral values to two, one positive (“pleasure-like”) and one negative (“pain-like”). However, the framework generalizes to any number of them, and the user of the framework is free to place them into any order they see fit, including interleaving them so that we might have Pain1 > Pleasure2 > Pain3 and/or Pleasure1 > Pain2 > Pleasure3.
Let S_person(action) represent the degree of pleasure that a given person experiences while living on a worldline leading to a given outcome, quantified as a real number such that increasing numerical value represents increasing satisfaction. (Positive is good.)

Let T_person(action) represent the degree of pain that a given person experiences while living on a worldline leading to a given outcome, again quantified as a real number such that increasing numerical value represents increasing pain. (Positive is bad.)
(In both cases, we will soon define a sensible meaning for the zero value, which will eliminate that pesky assumption of translation isomorphism. This will help us later, when we want to identify the actions that maximize our “pleasure” functions and minimize our “pain” functions.)
Now define P_person(a0, a1) to be a function that maps from elements of the Cartesian set product Action × Action to elements of the finite set {−1, 0, +1}: it compares the person’s moral preference functions in priority order, returning +1 if a0 is preferred, −1 if a1 is preferred, and 0 if the two outcomes are equally preferable.
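A minimal sketch of such a P_person in Python, assuming (as in the spreadsheet example later in this post) that avoiding pain outranks gaining pleasure; the outcome names and the S/T values are invented:

```python
# Invented preference data for one person: pleasure S and pain T per outcome.
S = {"beach": 5.0, "office": 1.0, "dentist": 2.0}  # higher is better
T = {"beach": 0.0, "office": 0.0, "dentist": 8.0}  # higher is worse

def P(a0: str, a1: str) -> int:
    """Return +1 if a0 is preferred, -1 if a1 is preferred, 0 if tied."""
    # Highest-priority column first: less pain wins outright.
    if T[a0] != T[a1]:
        return 1 if T[a0] < T[a1] else -1
    # Tie on pain: fall through to the next column, pleasure.
    if S[a0] != S[a1]:
        return 1 if S[a0] > S[a1] else -1
    return 0

print(P("beach", "office"))    # 1: equal pain, more pleasure
print(P("office", "dentist"))  # 1: less pain, despite less pleasure
```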
This function can now be used to represent the given person’s relative preferences between two outcomes, using capabilities that extend far beyond what a traditional algebraic function can offer.
We can then construct an algorithm that, given a list of n possible outcomes, efficiently calculates the set of outcomes which are preferred above all others but equally preferable amongst themselves.
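One way to sketch that selection algorithm, reusing any pairwise comparator with the +1/0/−1 convention above (the toy comparator and outcomes here are invented):

```python
def best_outcomes(outcomes, P):
    """One pass over n outcomes: returns the set of outcomes preferred
    above all others but equally preferable amongst themselves. Assumes
    P is a total-preorder comparator returning +1, 0, or -1."""
    best = [outcomes[0]]
    for x in outcomes[1:]:
        c = P(x, best[0])
        if c > 0:         # x strictly preferred: start a new best set
            best = [x]
        elif c == 0:      # tied with the current best: join the set
            best.append(x)
    return best

# A toy comparator: pain ascending takes priority, then pleasure descending.
def P(a0, a1):
    key = lambda a: (a["pain"], -a["pleasure"])
    return (key(a1) > key(a0)) - (key(a1) < key(a0))

outcomes = [
    {"name": "office", "pain": 0, "pleasure": 1},
    {"name": "beach", "pain": 0, "pleasure": 5},
    {"name": "dentist", "pain": 8, "pleasure": 2},
]
print([o["name"] for o in best_outcomes(outcomes, P)])  # ['beach']
```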
As we mentioned above, it is entirely possible that P_person would better approximate the actual moral preferences of a real individual by using more than two algebraic functions, flipping the sign from “positive is bad” to “positive is good” each time we need to. The actual content of our selection algorithm doesn’t change at all when we do this.
Spreadsheet Sort
If you’re having trouble following along because you’re not well enough versed in computer science, there’s actually a very easy real-world explanation for this algorithm: we just reinvented multi-column spreadsheet sorting! All we’re doing is putting one outcome in each row of the spreadsheet, with columns “A” through “Z” representing our individual moral preferences for that outcome, and then sorting so that each column takes sorting priority over the columns to the right of it.
If your moral values are as simple as “I will sacrifice any amount of pleasure to avoid pain”, then “Pain” is in your personal column A, and “Pleasure” is in your personal column B, and all we do is sort by Pain ascending, then by Pleasure descending, and then pick the row at the top of the spreadsheet. Boom, that’s our action. It’s not actually that much more complicated than a utility function, but it’s impossible to write it down using algebra, so we had to get into some fancier math than you’re used to. That’s all.
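In Python, the whole spreadsheet sort is a single tuple sort key (the rows here are invented):

```python
# Rows: (outcome, pain, pleasure). "Pain" is column A, "Pleasure" is column B.
rows = [
    ("hike", 2, 9),
    ("nap", 0, 3),
    ("party", 2, 7),
    ("picnic", 0, 6),
]

# Sort by pain ascending, then by pleasure descending; the top row wins.
ranked = sorted(rows, key=lambda row: (row[1], -row[2]))
print(ranked[0][0])  # 'picnic': least pain first, most pleasure as tiebreak
```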
Why is this better?
First and foremost, from a mathematical-analysis perspective, this resolves several of the absurdities that previously cropped up when comparing or aggregating utility quantities. In particular, we no longer have to worry about that pesky uniform translation isomorphism, because now we only ever care about comparisons between two outcomes at a time; we’ve hidden the actual use of S(action|person) and T(action|person) deep inside P(action0, action1|person), which only ever compares the relative values, so we are no longer tempted to ask questions that don’t have reasonable answers.
Additionally, the uniform scaling isomorphism is gone: we don’t care about the order of S_person(x) vs S_person(y) itself anymore, like we did for U_person, but instead we care about the sign of the quantity (S_person(x) − S_person(y)). This lets us define the meaning of the numeric values a little more tightly than what utility functions allow, and as a consequence we can now use different uniform scaling constants for S_person vs T_person vs all our other moral preference functions. Because of this, we are no longer required to invent answers to absurd questions like “if the joy of drinking a milkshake is J, and the horror of a child being tortured to death is H, how many milkshakes need to be added to the world to cancel out the torture of the child, such that H = k*J for some positive constant k?”. Utilitarianism posits that there is a genuine, rational answer to that question, but we can now see that it’s nonsense: the scaling isomorphism now exists for individual preference functions, not for their aggregate. We no longer expect there to exist any such finite constant that can convert units of one function to units of another, because each one has its own independent scale.
As a bonus, we have also eliminated the absurdity that was previously inherent in some outcomes having “negative absolute utility”. An individual moral preference function can have a negative value, but that just means that the outcome under consideration is less preferred than those outcomes for which the preference function is larger. If we want to, we can pick a value of zero that makes sense for each function in isolation, and then negative values are simply outcomes that are worse than the zero outcome. When we were trying to force everything into the utility function framework, we found ourselves forced to pick a universal ZERO™ that makes sense for all of the components of our utility function, all at the same time.
And since we can now define each function’s zero point sensibly without disturbing the actual ordering provided by P(action0, action1|person), we can now use those zero values to represent outcomes in which the world has never contained any agents and never will. This does not describe our own world or any future timeline of it, but it does describe worlds that could have existed as branches off from past worlds. Doing so helps us fix our analysis problem — letting us pick actions that minimize or maximize our moral preferences, because, remember, there is actually an infinite space of possible actions, not just the finite lists that we’ve been dealing with so far — and we can then use these “dead worlds” as the limit of those actual future worlds where all people are dead, but died in a way that caused neither happiness nor suffering (nor any other moral objection that our preference functions might care about). After all, physics says that the universe must end in a dead world, one way or another.
I can hear some of you already, but no, cryonics/uploading/functional immortality doesn’t change this. The heat death of the Universe is still going to happen either way. There’s no such thing as living forever, so we can’t just say “boo death, all deaths bad!”, or else we’ll end up with every preference function going to negative infinity if you take a long enough view of the timeline, in which case you should take some advice from Discworld’s Death and leave early to beat the rush.
And since all of these fixes come automatically from using the correct mathematical formalism, the moral paradoxes that were produced by violating Utilitarianism’s required isomorphisms are completely gone. No more distinction between U_sum vs U_avg, no more Judgement of Solomon where we are forced to pick which half of the baby we get to keep. The answers just make sense.
(I’m glossing over the process by which the preference functions of multiple agents can interact. In particular, the philosopher’s perfectly objective, observer-neutral U(action) doesn’t exist in this formulation anymore, but agents can adopt one another’s preference functions as new “columns” in their own personal “spreadsheets”, which models both coalition-forming and compassion. Again, that’s content for the third post.)
Conclusions
Do humans actually follow this system?
I believe that most do. Furthermore, I believe that those who do not are objectively incorrect, in the sense that they are making a logical mistake. No matter how each person defines their personal, individual, and subjective moral preference functions, this is the mathematically correct way to combine those moral preferences in order to choose actions that satisfy them best.
(I also believe that the moral preference functions are mostly, but not entirely, objective as well — okay, not the functions themselves, but the mathematical form they take — because they derive from the frequently recurring Game Theoretic scenarios that evolution places on social animals, particularly on those animals that parent and nurture their children. At the very least, I expect all evolved animals to obey them automatically, even aliens. But that argument is detailed in the previous post of this series.)
Beyond using this ethical framework to try to understand how real-world humans do their ethical calculations, I believe this framework also explains why Paperclip Maximizers show up so often in the study of economics and in the study of artificial intelligence: most artificial agents, intelligent or not, are built around maximizing some sort of utility function. I argue this: All Utilitarians Are Paperclip Maximizers, and All Paperclip Maximizers Are Utilitarians. There is no such thing as a utilitarian agent that does not maximize paperclips.
It’s impossible for someone who follows this ethical framework to successfully negotiate with a utilitarian AI, but only because it’s impossible for anyone to successfully negotiate with a Utilitarian AI — even if you yourself are another Utilitarian AI. There will always be scenarios where a Utilitarian finds the prospect of “cheating” to be too good to pass up. By contrast, an agent following this set of rules is capable of restraining itself voluntarily, with no extra Decision Theory stapled on after the fact. In my third post, I intend to show how parties that subscribe to this ethical framework can sometimes build a shared moral framework amongst themselves, even if their own personal moral preferences do not exactly align (so long as they aren’t directly opposed).
Rationality
Back at the start of the article I mentioned that, in economics, it’s commonly asserted that “rationality” is the same thing as “VNM-rationality”, i.e. that anything you call “rationality” must satisfy the VNM axioms, and that the provably best solution to the situations described by the VNM axioms is Utilitarianism.
I reject axiom 3 (Continuity), which claims that, if some trio of random lotteries L, M, and N are preferred in order L < M < N by a rational actor, then there is necessarily some probability p so that pL + (1 − p)N ≈ M.
In English: if you make a meta-lottery “maybe L or maybe N”, then there is some way to weight the odds of the meta-lottery so that L is rare enough and N is enticing enough that you’re forced to conclude that the meta-lottery is just as good as M. And if you don’t, it’s because you’re being irrational.
This is patently absurd. We can see this if L includes the possibility of prison, but M and N do not.

- L: a degenerate lottery with a 100% chance of being sentenced to life in prison without parole
- M: a degenerate lottery with a 100% chance of winning $1,000,000
- N: a degenerate lottery with a 100% chance of winning $1,000,001

(A degenerate lottery is simply a lottery with only one possible outcome.)
The Continuity axiom claims that, if you are presented with a choice between the lottery M and the meta-lottery pL + (1 − p)N, there is some probability p small enough that you will prefer the meta-lottery (either $1,000,001 or prison) over M ($1,000,000 with total certainty). That you will risk prison for Just. One. More. Dollar.
And what’s more, we can shrink the payout of N from $1,000,001 to $1,000,000.01, or even further to any value even infinitesimally larger than M’s payout, even $1,000,000 plus a single teaspoon of common beach sand with dog poop in it, and there will still be some p that will convince you to risk prison for that extra teaspoon of dog poop sand (which you could surely sell to someone, you conclude rationally).
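A sketch of preferences that are perfectly coherent yet violate Continuity: compare lotteries lexicographically, prison risk first and money second. (Summarizing a lottery as a pair of prison probability and expected dollars is my own simplification.)

```python
# Lexicographic preferences over the lotteries above: prison risk is
# compared first; money only breaks ties.
def prefers(a, b):
    """True if lottery a is strictly preferred to lottery b."""
    risk_a, money_a = a
    risk_b, money_b = b
    return (risk_a, -money_a) < (risk_b, -money_b)

M = (0.0, 1_000_000)
for p in (0.5, 1e-6, 1e-300):
    meta = (p, (1 - p) * 1_000_001)  # pL + (1-p)N
    print(prefers(M, meta))  # True for every positive p, however tiny

# Continuity demands some p at which the meta-lottery is exactly as good
# as M; no such p exists here, so these preferences are not "VNM-rational".
```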
I reject the very idea that economic “rationality”, VNM-rationality, is actually in any way rational, or that it is a good model for actual human behavior (even in the Homo economicus limit). Importantly, my proposed ethical framework does not obey VNM axiom 3, and therefore the VNM theorem’s conclusion does not apply to it.
Ultimately, this discrepancy around the Continuity axiom arises because Utilitarianism does not admit the existence of irreversible consequences. In a world where prison sentences could be bought and sold in a free market… a world where a prison sentence was just an extreme negative score, a penalty that you could rid yourself of by convincing someone else to buy it from you after you got stuck with it… in that world, it would be rational to take the gamble. But we don’t live in that world. Some decisions have irreversible consequences, and the assumptions underlying Utilitarianism, the axioms which define it into existence, deny that such consequences could ever exist.
It also doesn’t help that continuity is physically impossible in a Universe governed by quantum mechanics. Of course you cannot divide “utility” into infinitely tiny parts. You can’t divide anything into infinitely tiny parts. Continuity is only ever an approximation of reality, never a correct representation of it.
A point of comparison is Arrow’s Theorem, which proves — yes, proves, honest to modus ponens — that any “voting system” must violate one of three common-sense fairness criteria. And yet its axiomatic definition of what counts as a “voting system” is so restrictive that Range Voting does not count as one. Range Voting is the system where each voter gives each choice a score on some fixed scale, such as 0 to 10, and the choice with the highest average score wins. It’s a system, and you use it to vote, but it’s not a “voting system” because it violates the axioms of Arrow’s Theorem. Thus it turns out that Range Voting does satisfy all three fairness criteria, in apparent but not actual contradiction of the theorem, because Arrow’s Theorem applies to “voting systems” and not to voting systems.
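For concreteness, Range Voting is almost trivially simple to implement (the ballots below are invented):

```python
# Each ballot scores every choice on a fixed 0-10 scale.
ballots = [
    {"apple": 10, "banana": 6, "cherry": 0},
    {"apple": 0, "banana": 7, "cherry": 10},
    {"apple": 2, "banana": 9, "cherry": 3},
]

def range_voting_winner(ballots):
    choices = ballots[0].keys()
    # Scores carry preference *strength*, which is exactly the
    # information that Arrow's axiomatic "vote" (a bare ranking)
    # throws away. Highest average score wins.
    return max(choices, key=lambda c: sum(b[c] for b in ballots) / len(ballots))

print(range_voting_winner(ballots))  # 'banana'
```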
In particular, the axioms underlying Arrow’s Theorem define a “vote” as a relative ordering of the options without any additional information about the strength of the voter’s preferences. Arrow thought his theorem said that fair votes were impossible with three or more choices, but what it really said is that fair votes are impossible if you throw away the strength of the preferences and only keep the relative ordering. Likewise, the VNM theorem states that Utilitarianism maximizes the rationality of your behavior if you are a being whose preferences are continuous… but humans are not, because humans can experience irreversible consequences, and therefore Utilitarianism is not maximally rational for humans.
Unfortunately, Eliezer Yudkowsky’s famous “Sequences” on his website Less Wrong have perpetuated the mistaken belief that a theorem can remain valid in situations where its axioms are violated, i.e. that truth can flow backward in a theorem from the conclusions to the axioms. That’s not how logic works; it’s one of the most common logical fallacies, in fact. The weirdest part is that many of the same people who make this mistake, who use the VNM theorem to argue that humans must adopt Utilitarianism to be rational, correctly see the same argument as bunk in the case of Arrow’s Theorem, even though the two situations are very strongly analogous. You don’t find people¹ arguing “oh, but all voting systems lead to dictatorships, so if Range Voting doesn’t create dictatorships then it’s a moral failing of Range Voting not being naturally Dictatorshippy like a legitimate and morally valid voting system must be”. And yet.

¹ who aren’t Curtis Yarvin or Peter Thiel, noted advocates of dictatorship as a system of government