•Common Types of Consequentialist Goals
1. Pluralistic
2. Classical / Hedonistic
3. Desires / Preferences
4. Objective List
•Common Objections to Utilitarianism
-The St. Petersburg Paradox (Bernoulli) / Cowen's Double or Nothing
-Infinite Ethics
-Counterintuitive Counterexamples
-The Epistemic Problem
-Arrow's Impossibility Theorem
-VNM utility functions are not interpersonally comparable
-The Aggregation Problem
-Rule Utilitarianism is Basically Deontology
-Nozick's Experience Machine and Utility Monsters
-Parfit's Repugnant Conclusion and Heroic Death
-Irrational preferences
-Preferences of the unconscious
-P-zombies
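The Arrow's-theorem and aggregation objections above can be made concrete with a toy Condorcet cycle: three voters, each with perfectly transitive individual preferences, whose pairwise-majority aggregate is cyclic. A minimal sketch (the voter profiles are hypothetical):

```python
# Three voters, each with a transitive ranking over outcomes A, B, C.
voters = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    votes = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return votes > len(voters) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, yet C beats A.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```

Majority rule here satisfies non-dictatorship and Pareto efficiency but fails to produce a transitive social ranking, which is the trade-off Arrow's theorem generalizes.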
•Suggestions for Utilitarians
-Consider preferences for rational agents and well-being for irrational agents
-Think in terms of sets of actions
-Stick to deontic moral heuristics
-Because of intractability, avoid utilitarian calculation and use more decentralized mechanisms
-Remember Harsanyi’s two theorems
-Think on multiple levels of ethics; for example, hold corporations to different ethical principles than you would individuals
•Kantianism is not Pareto-efficient
In the Original Position, some possible ex-ante Pareto improvements violate Kantian rules.
Furthermore, there exist possible Kaldor-Hicks improvements that can be converted into ex-post Pareto improvements yet still violate Kantian rules.
If Pareto efficiency must be part of our moral theory, then Kantianism is out of the picture.
By modus tollens, if Kantianism is to be accepted, then Pareto efficiency cannot be a requirement for ethical theories.
This argument can be applied to other forms of deontology, but it's best applied to Kantian deontology because of its infamous inflexibility.
•When Optimal Theories Change
Under some ideal social contract, rational egoism may be desirable; under a Hobbesian state of Warre, not so much.
For example, egoism may be preferable in a state where externalities are internalized; otherwise, altruism might be more desirable.
Thus our preferences among ethical theories vary with the prevailing social contract and incentives.
•"Everything" Can Become Consequentialism
Almost all normative theories of ethics can be thought of as some social welfare function that ranks outcomes. It just so happens that these functions may violate non-dictatorship, Pareto efficiency, and other desirable characteristics.
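The recasting move can be sketched directly. Below, a hedged toy example (the outcomes and the deontic rule are hypothetical) encodes a simple deontological constraint as a welfare function that lexically penalizes rule-violating outcomes, i.e. the rule acts as a "dictator" over the ranking:

```python
# Each outcome: (total utility, whether it violates the deontic rule).
outcomes = {
    "lie_for_gain":  (10, True),   # highest utility, but violates the rule
    "honest_modest": (6, False),   # lower utility, permissible
    "honest_poor":   (2, False),
}

def deontic_swf(outcome):
    """Sort key ranking outcomes: rule compliance first (lexically),
    total utility second."""
    utility, violates = outcomes[outcome]
    return (-1 if violates else 0, utility)

ranked = sorted(outcomes, key=deontic_swf, reverse=True)
print(ranked)  # ['honest_modest', 'honest_poor', 'lie_for_gain']
```

The function ranks every outcome, so it is formally a social welfare function, but it sacrifices Pareto efficiency: the highest-utility outcome is ranked last.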
•Eren's Repugnant Conclusion
(A) One extremely happy mortal agent
(B) One somewhat happy immortal agent
(C) Many very happy mortal agents
If A is preferred to B, and B to C, then A must be preferred to C to satisfy the transitivity axiom of VNM rationality.
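The transitivity step can be checked mechanically. A minimal sketch (using the labels A, B, C above) that computes the transitive closure of a strict preference relation and flags the cycle an intransitive agent would fall into:

```python
def transitive_closure(prefs):
    """Given strict preferences as (better, worse) pairs, add every pair
    implied by transitivity until a fixed point is reached."""
    closure = set(prefs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# A > B and B > C force A > C under transitivity.
closure = transitive_closure({("A", "B"), ("B", "C")})
print(("A", "C") in closure)  # True

# Adding C > A on top yields a cycle: some x ends up preferred to itself.
cyclic = transitive_closure({("A", "B"), ("B", "C"), ("C", "A")})
print(any((x, x) in cyclic for x in "ABC"))  # True
```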
•One Argument for Agent-Neutrality
1. We should be temporally neutral
2. Because of Parfit's work on personal identity, temporal neutrality and consistency together imply agent-neutrality
3. We should be consistent
4. Thus, we should be agent-neutral
•The League of Rational Irrationals
Suppose there exists a League of Rational Irrationals that rewards its members such that irrational agents end up with better outcomes than rational ones. If rationality implies picking better outcomes, it would then be irrational to be rational.
•Selfless Selfishness
An outcome where everyone was selfish may be better than an outcome where everyone was selfless.
•The Better Drowning Child Analogy
You're next to a body of water with plenty of adults nearby. There are many drowning children in the water, but far fewer than the number of people who could immediately save them.
Collectively, if a fraction of the bystanders went into the water, no child would drown; but one individual can save only a few children at best, and if everyone jumped in, that would waste time and resources better spent elsewhere.
Some people save children, but most do nothing. What should be done?
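The coordination arithmetic behind the analogy can be made explicit. A hedged sketch with hypothetical numbers (100 bystanders, 12 drowning children, each rescuer able to save at most 3):

```python
import math

bystanders = 100       # adults who could immediately help (hypothetical)
drowning = 12          # children in the water (hypothetical)
saves_per_rescuer = 3  # the most one person can save (hypothetical)

# Minimum number of rescuers so that no child drowns.
needed = math.ceil(drowning / saves_per_rescuer)
fraction = needed / bystanders

print(needed)    # 4 rescuers suffice
print(fraction)  # 0.04 -- a small fraction; everyone jumping in wastes effort
```

The puzzle is that morality seems to demand only that *some* small fraction act, without singling out which individuals.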
•How Theories Can Be Indirectly Self-defeating
(a) The agent who believes the theory has mistaken non-normative beliefs
(b) The agent's belief in the theory reduces their utility with respect to the theory
(c) The agent having the theory's given motives makes the agent worse at satisfying the theory's aims
•How Theories Can Be Collectively Self-defeating
(a) The theory prefers an outcome where agents don't collectively subscribe to the theory over an outcome where they do (think Prisoner's Dilemma, Tragedy of the Commons, etc.)
(b) Agents' attempts to achieve the theory's aims leave those aims worse achieved
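The Prisoner's Dilemma reference in (a) can be spelled out: with the textbook payoffs, defection is each agent's dominant strategy, yet mutual defection leaves both worse off than mutual cooperation, so agents acting on purely self-interested aims collectively defeat those aims. A minimal sketch:

```python
# payoff[(my_move, their_move)] = (my_payoff, their_payoff)
payoff = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def best_response(their_move):
    """The move maximizing my payoff, holding the other's move fixed."""
    return max("CD", key=lambda m: payoff[(m, their_move)][0])

# Defection strictly dominates: it is the best response to either move...
print(best_response("C"), best_response("D"))  # D D
# ...yet mutual defection (1, 1) is worse for both than cooperation (3, 3).
print(payoff[("D", "D")], payoff[("C", "C")])
```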
•Parfit’s 5 Mistakes in Ethics
1. The “Share-of-the-Total” View.
2. Ignoring the effects of sets of acts.
3. Ignoring small chances.
4. Ignoring very small effects on very large numbers of people.
5. Ignoring imperceptible effects on very large numbers of people.