Mark Schroeder on Comparing the Weight of Reasons

Imagine that you have reasons to study (A) and reasons to watch a movie (B), but cannot do both. (A) and (B) are each sets of reasons in favor of studying or watching a movie (A might contain the facts that you will improve your grade, improve your reputation among professors, and feel good about yourself). You want to know whether the set of all reasons to study (A) is greater than the set of all reasons to watch a movie (B).

Schroeder believes that (A) is greater than (B) iff, in proper deliberation, it is correct to place more weight in set (A) than in set (B). The correct amount of weight to place in (A) and (B) is determined by further sets of reasons, (A)* and (B)*, which are the sets of the right reasons to place weight in the reason sets (A) and (B) respectively. (This description is slightly different from Schroeder’s, as I’ll explain later on.)

I imagine this language is confusing, so consider how it applies to the study/movie case. (A) contains reasons to study, such as the fact that my grade will improve. I can then ask what reasons I have to put weight (or importance) into the fact that my grade will improve. Perhaps the reason to place weight into the fact that my grade will improve is the fact that improved grades will better advance my career. So the fact that improved grades will better advance my career is a reason in set (A)* to place weight in one of the reasons of set (A).

Once this deliberation is completed, we have two sets (A)* and (B)* which give us competing reasons to put weight into (A) and (B). Whether we should do (A) or (B) thus depends on whether (A)* or (B)* has greater weight; to figure that out, we need further sets of reasons (A)** and (B)** which determine how much weight to put into the reasons of (A)* and (B)*. As it stands, the account creates a deeply problematic regress where we need set (A)*** to place weight into (A)**, set (A)**** to place weight into (A)***, and so forth.

To solve the regress, Schroeder proposes a base case where one set of reasons outweighs another without need for any further sets of reasons. If successful, the base case stops the regress, and its result can be passed back through each prior level until we eventually figure out whether (A) or (B) is greater. (If (A)** is the base case, then (A)** being greater than (B)** would tell us that (A)* is greater than (B)*, which tells us that (A) is greater than (B).)

Schroeder’s proposed base case is simple: a set (X) outweighs a set (Y) if (X) contains reasons and (Y) does not. So a set of reasons always beats an empty set. Note that this minimal claim is substantive (it does not follow trivially from any definitions), but it seems rather intuitive.

Schroeder believes that the regress for any decision will eventually reach this base case, so there is no infinite regress. His view depends on two principles:

1. (X) is greater than (Y) if (X) contains reasons and (Y) does not. (Base Case)

2. (X) is greater than (Y) if the set of reasons to prioritize (X) over (Y) is greater than the set of reasons to prioritize (Y) over (X). (Regress-starter)
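Read purely structurally, the two principles define a recursion: the Regress-starter is the recursive step and the Base Case is the terminating condition. Here is a minimal sketch of that structure, assuming reason sets can be modeled as lists and a hypothetical `next_level` function supplies each pair of higher-order comparative sets (none of these names come from Schroeder):

```python
def outweighs(x, y, next_level, depth=0, limit=100):
    """Return True if reason set x outweighs reason set y.

    Base Case: a non-empty set beats an empty set outright.
    Regress-starter: otherwise, compare the higher-order sets of
    reasons to prioritize x over y (and vice versa)."""
    if x and not y:
        return True
    if y and not x:
        return False
    if depth >= limit:
        raise RecursionError("the regress never reached the Base Case")
    x_star, y_star = next_level(x, y)
    return outweighs(x_star, y_star, next_level, depth + 1, limit)


# Hypothetical reconstruction of the study/movie case:
# level 1: (A)* = career advancement vs. (B)* = short-term enjoyment
# level 2: (A)** non-empty, (B)** empty -> Base Case fires.
chain = iter([
    (["advances my career"], ["more enjoyable short term"]),  # (A)*, (B)*
    (["reasons to prioritize my career"], []),                # (A)**, (B)**
])

def next_level(x, y):
    return next(chain)

print(outweighs(["improves my grade"], ["enjoy the movie"], next_level))
# prints True: (B)** is empty, so the Base Case resolves the whole chain
```

The sketch also makes the regress worry vivid: if `next_level` never returns an empty set on one side, the recursion bottoms out only at the artificial `limit`, which is exactly the objection developed below about values that cannot be completely outweighed.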

The clever part of Schroeder’s account that I want to focus on is that the further set of reasons in principle 2 is defined in terms of a comparison between (X) and (Y), rather than on the merits of (X) or (Y) independently. I first want to show why this move is crucial for Schroeder’s account to work, and then I will discuss my objections.

Note that my first description of Schroeder’s account is slightly inaccurate (I intentionally put it that way both for simplicity and to better draw out my following point). If (A)* provides the right reasons to place weight in (A) independently of (A)’s relationship to (B), then the regress will never reach the base case unless we discover that we really have no reasons to do (B). (A) has weight because (A)* has weight, (A)* has weight because (A)** has weight, etc; similarly, (B) has weight because (B)* has weight, (B)* has weight because (B)** has weight, etc. (A) and (B) each have weight (they are both sets of genuine reasons) only if each subsequent set also has weight; if some set down the line has no weight, each set in the chain leading up to it will not have weight either. So long as (A) and (B) are genuine sets of reasons, we know that every set in the chain will also have weight.

This is problematic because we cannot gain information about the relative weight of any two sets simply from knowing that all reason sets are greater than 0 (knowing the Base Case). Imagine that you have twenty-six variables A-Z, all positive integers. Since you know that they are all positive integers, you know that they are all greater than 0 – however, this tells you nothing about how individual variables like B and D compare to each other. Likewise, we cannot derive the relative weights of any sets (A), (B), (A)***, etc. from the Base Case.
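The point can be made concrete with a toy sketch (the particular values are hypothetical): two assignments can both satisfy "every variable is a positive integer" while disagreeing about how B and D compare, so the constraint alone settles no comparison.

```python
# Two hypothetical assignments, both consistent with "all variables
# are positive integers" (the analogue of knowing only the Base Case).
assignment1 = {"B": 1, "D": 7}
assignment2 = {"B": 7, "D": 1}

for a in (assignment1, assignment2):
    assert all(v > 0 for v in a.values())  # both satisfy the constraint

# ...yet they disagree about the comparison, so the constraint alone
# cannot tell us whether B outweighs D.
print(assignment1["B"] > assignment1["D"])  # False
print(assignment2["B"] > assignment2["D"])  # True
```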

Schroeder’s account avoids this problem by making further sets only include reasons to prioritize (A) over (B) or prioritize (B) over (A), rather than reasons to put weight in either independently. This is crucial because (B) can have weight even if there are no reasons to prioritize (B) over (A); by contrast, on the independent picture, it is not possible for (B) to have weight if some subsequent set (B)* places no independent weight in (B).

Applied to the original case, we can ask: what reasons do I have to prioritize good grades+professor relationships over movie enjoyment+leisure, or vice versa? Say I have some reasons to prioritize good grades because doing so will better advance my career, and some reasons to prioritize enjoyment because I will enjoy the movie more in the short term. In this case, we have further sets (A)* and (B)* which provide reasons to prioritize (A) over (B) or prioritize (B) over (A).

We can then push the chain back further – what reasons do I have to prioritize my long-term career over short-term enjoyment, or to prioritize short-term enjoyment over my long-term career? Say I have good reasons to prioritize my career, but no reasons to prioritize my short-term enjoyment. We now have the base case with (A)** and (B)** – since I have no reasons to prioritize my short-term enjoyment over my career, (B)** is empty. Since (B)** is empty and (A)** is not, (A)** is greater; thus (A)* is greater than (B)*, and (A) is greater than (B).

Hopefully it’s clear how comparative sets of reasons avoid the problems of independent reasons. If (A) and (B) are continually judged independently, each subsequent set of reasons to place weight in those sets will be non-empty, and the base case will never be reached. If subsequent sets of reasons are determined comparatively, we can consistently say that there are no reasons to prioritize the reasons in (B) over (A) (or further down the line, no reasons to prioritize (B)* over (A)*) even though both (A) and (B) have weight.

A simple case illustrates this point: imagine that the reason to do (A) is that (A) will produce 10 units of happiness, and the reason to do (B) is that (B) will produce 5 units. While I have reasons both to produce 10 units of happiness and to produce 5, I only have reasons to prioritize 10 units over 5 (I have no reason to prioritize 5 units over 10). In this case, (B)* is empty even though (B) is not, so we can use the Base Case to generate an answer to the choice between (A) and (B).

My objection to Schroeder is that the regress will end up in the Base Case only if we can compare all reasons on a single scale. Consider a choice between saving my wife (C) and saving three innocent people (D). I have reasons to prioritize my wife (her special relationship to me) and reasons to prioritize the three innocents (promoting the general good). So both (C)* and (D)* are non-empty. I have reasons to prioritize special relationships over the general good, and reasons to prioritize the general good over special relationships, making (C)** and (D)** non-empty.

This case is unsolvable on Schroeder’s account because I am relying on values that cannot be completely outweighed by other values. There is no point during deliberation at which I lack reasons to prioritize my relationships or the general good, because neither can be fully compensated for by some component of the other. The Base Case requires that one choice eventually have no reasons in its favor, which is possible only if we can compare different values on a single scale (like the utilitarian principle) that guarantees one side eventually has no reason in favor of it.


Realist and Relativist Theories of Value on the Significance of Conscious Beings

Premise A: In a universe without conscious beings, nothing would be valuable.

Both value realists (people who believe values are an objective part of the world) and relativists (people who believe values are relative to one’s perspective) can accept Premise A. There’s good reason to accept Premise A, because it seems intuitively obvious that nothing would really matter in a universe where nothing is conscious, and thus, nothing can feel pleasure or pain.

Premise B: In a universe with conscious beings, some things are valuable.

While the realist and relativist will have very different explanations for Premise B, both can accept the version of it that fits with their definition of value. The realist thinks that there are objectively valuable things in a universe with conscious beings, and the relativist thinks that conscious beings will value some things in this universe.

The point I want to address concerns the possible justifications for accepting both A and B. Why does the introduction of conscious beings change a universe from one in which there is no value, into one in which there is value?

The relativist has an easy, straightforward explanation: the introduction of conscious beings creates value because the conscious beings are doing the valuing. Without conscious beings, there is no one who could value anything.

The realist can offer a slightly more difficult explanation: the introduction of conscious beings creates value because conscious beings are the only things of intrinsic value. In a world without conscious beings, there are no intrinsic values, so there are no values; in a world with conscious beings, the beings themselves are intrinsically valuable and other things are instrumentally valuable to the extent that they affect conscious beings. This story can be run in multiple ways. Perhaps the realist is a utilitarian who claims that only pleasure felt by conscious beings is valuable, so everything in the universe is valuable only to the extent that it affects the pleasure and pain of conscious beings.

I argue that the relativist has a comparatively better explanation for why values are introduced to a universe when conscious beings enter it. To make this argument, I want to imagine a case where we transition from a universe with conscious beings to one without.

So, imagine that you are the last conscious being in the universe, and there is no possibility that any conscious beings will ever exist in the future. You are about to die, leaving the universe a consciousness-less mess. But, you come across a machine that grants you insane power to change the entire structure of the universe – whatever you code into the machine, it will remake the universe in exactly that manner. You can create planets, galaxies, or even change the laws of nature. There are only two limitations – you cannot create anything conscious, and the machine will not take effect until after you die.

What would you do with this machine? Personally, I would go crazy – I would create jello worlds, change some galaxies to have completely different physics, have pilot-less spaceships going every which way, and all sorts of other absurdities. I hope you would also choose to do something, because it would be tragic to waste such an opportunity.

Back to the issue: if you’re a realist who accepts both A and B, the effects of the machine mean nothing to you. Since you’ll die before any changes and there are no conscious beings left that could be affected, there is no value to anything the machine can do. It does not matter whether the universe stays the same or becomes an absurd one, because a value difference between the two would entail that one affects conscious beings in a way that the other does not, which is impossible in a universe with no conscious beings.

The relativist has no such problem. There is a value difference between a jello universe and a regular universe because you can value one more than the other. There is no need to find a feature of each universe that has value once you die, since that uses the realist conception of value. To be clear, the relativist cannot claim that there is a value difference between the two universes once you die – rather, the relativist can only claim that you value these two universes differently before you die even though you know it will not affect any conscious beings.

The realist cannot make the same move. If a realist values these two universes differently, there must be a valuable feature to one that the other does not have – so long as the realist accepts Premise A, this move is impossible. Perhaps the realist claims that the value of coding the machine lies in the pleasure you get from thinking about how crazy the universe will be once you die (so the value of the difference is instrumentally tied to the intrinsic value of your pleasure). While this is a partial explanation, it seems to leave something important out – namely, that I really would care whether the machine actually makes the universe the way I coded it. I would not want merely to experience the pleasure of thinking about it; I would want the universe to actually change in the way I coded it.

To summarize, the relativist has a comparatively better explanation than the realist for why the machine has value, given that both accept Premises A and B. The relativist only needs to show that you value some difference independently of its effect on conscious beings. Conversely, the realist needs to show that there is some value in one universe not present in the other, which cannot be the case if the realist accepts A and B. Realists in general can avoid this conclusion either by 1. biting the bullet and denying the value of the machine (a position that might be common, as most other grad students I talked to about this believe the results would not matter), or 2. rejecting Premise A and arguing that there are some intrinsic values that do not depend on conscious beings. I argue that the relativist has an advantage because, intuitively, both A and B are correct, but the results of the machine still seem to matter.


A Technical Approach to Moral Error Theory

1. All moral claims are false (common interpretation of moral error theory)

2. “X is wrong” is a moral claim

3. It is not the case that “X is wrong” (1, 2)

4. X is morally permissible (3, Deontic Entailment)

5. “X is morally permissible” is a moral claim

6. It is not the case that “X is morally permissible” (1, 5)

4 and 6 directly contradict each other, so if every move made is legitimate, the common interpretation of moral error theory is false (in logic, you can disprove any premise by deriving a contradiction from it). I want to go over different strategies error theorists can use to reject this argument, ultimately settling on the two solutions I think best solve it (there are two solutions because, as I will argue, there are two problems).

First, we could change the definition of moral error theory to “all moral claims that entail the existence of moral properties are false”. This move is particularly appealing if premise 4 is left out –

(Formulation of 1st problem)

1. All moral claims are false

2. “X is wrong” is a moral claim

3. It is not the case that X is morally wrong (1, 2)

4. “It is not the case that X is morally wrong” is a moral claim

5. “It is not the case that X is morally wrong” is false (1, 4)

This form of argument is problematic for any type of error theory, moral or not, that rejects all claims within a discourse. The solution to change the first premise works well here – if premise 1 is changed into “all moral claims that entail the existence of moral properties are false”, premise 5 does not follow from the other premises.

But, there is still a problem if we accept the Deontic Entailment principle (term borrowed from a professor of mine):

Deontic Entailment: If “X is not morally wrong”, then “X is morally permissible”.

This principle is motivated by solid intuitions. “Morally wrong” is generally interchangeable with “morally impermissible”. If X is not impermissible, it seems to follow that X is permissible. Given this principle, however, error theory runs into a major problem even with the more nuanced definition:

(Formulation of 2nd problem)

1. All moral claims that entail the existence of moral properties are false (nuanced definition of moral error theory)

2. “X is wrong” is a moral claim that entails the existence of the moral property of wrongness

3. X is not morally wrong (1, 2)

4. X is morally permissible (3, Deontic Entailment)

5. “X is morally permissible” is a moral claim that entails the existence of the moral property of permissibility

6. It is not the case that X is morally permissible (1, 5)

You might notice how premise 3 is phrased “X is not morally wrong” instead of the original “it is not the case that X is morally wrong”. Some error theorists might object here, and differentiate between two types of negation: rejecting a predicate assigned to a subject, or rejecting the sentence entirely as mistaken. Consider the sentence “the present king of France is bald”. I could reject the predicate assignment, and claim that “the present king of France is not bald because the present king of France has a lot of hair”. Or, I could reject the sentence entirely by claiming that, since there is no present king of France, there is no subject that could be bald or not bald.

This strategy, applied to the 2nd problem, would argue that premise 3 does not follow because the type of rejection entailed by premise 1 is the entire sentence rejection. So, I can only infer that “it is not the case that X is morally wrong”, not that “X is not morally wrong”. But, this strategy seems mistaken to me – rejecting the entire sentence implies that the subject itself is in question. However, “X” clearly can exist since we can plug real actions into the variable. And it seems that moral error theorists specifically want to deny the predicate assignment “is morally wrong” to any such X. So the strategy would not work if we started the argument with the claim “X does not have the property of moral wrongness”, and it seems that every moral error theorist would have to accept that claim.

Instead, I think the correct strategy is to reject the Deontic Entailment principle. If X is not morally wrong, it does not follow that X is permissible. Since morally wrong and impermissible are interchangeable, this seems to be a straight contradiction – how could anything be not impermissible and not permissible? However, once the claims of moral error theory are better understood within the context of criticism of moral realism, I think the problem clearly goes away.

Moral realists can divide actions into categories based on their permissibility. Some actions are wrong, some are permissible but not required, while others are obligatory. If the latter two groups are joined, we get a neat divide where all actions are either permissible or impermissible. If all actions fall into one of the two groups, then a negation of one entails the other. Basically, if I know that all actions are either impermissible or permissible, and I know that this action is not impermissible, I can validly infer that the action is permissible.

Recall the nuanced definition of moral error theory – all claims that entail moral properties are false. Both “moral permissibility” and “moral impermissibility” are moral properties, and error theorists should reject any claims that entail either property. In order to consistently deny these claims, error theorists must reject the neat divide of all actions into either the “permissible” or “impermissible” groups. Basically, moral error theorists should claim that no actions fall into the categories of “permissible” or “impermissible”. If actions do not fall into either category, a negation of one does not entail the other.
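The structural point in the two paragraphs above can be made explicit with a small sketch (a hypothetical encoding of the error theorist's position, not anything from the literature): if moral status is modeled as a three-way partition rather than a two-way one, "not impermissible" no longer entails "permissible".

```python
from enum import Enum

class Status(Enum):
    PERMISSIBLE = "permissible"
    IMPERMISSIBLE = "impermissible"
    NO_STATUS = "no moral status"  # the error theorist's third category

def is_permissible(s):
    return s is Status.PERMISSIBLE

def is_impermissible(s):
    return s is Status.IMPERMISSIBLE

# On the error theorist's view, every action falls outside both
# moral categories.
action = Status.NO_STATUS

print(not is_impermissible(action))  # True: the action is not impermissible...
print(is_permissible(action))        # False: ...yet not permissible either
```

Under the realist's exhaustive two-way partition, `not is_impermissible(action)` would force `is_permissible(action)`; once the third category is available, the inference licensed by Deontic Entailment simply fails, with no contradiction.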

So, the solution to the original argument is two-fold: error theorists only need to reject moral claims that entail moral properties, and error theorists should reject the Deontic Entailment principle. Neither solution is sufficient to tackle the original argument on its own – the 1st formulation does not require the Deontic Entailment principle, and the 2nd formulation has the nuanced definition of moral error theory.

(This post follows an interesting discussion I had with Adrian Scholl and a professor of ours. The arguments described owe a great deal to their input, and the distinction between the two formulations of the problem comes from Adrian.)


The Argument from Authority (Fixed)

(I previously made a post titled “The Argument from Authority”, where I tried to explain my argument against Parfit’s version of moral realism. The argument was inspired by a long conversation I had with some other graduate students. I tried to capture each point we talked about, but this made the post confusing and I don’t think anyone besides me understood what I was talking about. So, I’m deleting the old post and writing a new, hopefully more clear, version of my argument from authority.)

Are there inescapable reasons for us to act morally? In On What Matters, Parfit argues that moral reasons exist and necessarily apply to all people – no matter who you are or what you care about, you have reasons not to murder other people. 

I want to separate Parfit’s position into two claims: 1. If “M” exists, then everyone has reasons to act morally, 2. “M” exists. 

Standard arguments against realism challenge claim #2:

-Mackie’s Argument from Disagreement: “M” is unlikely to exist, because widespread disagreement on moral issues is better explained by the hypothesis that morality is a human construct affected by social structures than the hypothesis that people are incorrectly perceiving moral truths. 

-Mackie’s Argument from Queerness: “M” is unlikely to exist, because “M” would have to have properties quite unlike anything else in the observable world (it must have the property of creating/imposing moral reasons).

-Street’s Darwinian Dilemma: “M” is unlikely to exist, because our evaluative views are better explained by an evolutionary story claiming that certain evaluative views provide advantages for reproduction or survival, than by a story that explains the survival of evaluative views with objective values. (Street argues against value realism, which claims that values are a real part of the world. I don’t think she would want her argument to be used against moral realism, since she believes many features of morality can survive the rejection of value realism. I’m including her argument because the values Street rejects are precisely the type of values moral error theorists reject, and many error theorists believe common moral realism is committed to value realism)

As I see it, these three arguments against realism are all concerned with the metaphysics of morals. Each argument focuses on whether it is rational to believe moral values exist, similar to how a metaphysical argument about God would focus on whether it is rational to believe God exists. 

I don’t want to make a metaphysical argument against Parfit. It’s too difficult a task to prove that something does not exist, and our empirical inadequacies make it easy to settle on “well, M either exists or it doesn’t, we just don’t know if it does”.

Instead, I want to focus on claim #1: If “M” exists, then everyone has reasons to act morally. This claim is a hypothetical, so it concerns the relation between M’s existence and inescapable moral reasons. Basically, would the existence of “M” sufficiently create/constitute inescapable moral reasons? Note that “M” is a meaningless variable – I do not want to commit to a specific definition of “M”, since realist views differ from one another and I want my argument to apply to all externalist moral realists. 

My Argument from Authority challenges the hypothetical claim for any and all possible definitions of M. I claim that, no matter what realists substitute for “M”, the consequent of the claim does not follow (there will still not be inescapable moral reasons). I defend my position with an appeal to practical reasoning and the claim that we are each the final judges in our own practical reasoning. When deciding whether I have a reason to X, it is ultimately up to me whether the consequences of X provide reasons for or against doing X. This view is voluntarist (our reasons are created by decisions of the will). 

Think of the ability to satisfy questioning in the practical reasoning process as a type of authority – only an authority on reasons could claim that X is a reason because the authority endorses X. For my own views, I am the only authority I recognize. No other person, thing, or entity could satisfy my question “Why should I do X?” with a “because I endorse X” answer. I argue that other people are capable of taking the same stance (though obviously not required to, because that would contradict my views). 

Recall the hypothetical claim – If “M” exists, then everyone has reasons to act morally. Understood in the context of practical reasoning, the hypothetical claim holds that if “M” exists, there is an authority outside of us that can satisfy the “Why should I do X?” question. I deny the possibility of this claim being true. No matter what might exist, nothing outside of me could constitute an authority over my practical reasoning process. This claim can be presented as a challenge to moral realists: make up any entities you want, and you still won’t be able to explain why that entity has authority over my practical reasoning process. 

Here’s an easy example of how the Argument from Authority would respond to a particular realist view. Consider Divine Command Theory, which claims that what is right just is what God commands. So, God has an authority over us to determine what we should do. Like arguments against moral realism, arguments against Divine Command Theory can take the metaphysical route: for reasons XYZ, it is unlikely that God exists. My argument avoids the metaphysical debate about whether God exists and focuses on the hypothetical: “If God exists, then what God commands is right”. I deny the hypothetical claim – even if God exists, God’s commands do not have any authority over my practical reasoning process. If God wants my evaluative views to match God’s views, God needs to cheat by threatening hell or promising reward. Without threats/rewards, God’s opinion on what is valuable is just another opinion like my own.  


Sharon Street’s Darwinian Dilemma

Street argues against value realism, or the view that values are an objective part of the world (values exist independently of human desires). Most moral realists are value realists, though ambitious versions of moral constructivism can have moral realist traits while rejecting value realism.

The Darwinian Dilemma asks value realists: is there a relationship between the evolutionary forces that guided our evaluative views and the values that exist independently of those views? To set up this dilemma, Street only assumes that our evaluative views have been influenced by evolutionary forces, an assumption I believe is undeniable. Near universal evaluative views (murder is wrong, don’t steal from others, cooperate with neighbors, etc.) have a clear advantage over the alternatives (murder is obligatory, steal everything, etc.). Alternative beliefs may have appeared throughout our history, but it’s not surprising that people with cooperative beliefs had greater reproductive success than those who isolated themselves from others.

Value realists have only two options in the dilemma: either there is a relationship between evolutionary forces and evaluative truths, or there is no relationship. If there is no relationship, it is highly unlikely that our evaluative views have evolved to correctly identify objective values. Evaluative views persist if they are compatible with survival and reproductive success, and it is a matter of chance whether those views mirror objective evaluative truths. Given the vast number of alternative evaluative views, the likelihood of evolutionary forces guiding us to objective evaluative truths with no relationship to those truths is similar to the likelihood of hitting a target when shooting in the dark with no knowledge of the target’s location.

Value realists who choose the other horn of the dilemma (there is a relationship) face a different problem. These realists accept that our evaluative views have been influenced by natural selection, but believe that value realism is part of the evolutionary explanation. Perhaps people with the ability to track objective values had a reproductive advantage over those without the ability. Street considers this approach to be a scientific approach (using a theory to best explain observable data) and critiques it as such. By scientific standards, the tracking theory fares poorly compared to the standard evolutionary story. Both theories agree that our evaluative views evolved because they aided reproductive success and survival. The tracking theory claims that our evaluative views aid survival because they track truths (and learning truths aids survival), while the evolutionary story claims that our evaluative views aid survival because they cause people to respond to their circumstances in evolutionarily advantageous ways (those who avoid danger because they value survival are better able to survive, those who refrain from harming others are better able to avoid harm from others).

The evolutionary story offers a far simpler explanation for the evolution of our evaluative views and does not unnecessarily add unobservable entities to the equation. The tracking theory, as well as any other evolutionary story that includes objective values, cannot offer a comparably simple explanation and requires an unnecessary assumption about objective values. Given these choices, the evolutionary story meets scientific standards far better than the tracking theory.

Street discusses different replies a realist could offer (there are plenty of replies since realism is the predominant view in metaethics). One of these replies claims that rational reflection plays an important role in the link between evaluative views and objective values. Perhaps our initial views are guided by evolution (and have no direct link to objective values), but we are able to use rational reflection to adjust those views to match the objective values. Street offers a decisive counter – rational reflection is not an emotionless tool (where we can coolly reflect and judge our evaluative views). When we reflect, we must have some starting point to judge our options. Without initial values, we cannot make any comparisons between our options. Rather, when we reflect, we must already have certain evaluative views that we use to judge other views (such as using the view that all humans are equal to judge the view that Americans have natural rights that foreigners do not). If rational reflection leads us to objective values, then the values used by rational reflection must already mirror those values, bringing up the original problem of the dilemma.

I admire Street’s argument because it clearly frames the debate: value realists can only take one of two stances in response to the evolutionary story, and both stances entail major problems. Street’s argument is empirical (focusing on the likelihood of value realist theories being the correct explanation of observable data), and I believe it adds nicely to other anti-realist arguments, such as the argument from queerness (a metaphysical argument about the nature of values) and my own argument from authority (a practical argument appealing to our decision making process).


Storytelling as a Moral Fictionalist

Moral fictionalists pretend there are certain moral rules they have to live by, even though they believe there are no such rules (fictionalists are error theorists). Fictionalists pretend there are moral rules because there are benefits to doing so. Fictionalism initially sounds a bit cheap – error theorists can’t refer to moral laws (a perceived weakness of the view), so some of them pretend rather than believe.

The best interpretation of moral fictionalism (derived from Richard Joyce’s work) is selective. Fictionalists accept the moral rules that benefit their life, but have no reason to live by moral rules that are harmful. They don’t need to commit to an entire moral system.

Two reasons fictionalism is useful: 1. It’s easier to talk about normative issues using moral language, and 2. It’s easier to regulate passing desires. The second reason is more interesting to me. Joyce describes a good example (it’s easier to exercise if you pretend you have to reach your goal), but it works in a wide range of cases. I use moral thoughts like “I have to get this draft done by tonight” all the time.

Those particular examples aren’t standard moral cases, but the same reasoning applies. I’d rather say “I must not murder” than “It is irrational for me to murder”.

An obvious criticism of fictionalism is that it relies on an egoistic justification for pretending to act morally, since it depends on personal benefits. But fictionalism is not committed to an egoistic account of practical reasoning (the way we determine what we should do, or what is rational for us to do). I have no issue making moral claims on myself for the benefit of others (like “I have to help them move”).

Storytelling as a moral fictionalist loosely describes the process of being a fictionalist. The story you invent (moral language you use) is up to you, since fictionalists can reject any moral language other people use that they do not agree with.



From Political Philosophy to an Argument for Moral Error Theory

Consider this political view: governments should not restrict their citizens from pursuing a vast number of different, beneficial causes. 

Few people would deny the previous claim (they just disagree on particular causes they think should be restricted). Society, as a whole, is better off when people are free to pursue different goals from each other. This includes morally important causes: animal protection, food donations, emergency relief funds, etc.

Imagine that Marcus volunteers at an animal shelter due to his deep concern for animals. He is a vegetarian because he thinks animals are too valuable to kill for food, and it is morally important to Marcus to be a vegetarian. Lucy, a volunteer at a food bank, often provides meat as food to people who need it. While Lucy likes animals too, she thinks it is far more important to take care of people so she has no problem with eating meat.

Marcus and Lucy’s case is similar to the standard argument from disagreement used in moral error theory. As Mackie puts it, moral error theory offers a better explanation for these cases (different views and backgrounds lead to disagreement) than moral realism (at least one side is imperfectly perceiving moral truths). The standard reply to this argument is that a lack of agreement on an answer does not prove there is no objective answer. This reply is necessary: for moral realism, there has to be an objective answer even if it is too vague for us to reach.

Consider the political view’s implications again: when talking about society as a whole, we easily endorse people pursuing different causes because they have different values. Applied to Marcus and Lucy’s case, we shouldn’t criticize either for their views even though they are incompatible with one another. However, if the moral realist is right, either Marcus or Lucy is wrong about whether people have a moral obligation not to eat animals. Further, one of them might be obligated to choose a different lifestyle (perhaps Marcus could help people a lot more by leaving the shelter, or Lucy is aiding the unjustified murder of animals).

The easiest way to consistently endorse both Marcus and Lucy’s lifestyles is to be a value pluralist (there are different intrinsic values that do not derive their worth solely from independent ends). Both people and animals are valuable for different reasons, so it is easy to empathize with someone who heavily commits to one of those values.

The easiest way to be a value pluralist, especially in the context of this case, is to be a moral error theorist. A moral error theorist denies that there are objective moral values that obligate us toward them. If values are solely subjective, it is easy to explain why some people pursue one cause over the other. It is even easier to explain why we endorse both of their moral characters despite them having contradictory views, since neither is “right” and we value both of their causes.

Of course a moral realist could offer a different interpretation of the case. Perhaps Marcus and Lucy both are justified in living good lifestyles and are not obligated to do the “best” option. Whether eating animals is justified is vague because what is important is promoting the good, and animals have a vague instrumental relation to that good.

My argument isn’t that the case (or value pluralism) proves that moral error theory is correct. Like Mackie, I am arguing only that moral error theory offers a more plausible explanation of the case than moral realism.

-This argument is for people who are more comfortable with political philosophy than metaethics. I tried to work my way from an intuitive political principle to moral error theory because it helps some people start from a position they’re comfortable with. I’ve found that often when I talk about moral error theory, people have a hard time understanding the approach metaethicists take. So hopefully this works as a way to relate to a mostly obscure theory.



Comparing Ethics to Taste

My views on metaethics can be roughly summed up with the following four claims:

1. Moral error theory is correct. (Common moral discourse consistently asserts the existence of objective moral values, and those objective moral values do not exist)

2. Our subjective values still give us reasons to perform or refrain from certain actions.

3. We are capable, to some degree, of changing our subjective values.

4. It is often rational to have more subjective values because they make life more meaningful.

I like using personal taste (particularly for food) as a comparison to understand how this concept of ethics is supposed to work. The similarity is in form, not content — the reasons we have to maintain or change our subjective values are not the same as the reasons we have to maintain or change our tastes.

The similarity in form between values and taste lies in the process I advocate for each. We enjoy our food more if we enjoy how it tastes, so our tastes help us choose which foods to eat and which to avoid. However, we are capable to some extent of changing or developing our tastes. We may not like a food the first time we eat it but can develop a taste for it if we keep trying it. I’ve personally experienced this with a lot of foods I currently like — avocado, beer, wine, onions, peppers, etc.

When choosing what foods to eat, we have more options and a higher possibility of enjoying our food if we like many different tastes. Developing new tastes expands our options — when I started enjoying onions I was able to have more types of food and more flavors within foods I already liked.

Our tastes often change without our direction, as the way we taste food changes as we get older. But there are still times when, with some effort, we can come to like a new food we otherwise would not. When these opportunities are available, you should try to expand your tastes in order to get more enjoyment out of eating.

I believe our subjective values help us choose which actions to perform. Further, as with taste, I believe we are capable of changing or developing our values. When choosing what action to do, we have a greater number of meaningful options and a higher possibility of performing a meaningful action if we have many different subjective values. If you care about a lot of things, your actions will be more meaningful than they would be if you are apathetic and care about very little.

The process for changing subjective values is difficult to understand. I don’t know if it’s possible to value something solely through a decision — if I’m sitting on my couch and decide that rocks in Asia are very valuable, are they really valuable to me? If my actions do not change and I am not moved at all by the prospect of acquiring rocks from Asia, I do not seem to genuinely value those rocks.

The reason it is difficult or impossible to decide to value rocks in Asia is that our subjective values depend on our biology to some extent. Our biology makes it easy to value pleasure more than pain and to value our own species more than exotic insects. We are social animals with certain needs and chemical structures, and we cannot simply choose to change those facts. The same holds for our taste in food: our biology and the chemical structure of foods explain why sugar tastes sweet and salt does not. We cannot simply choose to have everything taste sweet regardless of what it is made of.

However, as with taste, there seem to be methods to expand your subjective values in ways that are not determined solely by your own biology. It is much easier to care about the educational needs of children if you are involved in the effort to improve education. By becoming involved, you enable yourself to care about something that you might have been unable to care about otherwise.

The reason I said the content of each differs despite their similar structure is that the value of tastes seems to depend on the pleasure they provide. It makes sense to enjoy more foods because it is good to enjoy things, so the expansion of tastes is instrumentally valuable in promoting the intrinsic value of pleasure. For subjective values, there is no independent intrinsic value from which all your subjective values derive their worth. If there were such a root value, there would be no real expansion of subjective values; we would just find new ways to achieve pleasure or some other psychological reaction. I use the term “meaning” to describe the motivation behind adopting new values because it captures the concept of value that I want. It is not, however, any particular psychological reaction.

The comparison of ethics to taste hopefully shows that we might have a reason to expand our subjective values when possible. I can keep trying dark beer in order to like the taste; I can get involved with the local community in order to really value helping those around me. Moral realists might think this comparison trivializes the importance of moral values, particularly if they hold an authoritative concept of moral law (Kantians/Divine Command Theory). I personally argue that there is no possible way for values to be more important (building objectivity into it adds nothing). However the comparison is really aimed at moral error theorists — I want to show that subjective values can do a lot of work, and we can maintain some useful moral concepts without committing to problematic objective values.


My Argument for Atheism (Directed towards Agnostics)

I’ve argued before that the problem of evil is unsolvable for a God who is omniscient, omnipotent, and omni-benevolent (http://fensel.net/2011/10/17/why-the-common-conception-of-god-is-impossible/ and further explained in http://fensel.net/2012/04/17/two-logical-notes-on-debates-about-god/).

To get around the problem of evil, a theist or agnostic might not be fully committed to all three omni traits. Maybe God is imperfect and would rather have us suffer from our own choices than create a perfect world. Or perhaps God is perfectly good but lacks the omnipotent power many theists believe in.

The question is then: what evidence is there for the existence of this type of God? I believe it is plainly obvious that no such evidence exists. I assume most theists believe there is some evidence, though that is a different debate than the one I want to focus on. For agnostics, the general stance is that there is no evidence for or against God. I argue that, if you accept this claim, atheism is the only logical conclusion.

To be clear, atheism is not committed to the view that there is a 0% chance of any type of God existing. We have imperfect minds and very limited knowledge of the universe. It is logically possible that some type of God exists (just not one that has the omni traits). All an atheist needs to argue is that the probability of God existing is very small, and that it is irrational to believe in such a small probability.

Take the agnostic’s starting position: there is no evidence for or against the existence of God. The first issue is that it is impossible to find evidence of a being’s non-existence when that being is not confined to a certain location (Zeus can be disproved by going to Mt. Olympus; God has no such home to be investigated). So, we should be looking for evidence of existence, not non-existence.

If no such evidence is found (as the agnostic starting position holds), the question is then what the chances are that the being exists. It is a huge mistake to think that, since there are two possibilities, there is a 50/50 chance of existence. Take any four made-up beings: Xenu, the Flying Spaghetti Monster, Ra, and Galactus. If we conclude from the lack of evidence that each has a 50/50 chance of existing, then there is over a 90% chance that at least one of them exists (the chance that none of them exists is .5 ^ 4 = .0625). But this is clearly wrong.
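A quick sketch confirms the arithmetic (the four beings and the 50/50 assignment are just the thought experiment’s assumptions):

```python
# Probability that at least one of four independent 50/50 beings exists.
p_none = 0.5 ** 4            # chance that none of the four exists
p_at_least_one = 1 - p_none  # chance that at least one exists

print(p_none)          # 0.0625
print(p_at_least_one)  # 0.9375, i.e. over 90%
```

The absurdity is that adding more made-up beings to the list only drives the “at least one exists” figure closer to certainty.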

The agnostic might just claim that there is no way to determine the probability of God existing. But we do not need to be able to define an exact percentage chance to have a judgment — all we need is a general ballpark for the probability.

Here’s a simple way to think of the probability of God existing. God is a possible entity that could or could not exist. Without any evidence that makes God more likely to exist than other possible entities, God has the same chance of existing as any other possible entity.

So, what are the chances for a possible entity actually existing? In the most basic form, the chances can be defined as: (total number of actual entities that exist)/(total number of possible entities that could exist). To see why this works, imagine somehow being able to see a complete list of all possible entities. You throw a dart and it hits the name “Entity X”. The odds of that entity existing can be determined by figuring out how many of the names on that list actually exist, and dividing that number by the total number of names on the list.

Although we clearly do not know the number of actual or possible entities, we can understand the ratio. The number of possible entities is at least close to infinite. The number of actual entities, while it may be much larger than we currently can comprehend, will not come close to the infinitely large number of possible entities. The ratio of actual to possible entities is thus incredibly, incredibly small.
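To make the ratio concrete, here is a toy calculation with made-up counts (both numbers are illustrative assumptions, not estimates from the argument; the only claim is that the denominator dwarfs the numerator):

```python
from fractions import Fraction

# Illustrative (made-up) counts: even a generous count of actual entities
# is swamped by the count of possible entities.
actual_entities = 10 ** 80     # a generous stand-in for "everything that exists"
possible_entities = 10 ** 200  # a stand-in for "at least close to infinite"

p_exists = Fraction(actual_entities, possible_entities)  # exact ratio
print(float(p_exists))  # 1e-120: incredibly, incredibly small
```

`Fraction` keeps the division exact no matter how large the counts get, which matters here since the toy numbers are far outside ordinary floating-point range.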

The argument for atheism can thus be summarized as:

1. There is no evidence for God’s existence.

2. Without evidence, God is no more likely to exist than any other possible entity.

3. The chance of a randomly selected possible entity existing is very, very small.

4. Therefore, the chance of God existing is very, very small.

Although this seems like a bold conclusion, it really is intuitive once you remove the unjustified “a lot of people believing in it makes it more likely to be true” intuition. Imagine a being that is like a jellyfish, has telepathic powers, and flies around bringing justice to criminals in some very distant galaxy. While this being is possible, the chances of it actually existing are incredibly small. Same for God — if you make up an entity without evidence, the chances of it actually existing are incredibly small.

Responses to two possible moves made by the agnostic:

1. Distinguish between “logically possible” and “compatible with the laws of our universe”. Perhaps God is not just logically possible, but falls into the more exclusive category “compatible with the laws of our universe”. Does this make the chances of God existing significantly higher?

-While it is a more exclusive category, “compatible with the laws of our universe” is still an almost infinitely large group. So, even if God falls under this relatively smaller category, the chances of God existing are still very small.

2. The “multiple paths up the same mountain” approach. What if God is not just one possible entity, but a multitude of possibilities? That is, God exists if any one of these thousand versions of God exists.

-Making the term “God” denote a multitude of possible entities multiplies the chances of God existing by the number of possible entities “God” covers. But even if you multiply by thousands or even millions, the number is still incredibly small; to think otherwise vastly underestimates the number of possible entities that could exist.
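The reply can be sketched numerically (the per-entity probability and the million-variant count are both illustrative assumptions):

```python
# Even letting "God" cover a million possible variants, the probability
# barely moves relative to the space of possible entities.
p_single = 10.0 ** -120             # illustrative chance for one possible entity
p_any_variant = p_single * 10 ** 6  # "God" as any of a million variants

print(p_any_variant < 1e-100)  # True: still incredibly small
```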


The Basic Problem of Direction of Fit

The problem of “direction of fit” concerns the different relationships beliefs and desires have to the actual world. The justification for a belief depends on its correspondence to the real world — if a belief does not accurately reflect the real world, the belief has a defect. Conversely, the justification for a desire does not seem to depend on correspondence to the real world — if I desire a certain state of affairs that is not realized in the real world, I take the view that there is something less than ideal about the real world, not that there was a defect in my desire.

This distinction between desires and beliefs is Humean in nature, and I generally accept the Humean account. Beliefs cannot motivate without a corresponding desire (and thus information about the state of the world cannot motivate without a corresponding desire).

Given a certain desire, I believe the point is correct: failure to actualize the desire does not show that it was wrong to have the desire in the first place. The point I want to elaborate on is how we choose between desires, and whether this process has any “correspondence to the real world” elements in it.

Consider the desire to ride dragons through the air. In most cases, there is no reason to abandon this desire — it is fun to imagine riding dragons, and we could even modify the desire to “if there were dragons in the world, I would want to ride them”. However, there is a problem if this desire is one of your fundamental (or most important) desires, one that you would consider character-defining. Imagine waking up each day with the hope of riding a dragon, and each night going to bed devastated at your failure to do so. In such a case, I argue that there is a defect in the desire that is directly relevant to its failure to correspond to the real world.

The crucial point is whether desires are “given” in the context. If I cannot choose what I desire (as many people believe), then there is no point in calling the desire to ride dragons defective. It is unfortunate that my desires cannot be realized in the real world, but there is nothing I can do about it.

On the other hand, if I can choose what I desire, perhaps it is more rational to choose desires that could be realized. Choosing a desire can mean multiple things: creating a desire (I now desire to eat salad where I previously did not), creating a value (I value eating salad, so I want to be motivated to eat salad even if I do not desire it), or identifying with a desire (I have both a desire to eat salad and a desire not to; I identify with the desire to eat salad because it is what I, on reflection, believe I should do). I do not believe the first sense is possible; we do not seem to have the power to desire whatever we want. I argue that the latter two senses are possible: we can choose what we value, and we can choose a desire to identify with among our existing desires.

My claim is that it is more rational to choose to value (or identify with) a desire that has the possibility of being realized in the actual world. Desires that can be realized have a capacity for value that is directly relevant to your choice. When choosing what to value, the capacity for realization is one consideration (among others) that is directly relevant to determining how rational it is to adopt a certain value.

-The argument for my claim depends on a much longer argument I’ve previously discussed. In effect, the argument is: when choosing an entire system of values, the most rational thing to do is choose the system of values that has the highest capacity for value realization. This follows from the claim that the rational perspective is one that necessarily values what it values, and thus the rationally optimal life is one that can achieve the highest amount of this value. The capacity for value is determined by a combination of your individual nature, which values you are capable of adopting, and the state of the actual world. In normal language, the argument is: life has more meaning when you care about things, and the more you care about the more meaning you can find in life. This relationship is determined by what you naturally care about, what you are capable of caring about, and whether the actual world can be changed to fit what you care about.