Why Moral Requirements Aren't Reducible to Desires
Taking a closer look at the issues facing a "desire reductionist" view of moral requirements.
To begin
I’ve recently encountered a rather odd view of moral requirements and reasons, one that reduces them to the desires of an individual (I’ll refer to this view as the reductionist view).
For instance, under this view, if I were to say “Jane has a moral requirement to save drowning children”, it would just mean “I have a desire that Jane save drowning children”. I’ve also seen proponents of this view reduce reasons to desires in the same way.
I have many issues with this view. For one thing, it saps requirements and reasons of any normativity (their essential feature!). For another, it seems to have a hard time accounting for why it’s improper to say that non-agents (like dolphins and babies) have moral requirements. Similarly, it has a hard time accounting for why ought would imply can. I discuss all three of these, and more, below.
Sapped of Normativity
One implausible entailment of this reductionist view is that our own desires, on their own, can determine that someone else has a moral requirement to do something. Brian’s having a desire that Jane save drowning children would, by itself, determine that Jane has a moral requirement to save drowning children.
The issue is that such a requirement is empty of authority. There is no normative force behind it. Whilst it’s very plausible that our own desires can ground reasons for us to do things (we can appreciate some oomph behind them), it’s not so plausible to think these same desires can, on their own, give moral requirements to others. For instance, suppose someone has a desire that I choose a certain flavour of (vegan) ice cream. If we suppose that I don’t care to do what they want me to do (and there are no other considerations on the table), it doesn’t seem that I’ve got any reason whatsoever, nor any requirement whatsoever, to do what they want me to do. Such a requirement would be completely isolated from me.
I think it ultimately comes down to the view that I endorse: things like reasons and requirements always have some normative weight to them. That is, they bear down on what I should be doing in a way that is legitimately authoritative. Things like other people’s desires (at least on their own) carry no weight regarding what I ought to be doing; their desires lack legitimate authority over me. It’s exactly like how cultural conventions and rules (at least on their own) have no legitimate authority over anyone. They give us no reasons, in and of themselves, to do anything. What’s needed are some further considerations before they can give us any reasons, such as our own desires (regarding what we individually have reasons to do) as well as things like objective moral requirements. For example, we may have a moral requirement to show respect to others in accordance with their culture (so long as it doesn’t contradict other elements of morality).
Non-Agents & Moral Requirements
Presumably we want non-agents to help others in need. For instance, if I saw a child drowning, and there was a dolphin nearby, I would want the dolphin to help the child. However, presuming dolphins aren’t moral agents, it would be a mistake to say that dolphins have any sort of moral requirement (or reason) to save the child. Only agents ought to respond to reasons and requirements. This poses a problem for those who wish to say requirements are the same sorts of things as desires.
If talk of moral requirements and reasons is equivalent to talk about our desires, then it would not be strange or mistaken to declare that non-agents have things like moral requirements to save drowning children. Clearly, though, this would be a mistake: non-agents don’t have moral requirements. But (and here is the crux) in saying non-agents don’t have moral requirements, we aren’t suggesting that we lack any desires for them to act in certain ways. The reductionist view doesn’t seem to be able to make sense of this.
Ought Implies Can¹
We can sometimes desire that others act in ways that are impossible for them. We might want someone to fly up to the top of a burning building to save those in need, even though they possess no ability to fly. However, if our talk of moral requirements is just equivalent to talk about our desires, then it would not be mistaken to say that this person has a moral requirement to fly up and rescue these people. But, presumably, we would not say this individual has any such moral requirement. And (here is the crux) saying that this person has no such requirement is not to suggest that we have no desire for them to help those in need. Once more, the reductionist view doesn’t seem to be able to make sense of this.
An Implication of Accurate Requirement Talk
I think it’s commonsensical to believe that if someone says “you have a moral requirement to X”, and what they say is true, then your own statement “I have a moral requirement to X” is also true.
For instance, suppose someone tells me that I have a moral requirement to save children, and what they say is true. This seems to imply that my own statement “I have a moral requirement to save children” is also true. However, this would be mistaken under the reductionist view.
Under the reductionist view, if someone says that I have a moral requirement to save children, all they mean is that they have a desire that I save children. However, this need not imply that I have a desire to save children. In fact, under the reductionist view, when I say that “I have a moral requirement to save children”, I’m not at all talking about what other people desire–I’m talking about what I desire.
Put another way:
We take it that if Jane says “you have a moral requirement to save children”, and what she says is true, then my own statement “I have a moral requirement to save children” is also true. However, under the reductionist view, Jane’s statement being true would not imply that my statement is true. Jane’s statement, which reduces to the claim that she has a desire for me to save children, can be true without my having any desire to save children. Thus, the reductionist view doesn’t seem to be able to make sense of this phenomenon.
Disagreement About Requirements
On a related note, the reductionist view doesn’t seem to be able to account for disagreements about what moral requirements we have.
Jane: “You have a moral requirement to save children”.
Me: “I don’t have a moral requirement to save children”.
Under the reductionist view, there would be no disagreement here, as all Jane is saying is that she has a desire that I save children, whilst I’m just saying that I have no desire to save children. These two claims are perfectly compatible. But clearly there seems to be a disagreement here.²
Two Objections
1st Objection: Moral Requirement Talk & Desire Talk
It might be objected that the reductionist view only posits that talk of moral requirements is reducible to talk about our desires, and not that all talk of our desires amounts to talk of moral requirements. Thus, in saying that I want someone to do something, I needn’t be saying that they’re morally required to do it. In some cases I am saying this, but not in every case.
The issue I have with this objection is that it raises a seemingly insuperable demarcation problem.³ How could we demarcate desire talk of the moral-requirement kind from desire talk of the non-moral-requirement kind, when the two are essentially the same thing? To see this point, consider the following:
Talk of moral requirements is just equivalent to talk about one’s own desires, and as a result, there is no difference in meaning between moral requirement talk and desire talk. Hence the demarcation problem for those who hold the reductionist view and wish to maintain that some desire talk is not equivalent to moral requirement talk, namely talk about us desiring that non-agents perform certain actions (or inactions), and that others complete tasks they cannot accomplish.
Consider an analogy. Suppose we believe that science is just the practice of observing the physical world. This creates a demarcation problem insofar as practices like ghost hunting don’t seem to be scientific (they’re pseudo-scientific), yet ghost hunting would be counted as scientific under this view. The view has no way of separating science from pseudo-science.
2nd Objection: Only Absurd on the Non-Reductionist View
It might be objected that these entailments are only problematic if we assume that moral requirements aren’t just desires. After all, it’s not absurd to say that you have a desire for a non-agent to perform some action, or for someone to do something they can’t, and this is just what the desire reductionist view amounts to in the end (or so the objection goes).
But this objection fails to recognise the importance of explananda: the things that need explaining. Regarding moral requirements, there seem to be a variety of things that need to be explained (that is, accounted for by a theory) or else explained away. For example, one explanandum of moral requirements (or perhaps of morality in general) is that non-agents don’t have moral requirements. Another would be that agents aren’t morally required to do something if they can’t do it (ought implies can). And so on.
If what I’ve said thus far is accurate, the reductionist view of requirements and reasons is unable to explain these explananda. Thus, the view will have to argue that they’re not actually explananda, that they’re not something that needs explaining. This is what proponents of desire reductionism have to provide a plausible story about. Though, given how plausible both of these are as explananda (amongst others), I don’t envy the position of these reductionists.
A nice analogy was presented to me that I feel captures this idea nicely. Consider a case in which you look up at a clock that says it’s 7:30 am. You come to believe that it’s 7:30 am. As a matter of fact, it really is 7:30 am; however, the clock is broken. A broken clock is still right twice a day, and you just got lucky.
This is one example of what’s referred to as a Gettier case (named after the philosopher Edmund Gettier). These cases aim to provide counterexamples to the view that knowledge is justified true belief. In the case of the broken clock, we came to have a justified true belief about the time, yet we’re not willing to say that we knew what the time was. Suppose someone responded to this Gettier case by asserting that it’s not absurd to think that we knew the time, since having knowledge just consists in having justified true beliefs, and we have that. I don’t think I’m alone in thinking that such a response would be unpersuasive.
In the end
I think there are some serious challenges facing “desire reductionist” views of moral requirements. From what I can tell, the challenges I’ve laid out here seem to be insuperable, short of rejecting what I’ve identified as explananda.⁴ In brief, I take these explananda to be…
The Normative Explanandum: Requirements (and reasons) are essentially normative.
→ They’re the sorts of things that have a normative weight, i.e. they count in favour of doing certain things.
The Agent Explanandum: Moral requirements don’t apply to non-agents.
The Ought Implies Can Explanandum: Agents can’t be morally required to do something if they cannot perform that act.
The Implication Explanandum: If A says that B (or anyone else) has a moral requirement to X, and what A says is true, then this implies that B’s own statement that they have this moral requirement is also true.
The Disagreement Explanandum: There is disagreement about what moral requirements we have.
1. This problem was highlighted to me by a friend of mine.
2. I know there will be some who maintain that, actually, there is still a disagreement going on, even if no contradictory statements are being made. However, I think such views are mistaken. It just doesn’t make much sense to me why both of these individuals would be disagreeing if all they take themselves to be talking about is their own desires. Perhaps they’re talking past one another? That would be an interesting idea; however, it would imply that they don’t take themselves to be just talking about their desires. I feel this point is also relevant to the An Implication of Accurate Requirement Talk section.
3. This just means there is a problem with separating one thing from another.
4. For instance, they might reject ought implies can as something that needs to be explained, arguing that ought does not actually imply can.

A response:
https://open.substack.com/pub/travistalks/p/on-my-view-about-moral-requirements
> For instance, under this view, if I were to say “Jane has a moral requirement to save drowning children”, it would just mean “I have a desire that Jane save drowning children”. <
The rest of your article rests on this "identification" of what the reductionist means. Since you never give us an example of anyone who holds this position, I can't really say whether that's an accurate statement of their position. ("They" who?!?) But for anyone who does hold that position precisely, I think you do a fine job of refuting it.
I **can** say that it's not an accurate statement of my position, even tho' I would say that moral requirements are based on desires (moderated by other factors).
Also, you mention in your "1st Objection" subsection the possibility that someone might say "in saying that I want someone to do something, I needn’t be saying that they’re morally required to do it." I find it hard to believe that anyone would say anything different! "I'd like you to marry me" == "You are morally obligated to marry me"? Only villains could hold such a belief. Possibly even only pantomime villains. It seems to me that you could have just used that example and saved yourself a lot of work!