Friday, October 20, 2017

Moral Theory V: The Greatest Good

Moral Theory:
I. Intuitionism
II. Authoritarianism
III. Divine Command and Attributes
IV. Ethical Egoism
V. Utilitarianism
VI. Virtue and the Golden Rule

Negative Morality:
Divine Hierarchy

Note: I did not explain good very well in this discussion, so a year later I wrote a better explanation. You can read it here.


So far, as we have examined moral systems in our search for objective morality, we have gotten lucky and found contradictions. However, there may be many moral systems that have no internal contradictions. If we want to be able to compare them objectively, we will have to approach the question from a different angle. Today, we will see what we can find out by starting with observations and building upon them with logic.

So far, we have asked “what is morality?” But when we look closer, we can see that this question splits into two: “what is good?” and “what should we do?” The second question appears to depend on what “good” is, so let’s set it aside for later and consider the first one.

To begin, let’s step back even further and ask why we have moral systems in the first place. Why do we care about how we and others choose to live our lives? The answer is that we want to create a state of affairs that satisfies us, one about which we can say, “this is good.” Although each of us has different ideas of what “good” is, we all have something in common: we want to be satisfied by the way things turn out and by the actions we and others took to get there.

Perhaps, then, we are blinded by the particular actions we think of as good, like helping people or being rewarded for hard work, and are missing the real purpose of morality: to be satisfied with what we and others have chosen to do, and with the results that have come from it. If satisfaction is the goal of every system of morality, then satisfaction is what “good” ultimately refers to. The answer to “what is good?” is satisfaction.

Knowing this is not enough, though. We are still left with a host of competing moral systems, which differ in whose satisfaction matters and which types of satisfaction are emphasized. We need a way to take morality from the subjective view to the objective. Fifty years ago, the philosopher John Rawls suggested we view the world from behind a Veil of Ignorance, imagining that we do not know who in the world we are. From this objective perspective, all people are equal, and no one’s need for satisfaction deserves more weight than anyone else’s. The objective good, then, is to increase the total amount of satisfaction experienced by humanity, and by all creatures capable of experiencing it.

This leads us to the moral theory of Utilitarianism, the idea that the best, most moral actions are those that create the greatest good for the greatest number of people. As John Stuart Mill wrote 150 years ago, “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.” Mill called it “happiness,” and I call it “satisfaction,” but in context we mean essentially the same thing.

Utilitarianism brings up a lot of questions whose answers are not immediately clear. For one, how do we deal with the fact that sometimes people get satisfaction out of hurting others? Do we say that some kinds of satisfaction are good and some are bad? That would seem to undermine the whole argument. However, we don’t have to say that; Utilitarianism takes care of it naturally. All satisfaction is good, but it must be totaled up over all people. If someone takes satisfaction at another’s expense, it is usually worse overall than if they had left each other alone, and always worse than if they had worked together to increase both of their satisfaction.
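To make the totaling concrete, here is a minimal sketch of the arithmetic. The satisfaction values are purely invented for illustration; nothing hangs on the particular numbers, only on how they add up:

```python
# A toy illustration of totaling satisfaction over everyone affected.
# All names and numbers here are invented for illustration only.

def total_satisfaction(outcome):
    """Sum satisfaction over every person affected by an outcome."""
    return sum(outcome.values())

# One person takes satisfaction at another's expense.
exploitation = {"exploiter": +5, "victim": -8}

# The two leave each other alone.
separate = {"exploiter": 0, "victim": 0}

# The two work together to increase both of their satisfaction.
cooperation = {"exploiter": +4, "victim": +4}

for name, outcome in [("exploitation", exploitation),
                      ("leaving alone", separate),
                      ("cooperation", cooperation)]:
    print(f"{name}: {total_satisfaction(outcome):+d}")
# exploitation: -3  <  leaving alone: +0  <  cooperation: +8
```

On any such totaling, exploitation comes out worse than leaving each other alone, and both come out worse than cooperation, which is exactly the ordering the argument needs.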

But what if there is no possible way for everyone to be satisfied? What if, no matter what anyone does, someone will have to suffer? There is a classic counterargument to Utilitarianism that goes like this: suppose there are five patients in a hospital who will each die without an organ transplant. In the waiting room sits a perfectly healthy visitor. Wouldn’t Utilitarianism say that it is better to kill the one person in the waiting room and take their organs than to let the five patients die?

The answer is no, because killing someone in the waiting room has consequences that reach far beyond the six people in the example. If people can be killed for their organs, it creates what I call a Shadow of Fear: a stifling blanket over everyone in society, each of whom must now fear being next on the operating table. The satisfaction lost by everyone living under a Shadow of Fear outweighs the satisfaction the five dying patients would gain from continued life.
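The structure of this weighing can be sketched with the same kind of totaling. Again, the numbers below are invented purely to show the shape of the argument, not to measure anything real:

```python
# A rough sketch of the hospital weighing, with invented numbers.
# The point is the structure: a small loss spread across a whole
# society can outweigh a large gain concentrated in a few people.

population = 1_000_000          # everyone living under the Shadow of Fear
fear_loss_per_person = 0.1      # small satisfaction loss per person
life_gain_per_patient = 10_000  # satisfaction of one patient's continued life
patients_saved = 5

shadow_cost = population * fear_loss_per_person           # 100,000.0
transplant_gain = patients_saved * life_gain_per_patient  # 50,000

print(shadow_cost > transplant_gain)  # True: the Shadow outweighs the gain
```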

What if the doctors cover it up? What if they claim the person in the waiting room died of a heart attack, so that using their organs was justified? This would seem to eliminate the Shadow of Fear, bringing Utilitarianism back under fire. However, such a lie is unstable. If the truth came to light, it would create a stronger Shadow of Fear than honesty would have from the start. Not only would there be a Shadow of Fear about being killed for one’s organs, but there would be a further Shadow of Fear about the deception: people would worry that other cover-ups exist and that they might be harmed or killed for all kinds of unknown reasons. The mere risk of such a shadow outweighs the satisfaction gained by the five saved patients, and Utilitarianism prevails again.

Another common counterargument concerns sacrificing oneself for the greater good. Hypothetically, if there were some kind of monster who would gain tremendous satisfaction from eating you, so much that it would outweigh all the satisfaction you would experience in the entire rest of your life, wouldn’t Utilitarianism say that it is best for you to feed yourself to that monster?

Maybe, but maybe not. Humans are extremely bad at predicting all possible futures, so in almost every case we would have no way to know whether the rest of our life would bring us more satisfaction than the monster would gain by eating us. But there might be some extreme circumstance in which we would know, and that seems troubling.

However, we actually make this decision all the time; it’s called eating meat. We sacrifice the futures of animals for our own pleasure, even raising them from birth for the purpose of eating them. The satisfaction we get from eating a meal is certainly less than the satisfaction the animals would have experienced over the rest of their lives, yet we feel the trade is justified. So what, then, if Utilitarianism tells us there are possible circumstances in which it would be better to feed ourselves to a monster? What license do we have to complain about the perceived speck in Utilitarianism’s eye when we have this plank in our own?

What about intentions? Utilitarianism is a consequentialist theory, which means the goodness of an action is determined by its consequences, not by the intentions of the person who performed it. This leads to situations where people with malicious intent end up doing good things, and people with good intent end up doing bad things. At a glance, Utilitarianism seems to say that the well-intentioned person who caused harm deserves condemnation and the ill-intentioned person who did good deserves praise. But this once again ignores the broader context. People with good intentions are more likely to do good in the future, and people with bad intentions are more likely to do bad. So the desire and effort to do good can be as praiseworthy as good consequences, or even more so. It is also worth noting that having good intentions is itself more satisfying than having bad ones.

There are still many questions about Utilitarianism that I don’t have the answer to. What level of satisfaction, if any, is so low that it is equal to nonexistence, and are there states of living that are worse than death? Would it be better to have billions of people in near-death misery, or millions in luxury? What about animals, whose brains are not powerful enough to have moral intuitions but which can feel pleasure and pain? These are interesting puzzles for philosophers to debate and solve. Despite the uncertainty, I am convinced that Utilitarianism is the objective foundation of morality.

There is one major problem left, though. Utilitarianism is only half the answer: it tells us what “good” is, but it does not tell us how to act. We cannot be obligated to always and only do whatever is best, because it is impossible for us to know anything close to the amount of information such a decision would require. Beyond that, Utilitarianism does not draw a clear line between good and bad, nor give us instructions for increasing goodness. So how can we hope to live good, moral lives? The answer is that Utilitarianism is not exclusive. It allows, and even encourages, other moral systems, including those we have already talked about. Utilitarianism gives us a way to know when to follow rules, when to trust our intuitions, and when to serve ourselves, and it provides a foundation for God’s nature and commands. Utilitarianism does not tell us what to do, but it gives us a measure by which to compare prescriptive moral systems against each other. Next time, we will look at the final two moral systems in our quest to answer the final question, “how should we live?”
