A problem for moral realism

“There are no objective values.” That is the proclamation with which J.L. Mackie (1917-1981) opened his Ethics: Inventing Right and Wrong (1977). The notion of what it would mean for there to be “objective values” is more than a little slippery, but one interpretation would be something like this: if there are objective values, then over some domain of important decisions there is a right way to decide, and someone will decide that way unless vis rationality is somehow deficient. The notion of objective values is a seductive one for anyone who has a way they want the social world ordered: my way is not just my preference, for the universe and reason are on my side!

Extrapolative reflection on a thought experiment offered by my one-time teacher Robert Nozick (1938-2002) gives me a small argument in favor of thinking that Mackie was right.

The experience machine is part of a thought experiment in Robert Nozick’s famous 1974 book Anarchy, State, and Utopia (pp. 42-3).

Suppose there were an experience machine that would give you any experience you desired. Superduper neuropsychologists could stimulate your brain so that you would think and feel you were writing a great novel, or making a friend, or reading an interesting book. All the time you would be floating in a tank, with electrodes attached to your brain. Should you plug into this machine for life, preprogramming your life’s experiences? If you are worried about missing out on desirable experiences, we can suppose that business enterprises have researched thoroughly the lives of many others. You can pick and choose from their large library or smorgasbord of such experiences, selecting your life’s experiences for, say, the next two years. After two years have passed, you will have ten minutes or ten hours out of the tank, to select the experiences of your next two years. Of course, while in the tank you won’t know that you’re there; you’ll think it’s all actually happening. Others can plug in to have the experiences they want, so there’s no need to stay unplugged to serve them. (Ignore problems such as who would service the machines if everyone plugs in.) Would you plug in?

And Nozick thinks that the answer to this question is that of course you wouldn’t. He thinks that three objections are salient: (1) we want to actually do certain things rather than just have the experience of doing them, (2) we want to be a certain kind of person, and there is no such thing as being any kind of person if all you are having is pre-programmed experiences, and (3) plugging into the machine limits us to only a kind of “man-made reality.” Nozick concludes that his thought experiment shows us that we must care about things other than our experiences. Plugging into the machine for life would be “a kind of suicide” and therefore something to be avoided. (Actually, it would be a kind of bioexodos, but let that slide for now.) If Nozick is right about that, we might conclude, inter alia, that theories of value that reduce all value to states of mind (like hedonism) must be false.

Now there is a slight problem, perhaps, with Nozick’s thought experiment: if you’re really a hedonist, you shouldn’t seek out those experiences you most want so much as those experiences you would most enjoy, the two classes of experiences not being necessarily the same unless you have some sort of omniscience with respect to those very experiences and your likely reactions to them. But we can fix this minor problem with an emendation to Nozick’s experiment. In place of being handed a menu of experiences to choose from and then experience, you would be given only the knowledge that the experiences you have will be the ones you would happen to find optimally enjoyable. (What they are might turn out to be something of a surprise, even to you!) Call this improved version of the Experience Machine the Hedonic Machine. Its origins are a little harder to specify than those of the Experience Machine, but that would hardly make them unimaginable. Perhaps Eliezer Yudkowsky and his ever-clever friends will finally manage to produce a Friendly Superhuman Artificial Intelligence that will in turn build us a Hedonic Machine.

The Hedonic Machine so conceived wouldn’t be vulnerable to Nozick’s third complaint because, being the product of superhuman rather than merely human intelligence, the experiences it might provide would transcend those conceived of, or even conceivable, by humans. But it might still be objectionable on the first two grounds that Nozick cites. What one might experience in the Hedonic Machine still wouldn’t be “real,” and if you were plugged into it you wouldn’t “really be” any kind of person. (You might be having the experience of fighting and slaying terrible monsters in the machine, but that wouldn’t make you “really” tough or courageous or whatever.) Probably most people would be Non-Plugger-Inners.

But you know what? If I had such a machine available (and so did everyone else), I would plug right in for the rest of my life. I would be a Plugger-Inner, and I’m sure at least a few other people would be as well.

Does the existence of both Non-Plugger-Inners and Plugger-Inners represent a threat to the notion of objective values? One possible view would be that it doesn’t. After all, even the most thoroughgoing moral realist doesn’t think there’s a right answer about everything. Some people (myself included) like to eat raw oysters. Others are repelled at the thought of eating them. But no one thinks there’s an objective to-be-eatenness or to-be-avoidedness property about raw oysters that determines the right answer to the question of whether to eat them or not. Surely there are some areas objective values do not reach.

But the analogy does not plausibly extend to the Hedonic Machine case. Here we are speaking of a case that matters crucially to how a decisionmaker will spend the rest of vis life. The Non-Plugger-Inner (if a moral realist) thinks the Plugger-Inner is making a horrifying mistake, “a kind of suicide.” And imagine the greater horror at the suggestion that people should be compelled (for their own, vastly greater, good) to be plugged into Hedonic Machines. Reversing the matter, a Plugger-Inner (if a moral realist) would think that the Non-Plugger-Inner is making a horrible mistake, living out a life in the vale of tears that is earthly existence, as opposed to the paradise of an existence in the Hedonic Machine. And imagine the reciprocal horror at the thought that the Hedonic Machine might be outlawed for the supposed greater good of the would-be Plugger-Inners who might otherwise be tempted to “a kind of suicide.” (Much like the rationale used to outlaw many psychoactive drugs in contemporary life, by the way.) The issues reached by the Hedonic Machine are too important, too global, to be treated as mere matters of taste. If you think you believe in objective values but don’t think the domain of objective values reaches the choice of whether or not to plug into the Hedonic Machine, then you’re pretty much in Mackie’s camp already. Any scheme of objective values, in order not to be trivial, would have to reach the problem posed by the Hedonic Machine.

But now here’s a problem, an illustration of a general problem faced by anyone who thinks there are objective values. If there are objective values, then we must, at a minimum, be able to identify some defect of rationality in either the Plugger-Inner or the Non-Plugger-Inner, because if a non-trivial scheme of objective values somehow obtains, they can’t both be right.

But I have thought about this matter for rather a long time. I certainly don’t find any defect of rationality in myself. And interestingly, although I am out of sympathy with them, I don’t find one in the Non-Plugger-Inners either. And that fact may go a long way to showing why I find myself in Mackie’s camp.

2 thoughts on “A problem for moral realism”

  1. Tiny quibble: I imagine we could find some PETA members or ethical vegans who *do* think there’s an “objective to-be-eatenness or to-be-avoidedness property about raw oysters” that determines the right answer of whether to eat them or not. But it doesn’t undermine your argument — it just means you need a more ethically bland example to illustrate it.
