2015-09-15

Utilitarianism And The Race To The Bottom

One way to troll the internet is to take the favorite moral framework of every intelligent and decent human being I know, and tear it apart. But the truth is that I myself have Utilitarian leanings, so this is as much a self-criticism as it is a criticism of others. That's what happens at Stationary Waves: Illusions are punctured and deflated, mercilessly and repeatedly, until only the truth remains.

How Much Do Warm-Fuzzies Matter?

The Washington Post published an article entitled "Traditional Charity Fosters Love. Effective Altruism Doesn't." Resorting to god-talk, the author writes in favor of the old kind of charity:
To Jews and Christians, doing good through works of mercy was how one became good, and thus worthy to stand in God’s presence. For those inspired by this theological vision, there was obviously nothing wasteful at all about such works, no matter their impact. The result of this new vision was the utter transformation of ancient society. The formerly marginalized became visible, even uniquely blessed actors in a great spiritual drama....
This kind of charity may not change the world in the most “logical” way, but it nevertheless has an important effect: It protects, preserves and grows local economies of love. Effective altruism leaves such economies wholly unaccounted for. And when followed to its logical conclusion, it is their enemy.
You already know I'm not here to defend religion, and I'm certainly not going to suggest that creating a loving, utopian community based on JuChrIslamism is a good idea. That's not the take-away from an article like this.

Instead, the take-away is that the personal perspective of the benefactor matters in a charitable transaction. This should be obvious enough, but the reason I became aware of this article in the first place is that an Effective Altruist on my Facebook feed criticized it because he felt the author was positing that "warm-fuzzies are more important than actually helping people."

At first, it seems like a fair criticism. Why be overly concerned about your internal constructs of a good society when there are people out there who need help?

But there are clearly limits to this. What if, for example, I decided that I despised my children, and so I cashed out their college funds and put all that money into an Effective Altruism campaign just to spite them? Clearly, no one in their right mind would say that my children's "warm-fuzzies" weren't an important consideration in that case, and I'm sure almost everyone would agree that doing anything to spite someone, regardless of how many other people benefit, is a terrible source of motivation.

But Effective Altruism itself has no particular response here. As long as sufficiently many poor people benefit, my spiteful motivations and the misery of others are completely beside the point. This is not a conclusion that I think most Effective Altruists would be happy with. Thus, we are forced to acknowledge that the psychology of the giver - i.e., "warm-fuzzies" - does, in fact, matter on some level. The only question is to what extent it matters.

Spoiler alert: There is no right answer to that question. It's entirely subjective. Some of us will be happy donating to the local homeless vet, while others of us will be happier donating to something properly sanctioned as "Effective Altruism." It would be wrong to suggest that one kind of "warm-fuzzy" is more correct than the other, but unfortunately, that is precisely the claim advocates of Effective Altruism make.

Ayn Rand: Only Half Right

Ayn Rand, of all people, correctly identified that Utilitarianism was morally bankrupt. She wrote:
“The greatest good for the greatest number” is one of the most vicious slogans ever foisted on humanity. 
This slogan has no concrete, specific meaning. There is no way to interpret it benevolently, but a great many ways in which it can be used to justify the most vicious actions.
What is the definition of “the good” in this slogan? None, except: whatever is good for the greatest number. Who, in any particular issue, decides what is good for the greatest number? Why, the greatest number.
(Here I would add: Who gets to arrive at the final count? Including or excluding certain individuals from the accounting is the easiest way to manipulate Utilitarianism's moral conclusions.) She continues:
If you consider this moral, you would have to approve of the following examples, which are exact applications of this slogan in practice: fifty-one percent of humanity enslaving the other forty-nine; nine hungry cannibals eating the tenth one; a lynching mob murdering a man whom they consider dangerous to the community.
Unfortunately, Rand's taste for polemics left this final paragraph with far less impact than it should have had. (To up the moral ante, she resorted to an invocation of Godwin's Law. A little more attention to detail, and she would have hit a home run.) Note that each of her absurd counterexamples relies on the set of questions outlined before it. Utilitarianism's moral failure is that the ends are determined by the horde, even if in error, and the makeup of the horde is determined by the demographics of the moment.

Begging The Question Or Brainwashing Yourself - Part II

So Rand's critique is only half-right. Her problem with Utilitarianism is that it leaves moral decision-making up to pressure groups. The real problem with Utilitarianism is that, no matter how "Effective" or "rational" we try to make it, its rationalizations can always over-compensate for flawed decision-making. It's easy to talk yourself into something when your ethical reputation is on the line; there's even a name for it: Motivated Reasoning.

What makes Utilitarianism a uniquely egregious scourge is that it is (in its current incarnation) put forth as a remedy to Motivated Reasoning, when it is in fact an example of it.

To see this, just consider the infinitely many ways to save a human life. You could pull someone out of a burning building, or you could give them a successful cycle of chemotherapy. You could give a child in the Malaria zone a mosquito net, or you could buy a war refugee a plane ticket to an immigration-friendly nation. You could clean up a city's water supply, or teach that same city how to properly dispose of fecal matter. You could throw yourself in front of a speeding bullet, or you could talk someone down off a ledge, or you could provide an addict with clean needles.

Those are the obvious ways to save lives. Now what about the less-obvious ways? You could abstain from driving in order to lessen the odds of a fatal traffic incident. You could become more diligent about sanitizing your hands and workspaces to prevent a potentially fatal infection from afflicting an immuno-compromised person. You could stop using anti-bacterial soaps to prevent super-bugs from evolving.

The funny thing is that these less obvious things, if widely promoted, would probably save far more lives than Bill Gates' latest initiative in Africa. But most of us won't accept the concept of hand-sanitization-as-an-act-of-altruism, and in fact most people have a hard time with something far simpler, like hand-sanitization-as-a-moral-imperative. The only apparent reason for our refusal to accept these things is that using hand sanitizer doesn't feel altruistic.

Once again, "warm-fuzzies" matter after all. But wasn't Utilitarianism supposed to save us from that?

That's the first sneaky layer of psychological dishonesty in Utilitarianism, the part where we choose the most Utilitarian policy from a list of things we are already prepared to call altruism. Mosquito nets - yes; defensive driving - no. If that were the only sneaky layer of psychological dishonesty in Utilitarianism, then it would be a pretty easy fix.

But it's not. There are many more, and they are all a little sneakier. For example, some Utilitarians will concede that defensive driving is, indeed, a moral imperative, but that this is beside the point; after all, no one chooses between funding a mosquito net and driving defensively. We can do both.

Except, we can't. If you were to invest the right amount of time minimizing your epidemiological impact on the rest of society, you wouldn't have any time left over for figuring out where to spend your marginal charity dollar. Thoughts, and analysis, and cogitation take time. It's easy for the ethicist to sit back and say, "Well, yes, drive defensively, minimize the amount of time spent driving, and only give to Effectively Altruistic causes." But saying that is about as useful as saying, "Be a millionaire and an amazing lover." We all agree that it would be nice to do it. Utilitarianism is about doing what actually yields the highest utility of all possible courses of action, not just what, in theory, would yield the highest utility.

Real Utilitarianism dictates that if $500,000 makes me, personally, far happier than extending a year's salary to 20 or 30 sourpusses in the Third World would make them, then we must give the money to me, not to the traditional "needy." The experienced among you will note that this is just a version of the Utility Monster problem - except my version doesn't require that I be a monster and doesn't hinge on inflating moral premises to absurd extremes in order to prove a point. There are probably many people in America whom a $500,000 lump-sum payment would make happier than a year of income would make dozens of poor Bangladeshis. (And vice-versa, for that matter, but let's ignore that for the time being.) These people aren't monsters or conceptual arguments, and their existence should pose a real problem for Utilitarian altruists.

Instead, the Utilitarian's Motivated Reasoning deftly dodges the issue by stubbornly proclaiming - without any kind of theory, data, or argument to back it up - that there is no possible way $500,000 could make anyone in America happier than a year's salary could make a poor person happy. The best argument for this notion is the concept of the diminishing marginal utility of money; but that has some problems that I suppose I ought to highlight in the next section.
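To spell out the argument I'm referring to, here's a minimal sketch of how it usually runs, assuming - purely for illustration - logarithmic utility of income, a hypothetical American earning $50,000 a year, and thirty hypothetical recipients earning $1,500 a year. None of these figures come from any actual study; they only show the mechanics of the claim.

```python
# A minimal sketch of the diminishing-marginal-utility argument. The log-utility
# assumption and all dollar figures are made-up placeholders, not data.
import math

def log_utility_gain(income, transfer):
    """Extra utility from adding `transfer` dollars to `income`, under u = ln(income)."""
    return math.log(income + transfer) - math.log(income)

# One American receives a $500,000 lump sum.
american_gain = log_utility_gain(50_000, 500_000)

# Thirty poor recipients each receive a year's salary ($1,500).
poor_gain = 30 * log_utility_gain(1_500, 1_500)

print(f"American's utility gain:        {american_gain:.2f}")  # ~2.40
print(f"Thirty recipients' total gain:  {poor_gain:.2f}")      # ~20.79
```

Under those assumptions, the transfer to the poor wins by an order of magnitude, which is the Utilitarian's point. My point is that the smooth logarithmic curve doing all the work in that calculation is asserted rather than demonstrated, which brings me to the next section.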

Personal Utility Is Not A Continuous Function

At best, it's piecewise-continuous. Here's what I mean: Suppose the only luxury good you're interested in buying is a $7 million luxury home, and the only luxury good I'm interested in buying is a $300 pair of shoes. Suppose we both play a Utilitarian Lottery that promises to pay a grand prize of $1,000.

For a $1,000 grand prize, the only person who stands to experience a significant increase in utility is me: I get my $300 pair of shoes, plus $700 left over to spend more or less mindlessly, because I'm not interested in anything else. If you win, you don't get your luxury home, so you mindlessly spend the full $1,000. Sure, we both get a small increase in utility from some mindless consumption, but at $1,000, I'm the only one who gets to satisfy my wildest dream.

Now flip it around. Let's suppose that the grand prize is a $7 million luxury home. If you win, you get your dream home. If I win, I get something fancy that never mattered to me until I entered some dumb lottery. 
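If it helps, here's a minimal sketch of the staircase-shaped utility I have in mind; the size of the "dream" jump and the per-dollar value of mindless leftover spending are numbers I made up purely for illustration.

```python
# A minimal sketch of the "staircase" utility in the lottery example. DREAM_BONUS
# and MINDLESS_RATE are made-up illustrative figures; only the shape matters.

DREAM_BONUS = 100.0      # utility jump from getting the one thing you actually want
MINDLESS_RATE = 0.00001  # tiny utility per dollar of leftover, mindless spending

def utility(prize, dream_price):
    """Utility of a cash prize for someone whose only desired luxury costs dream_price."""
    if prize >= dream_price:
        # The dream is satisfied; whatever is left over gets spent mindlessly.
        return DREAM_BONUS + MINDLESS_RATE * (prize - dream_price)
    # The dream is out of reach; the entire prize gets spent mindlessly.
    return MINDLESS_RATE * prize

for prize in (1_000, 7_000_000):
    shoe_lover = utility(prize, 300)        # me: the $300 pair of shoes
    home_lover = utility(prize, 7_000_000)  # you: the $7 million home
    print(f"${prize:>9,} prize -> shoe-lover: {shoe_lover:.2f}, home-lover: {home_lover:.2f}")
```

Each person's utility sits essentially flat until the prize crosses the price of the one thing he actually wants, and then it jumps. There is nothing smooth or continuous about that shape, which is exactly what the diminishing-marginal-utility story glosses over.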

Dedicated Utilitarians will say, "But Ryan, you could always sell the $7 million luxury home, buy the shoes, and give the remainder to me, which gets me almost to achieving my dream home." True, but I don't know anyone who would actually do this, do you? "But they should do this!" Why? Because we should always be stuck to the set of priorities we have at the outset of thought experiments? Because we should never allow a large, unexpected windfall to change the way we achieve our own happiness?

I guess you forgot to update your prior.

Meanwhile, Scott Sumner - having never read the above argument, or having found it totally unpersuasive - insists that increasing consumption taxes on wealthy people is a great idea, because
Yes, poverty in the US is a modest problem (especially compared to other countries, and other periods of history) but it is still a problem. In contrast, forcing Larry Ellison to downshift from a 500-foot yacht to a 400-foot yacht is an utterly trivial problem. If we can solve a small problem by creating another utterly trivial problem—then do it!
The fatal flaw in all this is that Larry Ellison's utility may decrease more than expected when he is forced to live in a world in which he can't even choose to have a 500-foot yacht, compared to a world in which he voluntarily "downshifts" for altruistic reasons (or doesn't).

Therefore, once again, psychology matters. Giving a homeless person $100 is a nice thing to do; forcing someone else to give a homeless person $100 is, well, weird. If you want to do a good thing, then just go ahead and do it. Why do you have to chop off the port end of some billionaire's yacht in order to feel like an altruist? Seems odd, no?

(Here's an interesting sidebar: According to at least one website, the price differential between 400- and 500-foot yachts is on the order of 33%. Do you know anyone, no matter how rich, who would consider being forced out of a $15-180 million investment "utterly trivial"? Sometimes just spot-checking an economist's wild assumptions puts some important perspective on what he's actually talking about.)
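(And here's the arithmetic behind that range, for anyone who wants to check it. The 33% differential and the $15-180 million figure come from the paragraph above; the "implied" 400-foot prices are my own reconstruction, not numbers quoted from that website.)

```python
# Back-of-the-envelope check on the yacht sidebar. The 33% differential and the
# $15-180 million range are from the text above; the implied 400-foot prices are
# my own arithmetic, not figures quoted from any yacht-pricing website.

DIFFERENTIAL = 0.33  # 400- vs. 500-foot price gap, as a share of the smaller yacht's price

for gap_in_millions in (15, 180):
    implied_400ft_price = gap_in_millions / DIFFERENTIAL
    print(f"A ${gap_in_millions}M differential at 33% implies a 400-foot yacht "
          f"priced around ${implied_400ft_price:.0f}M")
# Roughly $45M and $545M, respectively -- hardly sums anyone calls "utterly trivial."
```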

But here's the coup de grace: Venturing an opinion about Larry Ellison's yacht purchases costs Scott Sumner zip, zilch, zero, nada, and many more values equivalent to nothing. Scott Sumner wrote a blog post articulating which issues are important to him, and among them is the moral imperative of forcing rich people to make different choices.

"And only in America do we want the system to force us to do the right thing so we can take the credit. #behavioraleconomics"

Of course the context of that quote was a discussion of narcissism, of fetishizing the image at the expense of the object itself. Wouldn't it be totally weird if Effective Altruists were only really motivated by the trappings of charity, rather than by a genuine interest in helping others?

That Guy, From University

When Jeremy Beer, the guy who wrote that Washington Post article, wants to engage in charity, he wants to help Pete. Pete is a real, living, breathing person with whom Mr. Beer has real-world, eye-contact conversations. It's easy to say that Beer is helping Pete because Beer knows Pete, talks to him, gives him money directly, and ultimately sees where the money goes. He also offers Pete additional, non-monetary help.

How many Effective Altruists know the names of the people who benefit from their altruism? How many Effective Altruists know the name of one of the beneficiaries? If Effective Altruism is really about helping people, then shouldn't the Altruists know a thing or two about the people who receive their funds? A first name seems like a reasonable thing to know. How about the exact number of people (not the average, per-dollar number of people) who benefited from the contribution, and in exactly what way they benefited? This all seems reasonable enough. 

You can argue that the names don't matter as long as the people get help. You can argue that there are other websites out there dedicated to keeping charities honest and tracking the benefits of charitable contributions. You can argue all these things, but think about what it means if you do.

It means the particular individuals you help don't matter as much to you as the pure number of people you help. You're not delivering the help, you're just a donor. You're not keeping track of the charity's effectiveness, you're just reading the website. You don't want to actually do any real work here, you just want to ensure that your money is helping the most people, as measured by quantity. Names, faces, details... who cares? The important thing is that your money was spent in the way you deem most efficient.

Does that sound like charity to you? Because it sounds like signalling to me. It sounds like you don't actually care about the people you're helping; you only really care about help, in the abstract. It doesn't sound like an evaluation of whose lives you wish to improve and how you wish to improve them; instead, it sounds like you're only really interested in the bottom-line number of lives you improved. It's everyone else's job to worry about the details; you just want to find out which hole to put your wad in.

It sounds like a group of people who have come up with a rationalization scheme that maximizes the signalling value of their charity... er, kind of. In fact, all it does is maximize the signalling value of charity in the eyes of people who think stuff like utilitarian calculus and economic theory is cool, e.g., the Less Wrong crowd, and a few other weirdos like yours truly.

So, back to the top. I'm pretty sure we agreed that giving to charity to spite somebody was not a particularly moral thing to do. Is giving to charity in order to impress your Bayesian Rationalist friends any better?

Isn't it a race to the bottom to cook up clever rationales in service of your donation strategies in hopes of being That Guy, From University, Who Is So Clever With His Economic Theory That He Even Buys Mosquito Nets For Poor People Instead Of Donating Food To The Local Shelter?

I mean, I thought we were donating for the maximum utility of others, not just to look cool. Or was I wrong about what Utilitarianism was supposed to be about? 
