Wednesday, February 18, 2015

A fallacy of probabilistic reasoning with an application to sceptical theism

Consider this line of reasoning:

  1. Given my evidence, I should do A rather than B.
  2. So, given my evidence, it is likely that A will be better than B.
This line of reasoning is simply fallacious. In many contexts where deontological-like concerns are not relevant, decisions are appropriately made on the basis of expected utilities. But the following inference is fallacious:
  3. The expected utility of A is higher than that of B.
  4. So, probably, A has higher utility than B.
In fact it may not even be possible to make sense of (4). For instance, suppose I am choosing between playing one of two indeterministic games that won't be played without me. I must play exactly one of the two. Game A pays a million dollars if I win, and the chance of winning is 1/1000. Game B pays a hundred dollars, and the chance of winning is still 1/1000. Obviously, I should play game A, since the expected utility is much higher. But unless something like Molinism is true, if I choose A, there is no fact of the matter as to how B would have gone, and if I choose B, there is no fact of the matter as to how A would have gone. So there is no fact of the matter as to whether A or B would have had the higher utility.
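For concreteness, here is the expected-utility comparison in this example as a minimal Python sketch (the code is only illustrative; the numbers are what matter):

```python
# Expected payoffs, in dollars, of the two indeterministic games described above.
p_win = 1 / 1000           # chance of winning either game

eu_A = 1_000_000 * p_win   # game A: $1,000,000 prize -> expected $1000
eu_B = 100 * p_win         # game B: $100 prize       -> expected $0.10

print(eu_A, eu_B)          # 1000.0  0.1
```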

But even when there is a fact of the matter, the inference from (3) to (4) is fallacious, as simple cases show. Suppose that a die has been rolled but I haven't seen the result. I can choose to play game A, which pays $1000 if the die shows 1 and nothing otherwise, or I can take option B, which is just to get a dollar no matter what. Then the expected utility of A is about $167 (think $1000/6) and the expected utility of B is exactly $1. However, there is a 5/6 chance that B has the higher utility.
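Here is the same arithmetic as a minimal Python sketch, making explicit that the option with the higher expected utility is nonetheless probably the worse one:

```python
from fractions import Fraction

p_one = Fraction(1, 6)    # probability the die shows 1

eu_A = 1000 * p_one       # game A: $1000 if the die shows 1, else nothing -> 1000/6, about $167
eu_B = Fraction(1)        # option B: $1 no matter what

p_B_better = 1 - p_one    # B pays more whenever the die does not show 1 -> 5/6

print(float(eu_A), float(eu_B), float(p_B_better))   # roughly 166.67, 1.0, and 0.83
```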

The lesson here is that our decisions are made on the basis of expected utilities rather than on the basis of which option is more likely to turn out better.

Now the application. One objection to some resolutions to the problem of evil, notably sceptical theism, is this line of thought:

  5. We are obligated to prevent evil E.
  6. So, probably, evil E is not outweighed by goods.
But this is just a version of the expectation-probability fallacy above. Bracketing deontological concerns, what is relevant to evaluating claim (5) is not so much the probability that evil E is or is not outweighed by goods as the expected utility of E or, more precisely, the expected utilities of preventing and of not preventing E, respectively. On the other hand, what is relevant to (6) is precisely the probability that E is outweighed.

One might worry that the case of responses to the problem of evil isn't going to look anything like the cases that provide counterexamples to the expectation-probability fallacy. In other words, even though the inference from expectation to probability is fallacious in most cases, perhaps it isn't fallacious in the move from (5) to (6). But it's possible to provide a counterexample to the inference that is quite close to the sceptical theism case.

At this point the post turns a little more technical, and I won't be offended if you stop reading. Imagine that a quarter has been tossed a thousand times, and so has a dime. There is now a game. You choose which coin counts, the quarter or the dime, and then, sequentially over the next thousand days, you get a dollar for each heads toss and pay a dollar for each tails toss. Moreover, it is revealed to you that the first time the quarter was tossed it landed heads, while the first time the dime was tossed it landed tails.

It is clear that you should choose to base the game on the tosses of the quarter. For the expected utility of the first toss in this game is $1 and the expected utility of each subsequent toss is $0, for a total expected utility of one dollar, whereas the expected utility of the first toss in the dime-based game is -$1 and the subsequent tosses have zero expected utility, so the total expected utility is negative one dollar.
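Spelled out as a minimal Python sketch, the bookkeeping is:

```python
# Expected utilities in dollars, given the revealed first tosses.
# Each of the remaining 999 fair tosses is worth +$1 or -$1 with equal
# probability, so it contributes $0 in expectation.
eu_quarter_game = +1 + 999 * 0   # first quarter toss known to be heads -> +$1
eu_dime_game    = -1 + 999 * 0   # first dime toss known to be tails    -> -$1

print(eu_quarter_game, eu_dime_game)   # 1  -1
```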

On the other hand, the probability that the quarter game is better than the dime game is insignificantly higher than 1/2. (We could use the binomial distribution to say just how much higher than 1/2 it is.) The reason for that is that the 999 subsequent tosses are very likely to swamp the result from the first toss.
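For instance, a minimal Python sketch along these lines (using scipy's binomial distribution) puts a number on it:

```python
import numpy as np
from scipy.stats import binom

N = 999                          # tosses of each coin after the revealed first toss
k = np.arange(N + 1)
pmf = binom.pmf(k, N, 0.5)       # distribution of the number of heads in those tosses

# Quarter game total = +1 + (2*hq - N); dime game total = -1 + (2*hd - N),
# where hq and hd are the heads counts of the remaining quarter and dime tosses.
# The quarter game strictly beats the dime game exactly when hq >= hd.
p_equal = np.sum(pmf ** 2)               # P(hq == hd), about 0.018
p_quarter_better = 0.5 + 0.5 * p_equal   # by symmetry of hq - hd around 0

print(round(p_quarter_better, 4))        # about 0.5089, only barely above 1/2
```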

Suppose now that you observe Godot choosing to play the dime game. Do you have significant evidence against the hypothesis that Godot is an omniscient self-interested agent? No. For if Godot is an omniscient self-interested agent, he will know how all 1000 tosses of each coin went, and there is a probability only insignificantly short of 1/2 that they went in such a way that the dime game pays better.

1 comment:

Angra Mainyu said...

Hi Alex,

With regard to the Godot case, I'd like to consider an alternative:
Let's say that Godot makes the choice first, and he chooses to play the dime game (DG). Let H1 be the hypothesis that Godot is an omniscient self-interested agent that chose DG.
Let's say that Bob reckons Pr(H1) = 1.
In this case, it's clear that Bob should play DG (assuming always that Bob is self-interested in this context, and assuming he does not change the probabilistic assessment of H1).
So, it seems that as long as Bob holds that he should play the quarter game (QG), he shouldn't assign probability 1 to H1.

Let's say now that Bob only holds that Pr(H1) > 0.5, but doesn't give it a specific value. What game should Bob play?
From the opposite direction, let's say Bob holds that he should play the QG. What's the range of Pr(H1) rationally compatible with that?

That aside, I'm not sure what "expected utility" means in your interpretation of (5).

In (3), going by your assessments, "utility" seems to mean "value" (or at least, be equivalent to it), in the monetary sense.
So, the expected utility of a game is the probability of winning times the money one gets if one wins. In your second example, EU(A) = $1000*1/6, or about $167 as you say, and EU(B) = $1*1 = $1.

However, in (5), the same interpretation does not work. I guess it's about moral value, so it would be something like expected moral value, or EMU. But I haven't been able to find a plausible definition of EMU that works in (5). That's not because of the difficulty of assigning specific numbers to moral value. Even if we could (say, MV(E) = -8331; E is an evil, so its moral value is negative), I would have trouble understanding (5) in terms of the EMU of preventing vs. not preventing E.
What would we be multiplying here, even hypothetically?