Thursday, November 4, 2010

Causal decision theory and determinism

The basic intuition underlying causal decision theory can, I think, be put like this. Imagine Fred, an almost omniscient being who knows everything except what you're going to choose and what is causally downstream from your choice, and who has only your interests in mind. What you ideally would like to do is to make the decision that Fred would be most pleased to hear that you made.

Let K be everything Fred knows, and suppose you're deliberating between options A and B. Then Fred expects you to get U(AK)=E[V|AK] from your choosing A and U(BK)=E[V|BK] from your choosing B, where V is a random variable expressing value-for-you and E[...|...] is conditional expectation.[note 1] If you knew for sure that U(AK)>U(BK), then you would be rational (in the self-interested sense; I will omit this qualification from now on) in choosing A. In many cases, you don't know for sure which of U(AK) and U(BK) is bigger, either because you don't know K or even its relevant parts[note 2], or because the math is too hard. But if Fred told you, you'd know, and then you could choose accordingly.
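In symbols, assuming for illustration a discrete space of maximally specific ways w the world might go (the discreteness is my simplifying assumption, just to make the definition concrete):

```latex
U(AK) \;=\; \mathbb{E}[V \mid AK] \;=\; \sum_{w \,\in\, AK} V(w)\,\frac{P(w)}{P(AK)}
```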

There is, however, one family of cases where you know which of U(AK) and U(BK) is bigger: cases where domination holds. Let K* range over the serious candidates for a maximally specific proposition describing everything except your choice and what is causally downstream from your choice (a "dependency hypothesis", in Lewis's terminology). Domination holds when either U(AK*)>U(BK*) for every such K*, or U(AK*)<U(BK*) for every such K*. Thus you should two-box in Newcomb cases, and you get the right answer in medical Newcomb cases.
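Here is a minimal sketch of the domination check in a Newcomb case; the payoffs are the usual illustrative numbers, not anything from the post:

```python
# K* ranges over the two dependency hypotheses: the predictor put $1M in the
# opaque box (predicted one-boxing), or left it empty (predicted two-boxing).
payoff = {
    # (choice, dependency hypothesis) -> U(choice & K*)
    ("one-box", "box full"):  1_000_000,
    ("two-box", "box full"):  1_001_000,
    ("one-box", "box empty"):         0,
    ("two-box", "box empty"):     1_000,
}

# Domination: for every serious candidate K*, U(two-box K*) > U(one-box K*),
# so you know which option Fred would prefer without knowing which K* is true.
dominates = all(
    payoff[("two-box", k)] > payoff[("one-box", k)]
    for k in ("box full", "box empty")
)
print(dominates)  # True: two-boxing dominates, whatever the predictor did
```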

Now, in practice, you don't know for sure which of U(AK) and U(BK) is bigger (in real life, they're almost certainly not equal). Moreover, the task is trickier than just finding the most likely ordering of these two unknown numbers. For instance, there are safety issues. If it's 51% likely that U(AK)=U(BK)+1 and 49% likely that U(AK)=U(BK)−1000, then even though by choosing A you are more likely than not to be sending Fred better news than by choosing B, it is nonetheless safer to choose B: in that case you are more likely than not sending Fred bad news, but only slightly bad news, and you avoid the risk of sending disastrously bad news.
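A quick sketch of the arithmetic behind the safety point, using exactly the numbers from the paragraph above:

```python
# A is more likely than not the better news, yet B is the better bet.
p_good, gain = 0.51, 1       # 51% chance U(AK) = U(BK) + 1
p_bad, loss = 0.49, -1000    # 49% chance U(AK) = U(BK) - 1000

expected_difference = p_good * gain + p_bad * loss  # E[U(AK) - U(BK)]
print(expected_difference)  # -489.49: B is safer despite A's 51% edge
```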

What we now have is a Kantian regulative ideal for decision theory:

  1. Try to choose A over B if and only if U(AK)>U(BK).
And more vaguely:
  2. Choose between A and B on the basis of your best estimates as to the difference U(AK)−U(BK).
We can look at causal and evidential decision theory as offering rival ways of choosing in light of these maxims when we are ignorant of K. Standard Lewis decision theory just averages U(AK*) over all relevant dependency hypotheses K*, weighting the average by the unconditional probability of K*. Evidential decision theory does much the same thing, except that the weights are the probabilities of the K* conditional on the act. In both cases, ratification may be added, which is another weighted-average comparison.
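Here is a minimal sketch of the two weighting schemes on the same Newcomb payoffs as in the earlier sketch; the 0.99 predictor reliability is my illustrative assumption:

```python
payoff = {
    ("one-box", "box full"): 1_000_000, ("two-box", "box full"): 1_001_000,
    ("one-box", "box empty"):        0, ("two-box", "box empty"):    1_000,
}

p_unconditional = {"box full": 0.5, "box empty": 0.5}   # P(K*)
p_given = {                                             # P(K* | choice)
    "one-box": {"box full": 0.99, "box empty": 0.01},
    "two-box": {"box full": 0.01, "box empty": 0.99},
}

def cdt_value(a):
    # Lewis: average U(AK*) weighted by the unconditional probability of K*
    return sum(p_unconditional[k] * payoff[(a, k)] for k in p_unconditional)

def edt_value(a):
    # EDT: the same average, but weighted by P(K* | A)
    return sum(p_given[a][k] * payoff[(a, k)] for k in p_given[a])

for a in ("one-box", "two-box"):
    print(a, cdt_value(a), edt_value(a))
# CDT favors two-boxing (501,000 > 500,000); EDT favors one-boxing
# (990,000 > 11,000): the conditional weights reward the "good news" act.
```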

Medical Newcomb cases show that evidential decision theory's method doesn't always work. Egan cases show that causal decision theory's method doesn't always work. But the cases leave intact the basic intuitions (1) and (2).

Now we are in a position to offer an argument and a suggestion. The argument is this. If determinism holds, then the true dependency hypothesis K is incompatible (barring miracles; there is some technical work to be done on that point) either with A or with B. But if K is not compatible with, say, A, then U(AK) is not defined, since it is a conditional expectation on the impossible condition AK. So if determinism holds, at least one of the values U(AK) and U(BK) is undefined, and therefore the maxims (1) and (2) don't make sense. If determinism holds, Fred knows, just by knowing everything causally upstream of your choice, what choice you will make, and the question of which choice would be better news to him no longer makes sense. If I am right about the basic intuitions of causal decision theory (and these may remain right even if evidential decision theory is on the right track, since, as we've seen, evidential decision theory can also be viewed as giving us estimates of U(AK) and U(BK)), then in a deterministic world deliberation amounts to estimating two unknown values, one of which is undefined. Consequently, an agent who is sure that determinism holds cannot consistently deliberate, as the regulative ideals make no sense. Moreover, as long as she takes the possibility of determinism seriously, there is a problem: she must take seriously the possibility that the true dependency hypothesis renders the choice unintelligible, and there does not appear to be any way to deliberate while taking that possibility seriously. Therefore, decision theory—or at least causal decision theory—requires agents who disbelieve in determinism.
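To make the "undefined" step explicit, recall the standard ratio form of conditional expectation (a textbook definition, nothing special to decision theory):

```latex
\mathbb{E}[V \mid AK] \;=\; \frac{\mathbb{E}[V \cdot \mathbf{1}_{AK}]}{P(AK)},
\qquad \text{undefined when } P(AK) = 0 .
```

If K is deterministic and entails that A is not chosen, then AK is impossible, P(AK)=0, and U(AK) simply has no value.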

This does not show that determinism is incompatible with making rational choices. But it does show that determinism is incompatible with making informed rational choices, and it does show that if determinism holds, only those who are wrong about the basic structure of reality can rationally choose. If one adds the anti-skeptical premise that it is possible for us to choose rationally without being wrong about the basic structure of reality, we conclude that determinism is false.

Now, let me say a little bit more about Egan cases and causal decision theory. Causal decision theory, for instance in Lewis's formulation, urges us to compute the unconditional expectations E[U(Ak)] and E[U(Bk)], where k is a random variable ranging over all relevant dependency hypotheses, and the probability space for this expectation is epistemic. (U(Ak) is defined as E[V|Ak], where this expectation is computed via the objective probabilities implied by k.) It should not be surprising that merely averaging the possible values in this simple way isn't always going to generate the best decision. There is, I think, some plausibility to thinking it will generate the best decision if ratifiability holds. So the best we now have may be an incomplete causal decision theory: A is the rational option if (but not necessarily only if) E[U(Ak)]>E[U(Bk)] and E[U(Ak)|A]>E[U(Bk)|A]. In cases where there is no option that satisfies both conditions, like Egan cases, (1) and (2) still apply, and we can muddle through trying to make a decision in light of these vague maxims, and we will, I think, get the right answer. But we don't have a precise procedure for those cases, as the sketch below illustrates.
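Here is a sketch of the incomplete two-condition test on an Egan-style case (Egan's "psycho button": pressing kills all psychopaths, but only a psychopath would press); all the numbers are my illustrative assumptions:

```python
payoff = {
    ("press", "psycho"): -100,   # you are a psychopath and die
    ("press", "normal"):   10,   # psychopaths eliminated, you survive
    ("no-press", "psycho"):  0,
    ("no-press", "normal"):  0,
}
p_unconditional = {"psycho": 0.01, "normal": 0.99}          # P(k)
p_given = {                                                  # P(k | choice)
    "press":    {"psycho": 0.90, "normal": 0.10},
    "no-press": {"psycho": 0.01, "normal": 0.99},
}

def value(a, weights):
    return sum(weights[k] * payoff[(a, k)] for k in weights)

for a, b in (("press", "no-press"), ("no-press", "press")):
    unconditional_ok = value(a, p_unconditional) > value(b, p_unconditional)
    ratifiable = value(a, p_given[a]) > value(b, p_given[a])
    print(a, unconditional_ok, ratifiable)
# press:    True  False  (beats no-press unconditionally, not ratifiable)
# no-press: False False  (neither condition holds)
# No option satisfies both conditions, so the incomplete theory is silent.
```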
