Tuesday, May 18, 2010

Is there a cost to two-boxing in Newcomb cases?

The following may be well-known to folks in the field, or it may be well-known to them to be mistaken.

It seems to be acknowledged by both sides that it is a significant cost of being a causal decision theorist that one has to two-box in the Newcomb Paradox, and hence that one loses out in Newcomb cases.

I suspect the causal decision theorist should not grant that this is a cost of the theory, or at least not a significant one. First, distinguish Newcomb cases where there is weak counterfactual dependence between what one chooses and what the predictor predicted from ones where there is no such dependence. (I say that B weakly counterfactually depends on A provided that B might not have happened had A not happened.)

Case A: Weak counterfactual dependence. David Lewis thought that wherever there was counterfactual dependence between non-overlapping events, there was causal dependence. I think he was wrong. E.g., that God promised A counterfactually depends on A, since the non-occurrence of A entails the non-occurrence of the divine promise, at least given prior linguistic conventions. Nonetheless, I think that counterfactual dependence, and even weak counterfactual dependence, is generally a pretty good indicator of at least explanatory dependence.

Perhaps something like this principle is true: If (a) B weakly counterfactually depends on A, and (b) there is no event C such that (b1) C is not weakly counterfactually dependent on A and (b2) C together with B entails A, then B is explanatorily dependent on A. (Condition (b) rules out the divine promise case: let C be the linguistic conventions that God's act of promising depends on.) Maybe some additional conditions are needed. Nonetheless, I am optimistic that in the Newcomb case, it follows from the existence of a weak counterfactual dependence between my choice and the prediction that my choice is explanatorily prior to the prediction. (Condition (b) is satisfied, at least assuming one's choice is indeterministic. I don't know what to do about deterministic choices; I don't know if the notion even makes sense. Nuel Belnap once said that to make a choice, you have to have choices.) But since the causal decision theorist should be willing to generalize causal dependence to explanatory dependence, the causal theorist in this case should say to one-box.
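For concreteness, the principle can tentatively be put in Lewis-style counterfactual notation, writing A ◇→ B for the might-counterfactual "if A were the case, B might be the case", so that weak counterfactual dependence of B on A is ¬A ◇→ ¬B:

\[
\big[(\neg A \mathbin{\Diamond\!\!\rightarrow} \neg B) \;\wedge\; \neg\exists C\,\big(\neg(\neg A \mathbin{\Diamond\!\!\rightarrow} \neg C) \wedge ((C \wedge B) \vDash A)\big)\big] \;\rightarrow\; \mathrm{ExpDep}(B, A),
\]

where ExpDep(B, A) says that B is explanatorily dependent on A. This is only a sketch of the principle, not a final formulation.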

Case B: No weak counterfactual dependence. In this case, the following counterfactual is true of Tamara the two-boxer: she would have got less had she chosen only one box. For had she chosen only one box, the predictor would still have made the same prediction that was in fact made, and Tamara would have done poorly.

But wouldn't the two-boxer have got a lot more had she been a one-boxer? Yes. Here we need to distinguish two antecedents of counterfactuals:

  1. Tamara has a disposition to one-box in Newcomb cases like this one.
  2. Tamara one-boxed in this Newcomb case.
And so:
  3. Had (1) been true, Tamara would have got a lot more than she got.
  4. Had (2) been true, Tamara would have got a bit less than she got.
This means that we need to distinguish two different questions of rationality: should Tamara have a disposition to one-box, and should Tamara one-box in this case?
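To make (3) and (4) concrete, here is a minimal sketch of the payoff arithmetic in Python, assuming the standard $1,000/$1,000,000 Newcomb amounts (the exact numbers don't matter, only the comparisons) and assuming the actual prediction was that Tamara would two-box:

    TRANSPARENT = 1_000
    OPAQUE_FULL = 1_000_000

    def payoff(choice, prediction):
        """Payout given the agent's choice and the predictor's prediction."""
        opaque = OPAQUE_FULL if prediction == "one-box" else 0
        return opaque if choice == "one-box" else opaque + TRANSPARENT

    # Actual world: Tamara two-boxes, and the accurate predictor foresaw it.
    actual = payoff("two-box", "two-box")              # 1,000

    # Counterfactual (4): antecedent (2) holds, i.e. Tamara one-boxes in this
    # very case, but absent counterfactual dependence the prediction is unchanged.
    one_boxed_here = payoff("one-box", "two-box")      # 0: a bit less

    # Counterfactual (3): antecedent (1) holds, i.e. Tamara is a one-boxer by
    # disposition, so the predictor would have predicted one-boxing.
    disposed_one_boxer = payoff("one-box", "one-box")  # 1,000,000: a lot more

    print(actual, one_boxed_here, disposed_one_boxer)

On these numbers, one-boxing in this very case would have cost her $1,000, while a one-boxing disposition would have gained her $999,000.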

Perhaps the causal theorist could say: Tamara should have a disposition to one-box, but should two-box in this case. There is nothing absurd about that sort of answer, and according to the SEP it's a standard move here. There are, after all, standard cases where it is (narrowly self-interestedly) rational to be irrational, such as ones where you will be killed if you are thought to be a rational witness to a crime.

But there may be a better answer. Fact (3) does not entail that Tamara should have a disposition to one-box. All that (3) tells us is that if the only situations Tamara faces are Newcomb ones, then she would do better to have a disposition to one-box. But there are other possible situations where someone with a disposition to two-box would do better: for instance, cases where an evil two-boxing philosopher becomes a dictator and kills everyone who lacks a disposition to two-box, or cases where people are mistaken in thinking the predictor accurate. So in Newcomb situations, Tamara would do better were she by disposition a one-boxer, but in crazy two-boxing philosopher situations, she would do better were she by disposition a two-boxer. In other words, both of the competing theories have the consequence that in some worlds it is better to self-induce a disposition to abide by the opposite theory. Hence there seems to be little or no special cost here to being a causal decision theorist.
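As a toy illustration of that last point, with made-up numbers (only the comparative structure matters), one can compute the expected payoff of each disposition in worlds that mix the two kinds of situation:

    # A toy model: the agent faces a Newcomb case with probability p and an
    # evil two-boxing dictator case with probability 1 - p. In the dictator
    # case, lacking a two-boxing disposition is fatal, modeled here as a
    # large negative payoff.

    NEWCOMB = {"one-box": 1_000_000, "two-box": 1_000}
    DICTATOR = {"one-box": -10_000_000, "two-box": 0}

    def expected_payoff(p_newcomb, disposition):
        """Expected payoff of acting on a fixed disposition across both case types."""
        return (p_newcomb * NEWCOMB[disposition]
                + (1 - p_newcomb) * DICTATOR[disposition])

    # A world of Newcomb cases favors the one-boxing disposition...
    print(expected_payoff(1.0, "one-box"), expected_payoff(1.0, "two-box"))
    # ...while a world of dictator cases favors the two-boxing disposition.
    print(expected_payoff(0.0, "one-box"), expected_payoff(0.0, "two-box"))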

1 comment:

Alexander R Pruss said...

Amusingly, I just deleted some spam about a boxing match.