Thursday, June 16, 2016

Measure of confirmation

Suppose I played a lottery that involved my picking ten digits. The digits I picked are: 7509994361. At this point, my probability that I won, assuming I had to get every digit right to win, is 10^-10. Suppose now that I am listening to the radio and the winning number's digits are announced: 7, 5, 0, 9, 9, 9, 4, 3, 6 and 1. My sequence of probabilities that I am the winner after hearing the successive digits will be approximately: 10^-9, 10^-8, 10^-7, 10^-6, 10^-5, 10^-4, 0.001, 0.01, 0.1 and, finally, approximately 1.
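Where these numbers come from is just Bayes' theorem: each announced digit that matches multiplies my probability of winning by ten. Here is a minimal sketch in Python, assuming a uniformly random winning number and a perfectly reliable radio:

```python
# Posterior probability that my pick 7509994361 wins, updated after
# each announced digit matches. Each match multiplies the probability
# of winning by 10.
prior = 10 ** -10  # probability of having picked all ten digits right

for k in range(1, 11):
    likelihood = 1          # P(first k announced digits match | I won)
    p_evidence = 10 ** -k   # P(first k announced digits match my pick)
    posterior = likelihood * prior / p_evidence
    print(f"after digit {k}: {posterior:.0e}")
```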

If we think that evidence provides significant confirmation of a hypothesis only when it gives a significant absolute increment to the hypothesis's probability, then only the final two or three digits gave significant confirmation to the hypothesis that I won: the last digit increased my probability by 0.9, the one before by 0.09, and the one before that by 0.009. But it seems quite wrong to me to say that, as I listen to the digits being announced one by one, I get no significant evidence until the last two or three digits. In fact, I think each digit I hear gives me an equal amount of evidence for the hypothesis that I won.
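We can tabulate the absolute increments the view cares about; a quick sketch under the same assumptions as above:

```python
# Absolute probability increment contributed by each successive digit:
# nearly all of the total rise comes from the last two or three digits.
posteriors = [10 ** (k - 10) for k in range(11)]  # after k = 0..10 digits
for k in range(1, 11):
    print(f"digit {k}: +{posteriors[k] - posteriors[k - 1]:.1e}")
```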

Here's an argument for why it's wrong to say that a significant absolute increment of probability is needed for significant confirmation. Let A be the collective evidence of learning digits 1 through 7 and let B be the evidence of learning digits 8 through 10. On the absolute increment view, A provides me with no significant confirmation that I won, since it only raises my probability from 10^-10 to 0.001; but when I learn B after learning A, B provides me with significant confirmation. However, had I simply learned B without learning A first, that wouldn't have afforded me significant confirmation: my probability of winning would still have been only 10^-7. So the combination of two pieces of evidence, each insignificant on its own, gives me conclusive evidence.
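The arithmetic behind this, again as a sketch under the same uniformity assumption:

```python
# H = "my pick 7509994361 wins"; A = digits 1-7 match; B = digits 8-10 match.
p_H = 10 ** -10

p_H_given_A = p_H / 10 ** -7    # = 0.001: an insignificant absolute increment
p_H_given_B = p_H / 10 ** -3    # = 1e-07: likewise insignificant on its own
p_H_given_AB = p_H / 10 ** -10  # = 1.0:   together they are conclusive

print(p_H_given_A, p_H_given_B, p_H_given_AB)
```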

There is nothing absurd about this on its own. Sometimes two insignificant pieces of evidence combine into conclusive evidence. Learning (C) that the winning number is written on a piece of paper is insignificant on its own; learning (D) that 7509994361 is written on that piece of paper is insignificant on its own; but combined they are conclusive. Yes: but that is a special case, where the two pieces of evidence fit together in a special way. We can say something about how they fit together: they fail to be statistically independent conditionally on the hypothesis I am considering. Conditionally on 7509994361 being the winning number, the event that 7509994361 is what is written on the paper and the event that what is written on the paper is the winning number are far from independent; indeed, they coincide. But in my earlier example, A and B are independent conditionally on the hypothesis I am considering, and (to an excellent approximation) independent conditionally on the negation of that hypothesis. They aren't pieces of evidence that interlock the way C and D do.
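One can check the independence claim numerically; here is a sketch using exact fractions, again assuming a uniformly random winning number. Conditional on the hypothesis H that my number wins, both A and B are certain, so independence there is trivial; conditional on not-H, independence holds up to an error of order 10^-10:

```python
# Conditional on not-H, the winner is uniform over the other N - 1 numbers.
from fractions import Fraction

N = 10 ** 10  # equally likely ten-digit winning numbers

p_A = Fraction(10 ** 3 - 1, N - 1)  # first 7 digits match mine, isn't mine
p_B = Fraction(10 ** 7 - 1, N - 1)  # last 3 digits match mine, isn't mine
p_AB = Fraction(0, 1)               # both matching would make it mine

print(float(p_A * p_B))  # ~1e-10: the product of the marginals
print(float(p_AB))       # 0.0: independence fails only by ~1e-10
```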

3 comments:

Avi said...

Just use log odds instead of probability, and each piece of evidence offers you the same log-odds increment. Independent pieces of evidence combine by simple addition.
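A quick sketch of the point in the post's numbers (the tenth digit pushes the probability to 1, i.e., infinite log odds, so the loop stops at nine):

```python
# On a log-odds scale, each digit contributes about the same amount of
# evidence (roughly one decade), and independent contributions add.
import math

def log10_odds(p):
    return math.log10(p / (1 - p))

prev = log10_odds(10 ** -10)             # prior log odds
for k in range(1, 10):                   # the 10th digit is conclusive
    cur = log10_odds(10 ** (k - 10))
    print(f"digit {k}: log-odds shift = {cur - prev:+.4f}")
    prev = cur
```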

Heath White said...

I think it's clearer if you just stick with the numbers. After each digit, you have ten times the evidence you had before that you are the winner. That sounds significant! But it's not until the last few digits that the probability that you are NOT the winner declines more than trivially.

Alexander R Pruss said...

Avi:
That's how I'd like to think about it.