Friday, April 28, 2017

Saying with possible worlds what can't be said with box and diamond

The literature contains a number of examples of a modal claim that can be made with possible worlds language but not in box-diamond language. Here is one that occurred to me that is simpler than any of the examples I’ve seen:

  • (*) Reality could have been different.

Very simple in possible worlds language: There is a non-actual world. (Note: This doesn’t work on the version of Lewis’s modal realism that allows for duplicate worlds. All the worse for that version.) But no box-diamond statement expresses (*). One can, of course, say that there aren’t any unicorns but could be, which implies (*), but that’s not the same as saying (*).

Fun with St Petersburg

Consider any game, like St Petersburg, where the expected payoff is infinite but the prizes are guaranteed to be finite. For instance, a number x is picked uniformly at random in the open interval from 0 to 1, and your prize is 1/x.

Suppose you and I independently play this game, and we find our winnings. Now I go up to you and say: “Hey, I’ve got a deal for you: you give me your winnings plus a million dollars, and then you’ll toss a hundred coins, and if they’re all heads, you’ll get one percent of what I won.” That’s a deal you can’t rationally refuse (assuming I’m dead-set against your negotiating a better one). For the payoff for refusing is the finite winnings you have. The payoff for accepting is −1000000 + 2^(−100)·0.01·(+∞) = +∞.
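As a sanity check on the setup, here is a quick simulation of my own (not from the post): every individual prize is finite, yet the expected prize E[1/X] = ∫₀¹ dx/x diverges, so the sample mean is dominated by rare huge prizes and never settles down.

```python
import random

def prize():
    """One play of the game: pick x uniformly in (0, 1) and pay out 1/x.

    random.random() returns a value in [0, 1); we reroll the
    (probability-zero) value 0 so every prize stays finite.
    """
    x = random.random()
    while x == 0.0:
        x = random.random()
    return 1.0 / x

random.seed(0)
prizes = [prize() for _ in range(100_000)]
# Every prize is a finite number greater than 1, but the running mean
# keeps drifting upward as the number of runs grows.
print(max(prizes), sum(prizes) / len(prizes))
```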

Wow!

Now let’s play doubles! There are two teams: (i) I and Garibaldi, and (ii) you and Delenn. The members of each team don’t get to talk to each other during the game, but after the game each team evenly splits its winnings. This is what happens. The house calculates two payoffs, w1 and w2, using independent runs of our St Petersburg-style game. I am in a room with you; Garibaldi is in a room with Delenn. I and Delenn are each given w1; you and Garibaldi are each given w2. Now, by pre-arrangement with Garibaldi, I offer you the deal above: you give me a million, then toss a hundred coins, and you get one percent of my winnings if they’re all heads. You certainly accept. And Garibaldi offers exactly the same deal to Delenn, and she accepts. What’s the result? Well, the vast majority of the time, the Pruss-and-Garibaldi team ends up with all the winnings (w1 + w2 + w1 + w2 = 2w1 + 2w2), plus two million, and the you-and-Delenn team ends up out two million. But about once in 2^100 runs, the Pruss-and-Garibaldi team ends up with 1.99w1 + 1.99w2, plus two million, while you and Delenn end up with 0.01w1 + 0.01w2 − 2000000.
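To make the bookkeeping explicit, here is a sketch of my own of the final tallies (the variable names are just the w1 and w2 of the text):

```python
def team_totals(w1, w2, you_heads, delenn_heads):
    """Post-deal team totals for one run of the doubles game.

    Pruss and Delenn each start with w1; you and Garibaldi each start
    with w2.  In each deal the offeree hands over their winnings plus a
    million, and gets back 1% of the offerer's pre-deal winnings only
    on a run of 100 heads.
    """
    M = 1_000_000
    pruss = w1 + w2 + M - (0.01 * w1 if you_heads else 0.0)
    you = -M + (0.01 * w1 if you_heads else 0.0)
    garibaldi = w2 + w1 + M - (0.01 * w2 if delenn_heads else 0.0)
    delenn = -M + (0.01 * w2 if delenn_heads else 0.0)
    return pruss + garibaldi, you + delenn

# Usual case (no 100-heads run on either side): (2*w1 + 2*w2 + 2M, -2M).
print(team_totals(5.0, 7.0, False, False))
# Rare case (both offerees hit 100 heads): (1.99*(w1 + w2) + 2M, 0.01*(w1 + w2) - 2M).
print(team_totals(5.0, 7.0, True, True))
```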

And, alas, I don’t see a way to use Causal Finitism to solve this paradox.

Thursday, April 27, 2017

Materiality and spatiality

I’ve been fond of the theory that materiality is just the occupation of space. But here is a problem for that view.

I have argued previously that we should distinguish between the internal space (or geometry) of an object and external space. Here is a quartet of considerations:

  • Imagine a snake one light-year in length out in empty space arranged in a square. Then imagine that God creates a star in the middle of the square. The star instantly disturbs the geometry of space and makes the distances between parts on opposite sides of the square be different from what they previously were. But this does not make any intrinsic change to the snake until physical influence can reach the snake from the star, which will take about 1/8 of a year (the sides of the square will be 1/4 light-year, so the closest any part of the snake is to the center is 1/8 light-year). The internal geometry of the snake differs from the external one.

  • We have no difficulty imagining a magical house whose inside is larger than its outside.

  • Christ in the Eucharist has very different (larger!) internal size and geometry from the external size and geometry of where he is Eucharistically located.

  • Thought experiments about time travel and the twin paradox suggest that we should distinguish internal time from external time. But space is like time.

Now, if internal and external space can come apart so much, then it is plausible that an object could have internal space or geometry in the absence of any connection to external space. Furthermore, if a material object ceased to have an occupation relation to external space but retained its internal geometry, it would surely still be material. Only a material object can be a cube. But a cubical object could remain a cube in internal geometry even after losing all relation to external space. But if so, then materiality is not the occupation of external space.

In fact, even independently of the above considerations about internal and external space, it just doesn’t seem that objects are material in virtue of a relation to something beyond them—like external space.

So, it seems, objects aren’t material in virtue of the occupation of external space. Could they be material in virtue of the occupation of internal space? Not substances! A substance does not occupy its internal space. It has that internal space, and is qualified by it, but it seems wrong to say that it is in it in the sense of occupation. (Perhaps the proper parts of material substances do occupy the substance’s internal space.) But some substances, say pigs or electrons, are material. So materiality isn’t a function of the occupation of internal space, either. And unless we find some third sort of space, we can’t say that materiality is a function of the occupation of space.

Perhaps, though, we can say this. Materiality is the possession or occupation of space. Then material substances are material by possessing internal space, and the proper parts of material substances are material by occupying the substance’s internal space. On this view, the materiality of me and my heart are analogically related—a fine Aristotelian idea.

But I have a worry. Point particles may not exist, but they seem conceivable. And they would be material. But a point particle doesn’t seem to have an internal space or geometry. I am not sure what to say. Perhaps, a point particle can be said to be material by occupying external space (in my proposed account of materiality, I didn’t specify that the space was internal). If so, then a point particle, unlike a square snake, would cease to be material if it came to be unrelated to external space. Or maybe a point particle does have an internal zero-dimensional space. It is hard to see what the spatiality of this “space” would consist in, but then we don’t have a good account of the spatiality of space anyway. (Maybe the spatiality of an internal space consists in a potentiality to be aligned with external space?) And, finally, maybe point particles that are points both externally and internally (particles that have non-trivial internal geometry but that are externally point-like aren’t a problem for the view) either aren’t material or aren’t possible.

Wednesday, April 26, 2017

Surviving furlessness and inner earlessness

If we are animals, can we survive in a disembodied state, having lost all of our bodies, retaining only soul or form?

Here is a standard thought:

  1. Metabolic processes, homeostasis, etc. are defining features of being animals.

  2. In a disembodied state, one cannot have such processes.

  3. Something that is an animal is essentially an animal.

  4. So something that is an animal cannot survive in a disembodied state.

But here’s a parody argument:

  5. Fur and mammalian inner ear bones (say) are defining features of being mammals.

  6. In a furless and internally earless state, one cannot have such structures.

  7. Something that is a mammal is essentially a mammal.

  8. So something that is a mammal cannot survive in a furless and internally earless state.

I think 5-7 are no less plausible than 1-3. But 8 is clearly false: clearly, it is metaphysically possible to become a defective mammal that is furless and internally earless.

The obvious problem with 5, or with the inferences drawn from 5, is that what is definitory of being a mammal is being such that one should have fur and such-and-such an inner ear. The same problem afflicts 1: why not say that being such that one should have these processes and features is definitory of being an animal?

Person is not a natural kind

  1. God is not a member of any natural kind.

  2. If person is a natural kind, then every person is a member of a natural kind.

  3. God is a person.

  4. So, person is not a natural kind.

Monday, April 24, 2017

Do God's beliefs cause their objects?

Consider this Thomistic-style doctrine:

  1. God’s believing that a contingent entity x exists is the cause of x’s existing.

Let B be God’s believing that I exist. Then, either

  2. B exists in all possible worlds

or

  3. B exists in all and only the worlds where I exist.

(Formally, there are other options, but they have no plausibility. For instance, it would be crazy to think B exists in some but not all the worlds where I exist, or in some but not all the worlds where I don’t exist.)

Let’s consider (3) first. This, after all, seems the more obvious option. God’s beliefs are necessarily correct, so in worlds where I don’t exist, God doesn’t believe that I exist, and hence B doesn’t exist. Then, B is a contingent being that causes my existing. Now apply the Thomistic principle to this contingent being B. It exists, so God’s believing that B exists is the cause of B’s existing. Let B2 be God’s believing that B exists. Since B2 causes B, B2 must be distinct from B, as causation cannot be circular. Furthermore, if (3) is the right option in respect of B and me, then an analogue for B2 and B should hold: B2 will exist in all and only the worlds where B exists. The argument repeats to generate an infinite regress of divine believings: Bn is God’s believing that Bn−1 exists, and Bn causes Bn−1. This regress appears vicious.

So, initial appearances aside, (3) is not the way to go.

Let’s consider (2) next. Then B exists in some possible world w1 where I don’t exist. Now, at w1, God doesn’t believe that I exist, since necessarily God’s beliefs are correct. This seems to be in contradiction to the claim that B exists at w1. But it is only in contradiction if it is true at w1 that B is God’s believing that I exist. But perhaps it’s not! Perhaps (a) the believing B exists at the actual world and at w1 but with different content, or (b) B exists at w1 but isn’t a believing at w1.

Let’s think some more about (2). Let w2 be a world where only God exists (I am assuming divine simplicity; without divine simplicity, it might be that in any world where God exists, something else exists—viz., a proper part of God). Then by (2), B exists at w2. But only God exists at w2. So, God is identical to B at w2. But identity is necessary. Thus, God is actually identical to B. Moreover, what goes for B surely goes for all of God’s believings. Thus, all of God’s believings are identical with God.

It is no longer very mysterious that God’s believing that I exist is the cause of my existence. For God’s believing that I exist is identical with God, and of course God is the cause of my existence.

The difficulty, however, is with the radical content variation. The numerically same mental act B is actually a believing that I exist, while at w2 it is a believing that I don’t exist. Furthermore, if truthmaking involves entailment, we can no longer say that B truthmakes that God believes that I exist. For B can exist without God’s believing that I exist.

All this pushes back against (1). But now recall that I only called (1) a “Thomistic-style” doctrine, not a doctrine of St. Thomas. The main apparent source for the doctrine is Summa Theologica I.14.8. But notice some differences between what Aquinas says and (1).

The first is insignificant with respect to my arguments: Thomas talks of knowledge rather than belief. But (1) with knowing in place of believing is just as problematic. Obviously, it can’t be a necessary truth that God knows that I exist, since it’s not a necessary truth that I exist.

The second difference is this. In the Summa, Aquinas doesn’t seem to actually say that God’s knowledge that x exists is the cause of x’s existence. He just says that God’s knowledge is the cause of x’s existence. Perhaps, then, it is God’s knowledge in general, especially including knowledge of such necessary truths as that x would have such-and-such a nature, that is the cause of x’s existence. If so, then God’s knowledge would be a non-determining cause of things—for it could cause x but does not have to (and, indeed, in those worlds where x does not exist, it does not cause x). This fits well with what Aquinas says in Article 13, Reply 1: “So likewise things known by God are contingent on account of their proximate causes, while the knowledge of God, which is the first cause, is necessary.”

Maybe. I don’t know.

Thoughts on theistic Platonism

Platonists hold that properties exist independently of their instances. Heavy-weight Platonists add the further thesis that the characterization of objects is grounded in or explained by the instantiation of a property, at least in fundamental cases. Thus, a blade of grass is green because the blade of grass instantiates greenness (at least assuming greenness is one of the fundamental properties).

Heavy-weight Platonism has a significant attraction. After all, according to Platonism (and assuming greenness is a property),

  1. Necessarily (i) an object is green if and only if (ii) it instantiates greenness.

The necessary connection between (i) and (ii) shouldn’t just be a coincidence. Heavy-weight Platonism explains this connection by making (ii) explain or ground (i). Light-weight Platonism, which makes no claims about an explanatory connection between (i) and (ii), makes it seem like the connection is a coincidence.

Still, I think it’s worth thinking about some other ways one could explain the coincidence (1). There are three obvious formal options:

  2. (ii) explains (i)
  3. (i) explains (ii)
  4. Something else explains both (i) and (ii).

Option (2) is heavy-weight Platonism. But what about (3) and (4)? It’s worth noting that there are available theories of both sorts.

Here’s a base theory that can lead to any one of (2)–(4). Properties are conceptions in the mind of God. Furthermore, instantiation is divine classification: x’s instantiating a property P just is God classifying x under conception P. It is natural, given this base theory, to affirm (3): x’s instantiating greenness just is God’s classifying x under greenness, and God classifies x under greenness because x is green. Thus, x instantiates greenness because x is green.

But, interestingly, this base theory can give other explanatory directions. For instance, Thomists think that God’s knowledge is the cause of creation. This suggests a view like this: God’s classifying x under greenness (which on the base theory just is x’s instantiating greenness) causes x to be green. On this view, x is green because x instantiates greenness. If the “because” here involves grounding, and not just causation, this is heavy-weight Platonism, with a Thomistic underpinning. Either way, we get (2).

And here is a third option. God wills x to be green. God’s willing x to be green explains both x’s being green and God’s classifying x as green. The latter comes from God’s willing as an instance of what Anscombe calls intentional knowledge. This yields (4).

So, interestingly, a theistic conceptual Platonism can yield any one of the three options (2)–(4). I think the version that yields (3)—interestingly, not the Thomistic one—is the one that best fits with divine simplicity.

Thursday, April 20, 2017

Are we in a computer simulation?

Do we live in a computer simulation?

Here’s a quick and naive thought. We would expect most computer simulations to be of pretty poor quality and limited in scope. If we are in a simulation, the simulation we are in is of extremely high quality and of great scope. That’s not what we would expect on the simulation hypothesis. So, probably, we don’t live in a computer simulation.

But the following argument is pretty convincing:

  1. If materialism is true, then probably a computer simulation of a brain can think (since the best materialist theory of mind is functionalism).

  2. If a computer simulation of a brain can think, then most thinkers live inside computer simulations.

So, the argument that we don’t live in a computer simulation gives us evidence against materialism.

Animals

Suppose that somewhere in the galaxy there is a planet where there are large six-legged animals with an inner supportive structure, that evolved completely independently of any forms of life on earth and whose genetic structure is not based on DNA but another molecule. What I said seems perfectly possible. But it is impossible if animals are simply the members of the kingdom Animalia, since the six-legged animals on that planet are neither DNA-based nor genetically connected to the animalia on earth.

On the other hand, the supposition that somewhere (maybe in another universe) there is water that does not have H2O in it is an impossible one. So is the supposition that there are horses without DNA.

So the kind animal is disanalogous to the kinds water and horse. The kind water is properly identified with a chemical kind, H2O, and the kind horse is properly identified with a biological species, Equus ferus. But the kind animal does not seem to be properly identified with any biological kind.

One can have DNA-based animals and non-DNA-based animals. If the Venus fly-trap evolved the ability to move from place to place following its prey, it would be an animal, but still a member of Plantae. Animals are characterized largely functionally, albeit not purely so: they are also characterized by reference to the function of their embodiment—there cannot be any animals that are unembodied.

Is animal a genuine natural kind? Or is it a non-natural kind, constructed in the light of our species’ subjective interests? I don’t know. I take seriously, though, the possibility that there is an "Aristotelian" philosophical categorization that goes across biological categories.

Wednesday, April 19, 2017

How likely are you to be in a random finite subset of an infinite set?

Suppose that out of a set of infinitely many people, including you, a finite subset is chosen at random. How likely are you to be in that subset? Intuitively, not very likely. And the larger the infinity, the less likely.

But how do you pick out a finite subset at random? Here’s a natural way. First, pick out a subset at random, by flipping a fair coin for each person in the original set, and including a person in the subset if the coin comes up heads. Almost surely, this will generate an infinite subset (a consequence of the law of large numbers). But suppose this experiment is repeated—perhaps uncountably infinitely often—until the set picked out is finite. (This construction requires that the set of potential repetitions be well-ordered.) Or maybe you just get lucky, and to everybody’s surprise the set picked out is finite.

So now we have a method for picking out a finite subset at random (though it may take some luck). How likely are you to be in that finite subset?

Well, think about it step-by-step. Before you learned that the set picked out by the heads was finite, your probability that you were in the set was the probability that your coin landed heads, i.e., 1/2. Then you learn that the set of people for whom heads came up is finite. But this fact tells you nothing about your coin toss. For the claim that the set of people with heads is finite is logically equivalent to the claim that the set of people other than you with heads is finite. And the latter claim tells you nothing about your coin toss.

So, your probability needs to stay at 1/2.
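The independence point can be checked in a finite toy model (my own sketch; the arbitrary cutoff below stands in for "finitely many"): conditioning on few of the other coins landing heads does not move your own coin off 1/2.

```python
import random

def trial(n_others=50, cutoff=20):
    """Flip your coin and n_others other coins; report whether few
    others got heads, and whether you got heads."""
    you = random.random() < 0.5
    others_heads = sum(random.random() < 0.5 for _ in range(n_others))
    return others_heads <= cutoff, you

random.seed(0)
results = [trial() for _ in range(200_000)]
kept = [you for few, you in results if few]
# Your coin is independent of the others, so even after conditioning
# on the unlikely event that few others got heads, your frequency of
# heads stays near 1/2.
print(len(kept), sum(kept) / len(kept))
```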

Thus, the probability that a random finite subset of the infinitely many people includes you is 1/2. This is a little counterintuitive when the infinity is countable. And it becomes far more counterintuitive the larger this infinity gets. It is a stupendously implausible claim when that infinity is large, say ℶω.

Causal finitism blocks the story by making it impossible for you to find out that the set of people who got heads is finite.

Tuesday, April 18, 2017

A modified consciousness-causes-collapse interpretation of quantum mechanics

Here are two technical problems with consciousness causes collapse (ccc) interpretations of quantum mechanics. In both, suppose a quantum experiment with two possible outcomes, A and B, of equal probability 1/2.

1. The sleeping experimenter: The experimenter is dreamlessly asleep in the lab, and the experiment is rigged to wake her up by ringing a bell if A is measured. If conscious observation causes collapse, then when A is measured, the experimenter is woken up, and collapse occurs. Presumably, this happens half the time. But what happens the other half of the time? No conscious observation occurs, so no collapse occurs, and the system remains in a superposition of A and B states. That means that collapse will only happen when the experimenter naturally wakes up several hours later. And when collapse happens then, the A and B outcomes are again equally likely. So overall there is a 1/2 + (1/2)(1/2) = 75% chance of an A outcome, which is wrong.
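A toy simulation of my own (purely illustrative) reproduces the bad prediction of the naive collapse story:

```python
import random

def naive_ccc_outcome():
    """One run of the sleeping-experimenter setup under naive ccc.

    If A is measured, the bell wakes her and collapse fixes A.  If
    not, no conscious observation occurs, the superposition persists,
    and her natural waking hours later collapses the still-50/50
    state afresh.
    """
    if random.random() < 0.5:
        return "A"  # bell rings: immediate collapse to A
    return "A" if random.random() < 0.5 else "B"  # later collapse

random.seed(0)
outcomes = [naive_ccc_outcome() for _ in range(100_000)]
# Frequency of A comes out near 0.75 rather than the correct 0.5.
print(outcomes.count("A") / len(outcomes))
```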

2. Order of explanation: The experimenter is awake. On outcome A, a bell rings. On B, a red light goes on. In fact, A is observed. What caused the collapse? It wasn’t the observer’s hearing the bell, because the bell’s occurrence is explanatorily posterior to the collapse. But we said that it is conscious observation that causes the collapse. Which conscious observation was that, if it wasn’t the hearing of the bell? Note that the observer need not have been conscious prior to hearing the bell or seeing the light—the experiment can be rigged so that either the bell or the light wakes up the observer. Perhaps the cause of the collapse was the state of being about to hear a bell or see a red light, or maybe it was the disjunctive state of hearing a bell or seeing a red light. But the former is a strange kind of cause, and the second would be a weird case where the disjunction is prior to its true disjunct.

The first problem strikes me as more serious than the second—the second is a matter of strangeness, while the first yields incorrect predictions.

I’ve been thinking about a curious ccc interpretation that escapes both problems. On this interpretation, the universe branches like in Everett-style multiverse explanations, but a conscious observation in any branch causes collapse. Collapse is the termination of a bunch of branches, including perhaps the termination of the branch in which the collapse-causing observation occurred. The latter isn’t some sort of weird retroactive thing—it’s just that the branch terminates right after the observation.

In case 2, the universe branches into an A-universe and a B-universe (or into pluralities of universes of both sorts). In the A-universe a bell is heard by the observer. In the B-universe a red light is seen by her. When this happens, collapse occurs, and there is no future to the observer after the observation of the red light, because in fact (or so case 2 was set up) it is the observation of A that won out. Or at least this is how it is when the two observations would be simultaneous. Suppose next that the bell observation would be made slightly earlier. Then as soon as the bell observation is made, the B-branch is terminated, and the red light observation is never made. On the other hand, if the light observation is timed to come first, then as soon as the light observation is made in the B-branch, this observation terminates the B-branch, and shortly afterwards the bell is heard in the remaining branch, the A-branch.

Case 1, then, works as follows. The universe branches into an A-universe, with a bell, and a silent B-universe. As soon as the bell is heard in the A-universe, the observation causes collapse, and one of the branches is terminated. If it’s the A-branch that’s terminated, then the observer heard the bell, but the future of that observation is annihilated. Instead, a couple of hours later the observer wakes up in the B-branch, and deduces that B must have been measured. If it’s the B-branch that’s terminated, on the other hand, then the observer’s observing of the bell has a future.

Prior to collapse, on this interpretation, we are located in multiple branches. And then our multilocation is wholly or partly resolved by collapse in favor of location in a proper subset of the branches where we were previously located. What happened to us in the other branches really did happen to us, but we never remember it, because it’s not recorded to memory.

On this interpretation, various things are observed by us which we never remember, because they have no future. This is a bit disquieting. Suppose that instead of the red light in case 2, the experimenter is poked with a red hot poker. Then if she hears the bell ring, she is relieved to have escaped the pain. But she didn’t: for if the poking is timed at or before the ringing, then the poking really did happen to her, albeit in another branch and not recorded to memory.

Fortunately for us, the futureless unremembered bad things were very brief: they only lasted for as short a period of time as was needed to establish them as phenomenologically different from the other possible outcome. So in the poked-with-a-poker branch, one only feels the pain for the briefest moment. And that’s not a big deal.

I worry a bit about quantum Zeno issues with this interpretation.

Thursday, April 13, 2017

Lying and killing

It initially seems to be a strange combination of views that (a) killing in defense of the innocent is sometimes permissible, but (b) lying is never permissible, not even in defense of the innocent. Yet that is the predominant view in the Christian tradition. Does this mean that truth is more valuable than life? That doesn't sound right, at least not in general.

I want to try a very speculative solution to this paradox, one I don't want to fully endorse as it raises some further problems. Thomas Aquinas has an interesting position on the lethal defense of the innocent: only officers of the state are permitted to kill intentionally, while private citizens may use defensive means that they foresee could be lethal only if they don't intend death.

Why the difference? Well, here is my crazy thought: perhaps all instances of permissible intentionally lethal defense of the innocent are effectively instances of the death penalty. In emergency situations, where there is an imminent threat to innocents, the state authorizes its officers to execute aggressors on the spot, without the usual legal safeguards. Every instance of permissible killing in a just war is an execution--we just don't call it that, because the emergency context makes very different procedures appropriate. Note, further, that as we learn from John Paul II's Evangelium Vitae, the death penalty is only permissible when there are no other means to the defense of society. Thus the intentionally lethal means to the defense of the innocent can only be deployed as a last resort. That is why, say, prisoners of war are not killed--there is no longer a need for an emergency execution once they are disarmed.

Suppose that this eccentric theory of lethal police and military action is correct. Then it is easy to see why there is a distinction between intentional killing and lying. Permissible intentional killing is an act of justice, an imposition of a just penalty on an aggressor. If we add the Boethian idea that it is an intrinsic benefit to one to have justice done to one, then the aggressor is directly benefited by being punished. But even without that idea, the distinction between a defensive act of justice and a merely defensive act seems significant. There is a fine Kantian thought that just punishment constitutes a showing of respect to the person being punished; but a lie is innately disrespectful to the rationality of the person lied to.

Still, the puzzle remains. Why is it that the greater harm of death is appropriate punishment while the lesser harm of being lied to is not? But not every harm is appropriate as a punishment, and sometimes a lesser harm is inappropriate as punishment while a greater is appropriate. Sometimes, this is for reasons of dignity. Thus, it is a lesser harm to lose one's arms than to lose one's life, but judicial amputation is barbaric and contrary to the dignity of the criminal (it is hard to fully explain this intuitive judgment). Sometimes, the lesser harm just wouldn't fit the crime, or maybe even any crime. Suppose a politician misused her office. Public infamy could be fitting punishment. But while the harm to reputation is greater in public infamy than in gossip, it just wouldn't be a fitting punishment to have officers of the court gossip about the politician behind her back. In fact, being gossiped about simply doesn't seem to be the right sort of harm to be a punishment--maybe it is the essential isolation of it from the consciousness of the person being gossiped about that makes it inappropriate. I have the intuition that being lied to is pretty much like that--it is essentially isolated from the consciousness of the person being lied to (it's not a lie if they tell you they're lying to you!), and it just doesn't seem the right kind of harm to be a punishment.

The difficulty with this account is that modeling intentionally lethal police and military action as a form of the death penalty suffers from serious problems. The main one is that we have good reason to think that many enemy soldiers, even if their side is opposed to justice, are likely to be non-culpable, because they are likely to be ignorant of the fact that their side is opposed to justice. Perhaps, though, in an emergency situation--and a war is always an emergency--the evidential standards can be much lower, and so we don't need to examine culpability. Another problem is that this account will not allow the police to engage in intentionally lethal action against a clearly insane attacker. But perhaps that's the right conclusion.

Wednesday, April 12, 2017

Types of normativity

It is widely thought that our actions are governed by multiple types of normativity, including the moral, the prudential and the epistemic, and that each type of normativity comes along with a store of reasons and an ought. Moreover, some actions—mental ones—can simultaneously fall under all three types of normativity.

Let’s explore this hypothesis. If we make this distinction between types of normativity, we will presumably say that morality is the realm of other-concerned reasons and prudence is the realm of self-concerned reasons. Suppose that at the cost of an hour of torture, you can save me from a minor inconvenience. Then (a) you have a moral reason to save me from the inconvenience and (b) you have a prudential reason not to save me.

It seems clear that you ought not to save me from the inconvenience. But what is this ought? It isn’t moral, since you have no moral reasons not to save me. Moreover, what explains the existence of this ought seems to be prudential reasons. So it seems to be a prudential ought.

But actually it’s not so clear that this is a prudential ought. For a further part of the explanation of why you ought not save me is that the moral reasons in favor of saving me from a minor inconvenience are so very weak. So this is an ought that is explained by the presence of prudential reasons and the weakness of the opposed moral reasons. That doesn’t sound like an ought belonging to prudential normativity. It seems to be a fourth kind of ought—an overall ought.

But perhaps moving to a fourth kind of ought was too quick. Consider that it would be wrongheaded in this case to say that you morally ought to save me, even though all the relevant moral reasons favor saving me, and if these were all the reasons you had, i.e., if there were no cost to saving me from the inconvenience, it would be the case that you morally ought to save me. (Or so I think. Add background assumptions about our relationship as needed to make it true if you’re not sure.) So whether you morally ought to save me depends on what non-moral reasons you have. So maybe we can say that in the original case, the ought really is a prudential ought, even though its existence depends on the weakness of the opposed moral reasons.

This, however, is probably not the way to go. For it leads to a great multiplication of types of ought. Consider a situation where you have moral and prudential reasons in favor of some action A, but epistemic reasons to the contrary. We can suppose that the situation is such that the moral reasons by themselves are insufficient to make it be the case that you ought to perform A, and the prudential reasons by themselves are insufficient, but when combined they become sufficiently strong in contrast with the epistemic reasons to generate an ought. The ought which they generate, then, is neither moral nor prudential. Unless we’ve admitted the overall ought as a fourth kind, it seems we have to say that the moral and prudential reasons generate a moral-and-prudential ought. And then we immediately get two other kinds of ought in other cases: a moral-and-epistemic ought and a prudential-and-epistemic ought. So now we have six types of ought.
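The count here can be checked mechanically. A toy enumeration (illustrative only, with the hyphenated names standing in for the hybrid oughts): the three basic kinds of reason, plus one combined ought for each pair of kinds that might jointly generate an ought, already yield six.

```python
from itertools import combinations

kinds = ["moral", "prudential", "epistemic"]

# Each basic kind gets its own ought, and each pair of kinds,
# when only jointly sufficient, would need a combined ought.
pair_oughts = ["-and-".join(pair) for pair in combinations(kinds, 2)]
oughts = kinds + pair_oughts

print(oughts)
# six types so far, before the disjunctive and graded mixtures below enter
```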

And the types multiply. Suppose you learn, by consulting an expert, that an action has no cost and there are either moral or prudential considerations in favor of the action, but not both. You ought to do the action. But what kind of ought is that? It’s some kind of seventh ought, a disjunctive moral-exclusive-or-prudential kind. Furthermore, there will be graded versions: a mostly-moral-but-slightly-epistemic ought, a slightly-moral-but-mostly-epistemic ought, and so on. And what if this happens? An expert tells you, correctly or not, that she has discovered a fourth kind of reason, beyond the moral, prudential and epistemic, and that some action A has no cost but is overwhelmingly favored by the fourth kind of reason. If you trust the expert, you ought to perform the action. But what is the ought here? Is it an "unknown-type" ought?

It is not plausible to think that oughts divide in any fundamental way into all these many kinds, corresponding to different kinds of normativity.

Rather, it seems, we should just say that there is a single type of ought, an overall ought. If we still want to maintain there are different kinds of reasons, we should say that there is variation in what kinds of reasons and in what proportion explain that overall ought.

But the kinds of reasons are subject to the same line of thought. You learn that some action benefits you or a stranger, but you don’t know which. Is this a moral or a prudential reason to do the action? I suppose one could say: you have a prudential reason to do the action in light of the fact that it has a chance of benefiting you, and a moral reason to do it in light of the fact that it has a chance of benefiting a stranger. But the reason-giving force of the fact that the action benefits you or a stranger is different from the reason-giving force of the facts that it has a chance of benefiting you and a chance of benefiting the stranger.

Here’s a technical example of this. Suppose you have no evidence at all whether the action benefits you or the stranger, but it must be one or the other, to the point that no meaningful probability can be assigned to either hypothesis. (Maybe a dart is thrown at a target, and you are benefited if it hits a saturated non-measurable subset and a stranger is benefited otherwise.) That you have no meaningful probability that the action benefits you is a reason whose prudential reason-giving force is quite unclear. That you have no meaningful probability that the action benefits a stranger is a reason whose moral reason-giving force is quite unclear. But the disjunctive fact, that the action benefits you or the stranger, is a quite clear reason.

All this makes me think that reasons do not divide into discrete boxes like the moral, the prudential and the epistemic.

Tuesday, April 11, 2017

My old Right Reason posts

In case anybody is interested, I added a side-bar link to my old posts on the now-defunct Right Reason blog, from about a decade ago. I think some of the arguments I had posted there are still interesting.

GPS signals, normativity and the morality of lying

I will argue that lying is never permissible. The argument is a curious argument, maybe Kantian in flavor, which attempts to establish the conclusion without actually adverting to any explanation of what is bad about lying.

GPS satellites constantly broadcast messages that precisely specify the time at which each message is sent, together with precise data on the satellite’s orbit. By comparing the receipt times of messages from multiple GPS satellites with the positions of those satellites, a GPS receiver can calculate its own position.
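The position calculation can be sketched in code. What follows is an illustrative toy, not the real algorithm: it works in two dimensions, assumes noise-free signals, and ignores the receiver’s clock bias, which a real receiver must solve for as an extra unknown. The function and variable names are my own.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def receipt_times(receiver, sats, t_send=0.0):
    """Time at which each satellite's message (sent at t_send) reaches the receiver."""
    return [t_send + math.dist(receiver, s) / C for s in sats]

def locate(sats, t_recv, guess=(0.0, 0.0), iters=25):
    """Recover the receiver's 2D position from receipt times by
    Gauss-Newton iteration on the range equations."""
    x, y = guess
    for _ in range(iters):
        dists = [math.dist((x, y), s) for s in sats]
        # residual: predicted range minus range implied by the travel time
        r = [di - C * t for di, t in zip(dists, t_recv)]
        # Jacobian rows: unit vectors pointing from each satellite to the receiver
        J = [((x - sx) / di, (y - sy) / di)
             for (sx, sy), di in zip(sats, dists)]
        # normal equations (J^T J) step = -J^T r, solved as a 2x2 system
        sxx = sum(jx * jx for jx, _ in J)
        sxy = sum(jx * jy for jx, jy in J)
        syy = sum(jy * jy for _, jy in J)
        gx = -sum(jx * ri for (jx, _), ri in zip(J, r))
        gy = -sum(jy * ri for (_, jy), ri in zip(J, r))
        det = sxx * syy - sxy * sxy
        x += (syy * gx - sxy * gy) / det
        y += (sxx * gy - sxy * gx) / det
    return x, y

# demo: three satellites, receiver at (1000 m, 2000 m)
sats = [(0.0, 20_000e3), (15_000e3, 18_000e3), (-12_000e3, 19_000e3)]
t = receipt_times((1000.0, 2000.0), sats)
print(locate(sats, t))  # converges to approximately (1000.0, 2000.0)
```

The point of the sketch is just that receipt times plus satellite positions determine the receiver’s position, so the accuracy of the time field matters directly to the accuracy of the computed position.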

A part of the current design specifications of US GPS satellites is apparently that they can regionally degrade the signal in wartime in order to prevent enemies from making use of the signal (US military receivers can presumably circumvent the degradation).

Now, let’s oversimplify the situation and make up some details (the points I am making don’t match the actual, publicly available GPS signal specifications), since my point is philosophy of language, not GPS engineering. So I’m really talking about GPS satellites in another possible world.

Suppose that normally the satellite is broadcasting the time n in picoseconds up to a precision of plus or minus ten picoseconds, and suppose that currently we receive a message of n in the time field from a satellite. What does that message mean?

First of all, the message does not mean that the current time is n picoseconds. For the design specifications, I have stipulated, are that there is a precision of plus or minus ten picoseconds. Thus, what it means is something more like:

  1. The current time is n ± 10 ps, i.e., is within 10 ps of n ps.

But now suppose that it is a part of the design and operation specifications that in wartime the locally relevant satellites add a pseudorandom error of plus or minus up to a million picoseconds (remember that I’m making this up). Then what the message field means is something like:

  2. Either (a) this is a satellite that is relevant to a war region, the current time is n ± 10⁶ ps and [extra information available to the military], or (b) the current time is n ± 10 ps.

In particular, when wartime signal degradation happens, the time field of the GPS message is (assuming the satellite is working properly) still conveying correct information—the satellite isn’t lying. For the semantic content of the time field supervenes on the norms in the design and operation specifications, and if these norms specify that wartime degradation occurs, then that possibility becomes a part of the content of the message.
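The disjunctive truth-conditions in (2) can be written out as a small predicate (a toy model, using the made-up numbers from the story):

```python
def time_field_true(n_ps, true_time_ps, war_region_satellite):
    """Truth-conditions of the fictional time field, per (2):
    (a) the satellite is relevant to a war region and the true time is
        within a million picoseconds of n, or
    (b) the true time is within ten picoseconds of n."""
    clause_a = war_region_satellite and abs(true_time_ps - n_ps) <= 1_000_000
    clause_b = abs(true_time_ps - n_ps) <= 10
    return clause_a or clause_b
```

On this model, a wartime broadcast degraded by, say, 500,000 ps still counts as true (`time_field_true(0, 500_000, True)` holds), while the very same error from a satellite outside a war region yields a false message, which is exactly the asymmetry the argument turns on.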

Suppose lying is sometimes morally obligatory. Thus, there will be a sentence “s” and circumstances Cs in which it is both true that s and morally required to say that not s. Suppose Alice is uttering “Not s” in an assertoric way. Morality is part of Alice’s (and any other human being’s) “design and operation specifications”. Thus on the model of my analysis (2) of the semantic content of the (fictionalized) time field of the GPS message, what is being stated or asserted by Alice is not simply:

  3. Not s

but rather:

  4. Either (a) Cs obtains, or (b) not s.

But if that’s the content of Alice’s statement, then Alice is not actually lying when she says “Not s” in Cs. And the same point goes through even if Alice isn’t obligated but is merely permitted to say “Not s” in Cs. The norms in her design and operation specifications make (4) be the content of her statement rather than (3).

In other words:

  5. If lying that s is obligatory or permissible in Cs, then lying is actually impossible in Cs.

But the consequent of (5) is clearly false. Thus, the antecedent is false. And hence:

  6. Lying is never obligatory or permissible.

Note that a crucial ingredient in my GPS story is that the norms governing the degradation of GPS messages are in some way public. If these norms were secret, then the military would be making the GPS satellites do something akin to lying when they degraded their messages. But moral norms are essentially public.

Objection 1: The norms relevant to the determination of the content of a statement are not moral but linguistic norms. The moral norms require that Alice utter “Not s” in an assertoric way only when (4) obtains. But the linguistic norms require that Alice utter “Not s” in an assertoric way only when (3) obtains. And hence (3) is the content of “Not s”, not (4).

Response: This is a powerful objection. But compare the GPS case. We could try to distinguish narrowly technical norms of satellite operation from the larger norms on which GPS satellites are controlled by the US military in support of military aims. That would lead to the thought that the time field of the satellite (on my fictionalized version of the story) would mean (1). But I think it is pretty compelling that the time field of the satellite would mean (2). The meaning of the message needs to be determined according to the overall norms of design and operation, not some narrow technical subset of the specifications. Similarly, the meaning of a linguistic performance needs to be determined according to the overall norms of design and operation of the human being engaging in the performance. And it is precisely the moral norms that are such overall norms.

Second, linguistic norms are norms of voluntary behavior, since linguistic performance is a form of voluntary behavior. But a norm of voluntary behavior that conflicts with morality is null and void insofar as it conflicts, much as an illegal order is no order and an unconstitutional law is no law.

Third, on a view on which linguistic norms have the kind of independence from moral norms that the objection requires, it is difficult to specify what makes them linguistic. For we cannot simply say that they are the overall norms governing linguistic behavior. Moral norms do that, as well. A distinction like the one in the objection would make sense in the case of something where the rules are formalized. Thus, there are circumstances when the rules of chess require one to do something immoral. (For instance, suppose that a tyrant tells you she will kill an innocent unless you move a pawn forward by three squares. The rules of chess require you to refrain from doing that, but it is immoral for you to refrain from it.) But the rules of chess are simply a well-defined set of statements about what constitutes a game of chess, and it is relatively easy to tell if something is a rule of chess or not. But linguistic norms are just some among the many norms governing human behavior, and it is hard to specify which ones they are, if one can't do it by the subject matter of the norms. (I am also inclined to think that the rules of chess might not actually be norms; they are, rather, classificatory rules that specify what counts as a victory, loss, draw or forfeit; the norms governing play are moral.)

Objection 2: Content is not normatively determined.

Response: If that’s right, then my line of argument does fail. But I think a normative picture of content is the right one. In part it’s my Pittsburgh pedigree that makes me want to say that. :-)

Objection 3: Bite the bullet and say that when Alice utters “Not s”, she is in fact asserting (4) and not lying even if Cs obtains. While on this view, technically, lying is never permissible, in practice the view permits the same behaviors as a view on which lying is sometimes permissible.

Response: This just seems implausible. But I wish I had a better response.