Friday, April 20, 2018

Non-instrumental pursuit

I pursue money instrumentally—for the sake of what it can buy—but I pursue fun non-instrumentally.

Here’s a tempting picture of the instrumental/non-instrumental difference as embodied in the money–fun example:

  1. Non-instrumental pursuit is a negative concept: it is instrumental pursuit minus the instrumentality.

But (1) is mistaken for at least two reasons. The shallower reason is an observation we get from the ancients: it is possible to simultaneously pursue the same goal both instrumentally and non-instrumentally. You might have fun both non-instrumentally and in order to rest. But then lack of instrumentality is not necessary for non-instrumental pursuit.

The deeper reason is this. Suppose I am purely instrumentally pursuing money for the sake of what it can buy, but I then remove the instrumentality, either by ceasing to pursue things that can be bought or by ceasing to believe that money can buy things, without adding any new motivations to my will. Then clearly the pursuit of money rationally needs to disappear—if it remains, that is a clear case of irrationality. But if non-instrumental pursuit were simply an instrumental pursuit minus the instrumentality, then why wouldn’t the removal of the instrumentality from my pursuit of money leave me non-instrumentally and rationally pursuing money, just as I non-instrumentally and rationally pursue fun?

There is a positive element in my pursuit of fun, a positive element that would be lacking in my pursuit of money if I started with instrumental pursuit of money and took away the instrumentality and somehow (perhaps per impossibile) continued (but now irrationally) pursuing money. It is thus more accurate to talk of “pursuit of a goal for its own sake” than to talk of “non-instrumental pursuit”, as the latter suggests something negative.

The difference here is somewhat like the difference between the concepts of an uncaused being and a self-existent being. If you take away the cause of a brick and yet keep the brick (perhaps per impossibile), you have a mere uncaused being. That’s not a self-existent being like God is said to be.

Thursday, April 19, 2018

Affronts to human dignity

Some evils are not just very bad. They are affronts to human dignity. But those evils, paradoxically, provide an argument for the existence of God. We do not know what human dignity consists in, but it isn’t just being an agent, being really smart, etc. For human dignity to play the sort of moral role it does, it needs to be something beyond the physical, something numinous, something like a divine spark. And on our best theories of what things are like if there is no God, there is nothing like that.


  1. There are affronts to human dignity.

  2. If there are affronts to human dignity, there is human dignity.

  3. If there is human dignity, there is a God.

  4. So, there is a God.

This argument is very close to the one I made here, but manages to avoid some rabbit-holes.

Wednesday, April 18, 2018

Van Inwagen on evil

Peter van Inwagen argues that because a little less evil would always serve God’s ends just as well, there is no minimum to the amount of evil needed to achieve God’s ends, and hence the arguer from evil cannot complain that God could have achieved his ends with less evil. Van Inwagen gives a nice analogy of a 10-year prison sentence: clearly, he thinks, a 10-year sentence can be just even if 10 years less a day would achieve all the purposes of the punishment just as well.

I am not convinced about either the punishment or the evil case. Perhaps the judge really shouldn’t choose a punishment where a day less would serve the purposes just as well. I imagine that if we graph the satisfaction of the purposes of punishment against the amount of punishment, we initially get an increase, then a level area, and then eventually a drop-off. Van Inwagen is thinking that the judge is choosing a punishment in the level area. But maybe instead the judge should choose a punishment in the increase area, since only then will it be the case that a lower punishment would serve the purposes of the punishment less well. The down-side of choosing the punishment in that area is that a higher punishment would serve the purposes of the punishment better. But perhaps there is a moral imperative to sacrifice the purposes of punishment to some degree, in the name of not punishing more than is necessary. Mercy is more important than retribution, etc.

Similarly, perhaps, God should choose to permit an amount of evil that sacrifices some of his ends (ends other than the minimization of evil), in order to ensure that the amount of evil that he permits is such that any decrease in the evil would result in a decrease in the satisfaction of God’s other ends. If van Inwagen is right about there not being sharp cut-offs, then this may require God to choose to permit an amount of evil such that more evil would have served God’s other ends better.

The above fits with a picture on which decrease of evil takes a certain priority over the increase of good.

Tuesday, April 17, 2018

In vitro fertilization and Artificial Intelligence

The Catholic Church teaches that it is wrong for us to intentionally reproduce by any means other than marital intercourse (though things can be done to make marital intercourse more fertile than it otherwise would be). In particular, human in vitro fertilization is wrong.

But there is clearly nothing wrong with our engaging in in vitro fertilization of plants. And I have never heard a Catholic moralist object to the in vitro fertilization of farm animals.

Suppose we met intelligent aliens. Would it be permissible for us to reproduce them in vitro? I think the question hinges on whether what is wrong with in vitro fertilization has to do with the fact that the creature that is reproduced is one of us or has to do with the fact that it is a person. I suspect it has to do with the fact that it is a person, and hence our reproducing non-human persons in vitro would be wrong, too. Otherwise, we would have the absurd situation where we might permissibly reproduce an alien in vitro, and they would permissibly reproduce a human in vitro, and then we would swap babies.

But if what is problematic is our reproducing persons in vitro, then we need to look for a relevant moral principle. I think it may have something to do with the sacredness of persons. When something is sacred, we are not surprised that there are restrictions. Sacred acts are often restricted by agent, location and time. They are something whose significance goes beyond humanity, and hence we do not have the authority to engage in them willy-nilly. It may be that the production of persons is sacred in this way, and hence we need the authority to produce persons. Our nature testifies to us that we have this authority in the context of marital intercourse. We have no data telling us that we are authorized to produce persons in any other way, and without such data we should not do it.

This would have a serious repercussion for artificial intelligence research. If we think there is a significant chance that strong AI might be possible, we should stay away from research that might well produce a software person.

The independence of the attributes in Spinoza

According to Spinoza, all of reality—namely, deus sive natura and its modes—can be independently understood under each of (at least) two attributes: thought and extension. Under the attribute of thought, we have a world of ideas, and under the attribute of extension, we have a world of bodies. There is identity between the two worlds: each idea is about a body. We have a beautiful account of the aboutness relation: the idea is identical to the body it is about, but the idea and body are understood under different attributes.

But here is a problem. It seems that to understand an idea, one needs to understand what the idea is about. But this seems to damage the conceptual independence of the attributes of thought and extension, in that one cannot fully understand the aboutness of the ideas without understanding extension.

I am not sure what to do about this.

Monday, April 16, 2018

The Repugnant Conclusion and Strong AI

Derek Parfit’s Repugnant Conclusion says that, on standard utilitarian assumptions, if n is sufficiently large, then n lives of some minimal level of flourishing will be better than any fixed-size society of individuals who greatly flourish.
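The utilitarian arithmetic behind this can be made explicit. A minimal sketch, with purely illustrative numbers (none of them Parfit’s):

```python
from fractions import Fraction

# Illustrative figures: m people each at high welfare M, versus n people
# each at a tiny positive welfare e (a life "barely worth living").
m, M = 10_000_000, Fraction(100)
e = Fraction(1, 100)

# Total utility n*e beats m*M as soon as n > m*M/e, however large M is.
n = int(m * M / e) + 1
assert n * e > m * M
```

Since e is positive, such an n always exists, and that is the engine of the Repugnant Conclusion.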

I’ve been thinking about the interesting things that you can get if you combine the Repugnant Conclusion argument with strong Artificial Intelligence.

Assume utilitarianism first.

Given strong Artificial Intelligence, it should be possible to make a computer system that achieves some minimal level of human-like flourishing. Once that is achieved, economies of scale become possible, and I expect it should be possible to replicate that system a vast number of times, and to do so much more cheaply per copy than the cost of supporting a single human being. Note that the replication can be done both synchronically and diachronically: we should optimize the hardware and software both to make many instances of the hardware and to run as many flourishing lives per day as possible. Once the program is written, since an exact copy is being run for each instance with the same inputs, we can ensure equal happiness for all.

If strong AI is possible, generating such minimally flourishing AI and making a vast number of replicates seems a more promising way to increase utility than fighting disease and poverty among humans. Indeed, it would likely be more efficient to decrease the number of humans to the minimum needed to serve the great number of duplicates. At that point, the morally best thing for humans to do will be to optimize the hardware to allow us to build more computers running the happy-ish software and to run each life in as short an amount of external time as possible, and to work to increase the amount of flourishing in the software.

Now note an interesting difference from the traditional Repugnant Conclusion. It seems not unlikely that if strong AI is achieved, we will be able to repeatably, safely and cheaply achieve in software not just the minimal levels of human-like flourishing, but high levels of human-like flourishing, even of forms of flourishing other than the pleasure or desire fulfillment that classical utilitarian theories talk about. We could make a piece of software that quickly and cheaply enjoys the life of a classical music aficionado, enjoying the best examples of human classical music culture, and that has no hankering for anything more. And if compatibilism is true (and it is likely that it is true if strong AI is true), then we could make a piece of software that reliably engages in acts of great moral heroism in its simulated world. We lose a bit of value from the fact that these acts only affect a simulated world, but we gain by being able to ensure that no immoral activity mars the value. If we are not certain of the correct axiology, we could hedge our bets by making a software life that is quite flourishing on any plausible axiology: say one that combines pleasure, desire satisfaction, enjoyment of the arts and virtuous activity. And then just run vast numbers of copies of that life per day.

It is plausible that, unless there is some deep spiritual component to human flourishing (of a sort that is unlikely to be there given the materialism that seems needed for strong AI to be possible), we will not only be able to more efficiently increase the sum good by running lots of copies of a happy life than by improving human life, but we will be able to more efficiently improve on the average good.

But one thing is unchanged. The conclusion is still repugnant. A picture of our highest moral imperative as the servicing of a single computer program, run on as many machines as possible and repeated as quickly as possible, is repugnant.

A tempting objection is to say that multiple copies of the same life count as just one. That’s easily fixed: a well-controlled amount of algorithmic variation can be introduced into lives.

Observe, too, that the above line of thought is much more practical than the original Repugnant Conclusion. The original Repugnant Conclusion is highly theoretical, in that it is difficult to imagine putting into place the kind of society that is described in it without a significant risk of utility-destroying revolution. But right now rich philanthropists could switch their resources from benefiting the human race to working to develop a happy AI (I hesitate to write this sentence, with a slight fear that someone might actually make that switch—but the likelihood of my blog having such an effect seems small). One might respond to the Repugnant Conclusion that all ethical theories give implausible answers in some hypothetical cases. But the case here is not hypothetical.

We can take the above, just as the original Repugnant Conclusion, to be a reductio ad absurdum against utilitarianism. But it seems to be more than that. Any plausible ethics has to have a consequentialist component, even if pursuit of the consequences is restricted by deontic considerations. So on many competing ethical theories, there will still be a pull to the conclusion, given the vast amount of total value, and the respectable amount of average (and median) value achieved in the repugnant proposal. And one won’t be able to resist the pull by denying the picture of value that underwrites utilitarianism, because as noted above, “deeper” values can be achieved in software, given strong AI.

I can think of three plausible ways out of the strong AI version of the Repugnant Conclusion:

  1. The correct axiology lays great stress on the value of deep differences between lives, deeper than can be reliably and safely achieved through algorithmic variation (if there is too much variation, we risk producing misery).

  2. There is a deontic restriction prohibiting the production of software-based persons, perhaps because it is wrong for us to have such a total influence over the life of another person or because it is wrong for us to produce persons by any process other than natural reproduction.

  3. Strong AI is impossible.

I am inclined to think all three are true. :-)

Friday, April 13, 2018

Impairment and non-human organisms

Consider a horse with three legs, a bird with one wing, an oak tree without bark, and a yeast cell unable to reproduce. There is something that all four have in common with each other, and which they also have in common with the human who has only one leg. And it seems to me to be important for an account of disability to acknowledge that which all these five organisms have in common. If the right account of disability is completely disjoined from anything that happens in non-human organisms—or even from anything that happens in non-social organisms—then there is another concept in the neighborhood that we really should also be studying in addition to disability, maybe “impairment”.

Moreover, it seems clear that the thing that the five organisms in my examples have in common is bad as far as it goes, though of course it might be good for the organism on balance (the one-winged bird might be taken into a zoo, and thereby saved from a predator).

Thursday, April 12, 2018

Divine authority over us

Imagine a custody battle between Alice and Bob over their child Carl. Suppose the court finds that Alice loves Carl much more than Bob does, that Alice is much wiser than Bob, and that Alice knows Carl and his needs much better than Bob does. Moreover, it is discovered that Bob has knowingly unjustifiedly harmed Carl, while Alice has never done that. In the light of these, it is obvious that Alice is a more fitting candidate to have authority over Carl than Bob is.

But now, suppose x is some individual. Then God loves x much more than I love x, God is much wiser than I, God knows x and his needs much better than I do. Moreover, suppose that I have knowingly unjustifiedly harmed x, while God has never done that. In light of these, it should be plausible that God is a more fitting candidate to have authority over x than I am.

Suppose, however, that I am x. The above is still true. God loves me much more than I love myself; God is much wiser than I; God knows me and my needs much better than I do. And I have on a number of occasions knowingly unjustifiedly harmed myself—indeed, in typical cases when I sin, that’s what has happened—while God has never knowingly unjustifiedly harmed me. So, it seems that God is a more fitting candidate to have authority over me than I am.

I am not endorsing a general principle that if someone loves me more than I love myself, etc., then they are more fit to have authority over me. For the someone might be someone that has little intuitive standing to have authority over me—a complete stranger who inexplicably enormously cares about me might not have much authority over me. But it is prima facie plausible that God has significant authority over me, for the same sorts of reasons that my parents had authority over me when I was a child. And the above considerations suggest that God’s authority over me is likely to be greater than my own authority over myself.

If it is correct that God, if he existed, would have greater authority over me than I have over myself, then that would have significant repercussions for the problem of evil. For a part of the problem involves the question of whether it is permissible for God to allow a person to suffer horrendously even for the sake of greater (or incommensurable but proportionate) goods to them or (especially) another. But it would be permissible for me to allow myself to suffer horrendously for the sake of greater (or incommensurable but proportionate) goods for me or another. If God has greater authority over me than I have over myself, then it would likewise be permissible for God.

This does not of course solve the problem of evil. There is still the question whether allowing the sufferings people undergo has the right connection with greater (or incommensurable but proportionate) goods, and much of the literature on the problem of evil has focused on that. But it does help significantly with the deontic component of the question. (Though even with respect to the deontic aspects, there is still the question of divine intentions—it would I think be wrong even for God to intend an evil for the sake of a good. So care is still needed in theodicy to ensure that the theodicy doesn’t make God out to be intending evils for the sake of goods.)

Wednesday, April 11, 2018

A parable about sceptical theism and moral paralysis

Consider a game. The organizers place a $20 bill in one box and a $100 bill in another box. They seal the boxes. Then they put a $1 bill on top of one of the boxes, chosen at random fairly, and a $5 bill on top of the other box. The player of the game gets to choose a box, in which case she gets both what’s in the box and what’s on top of the box. Everyone knows that that’s how the game works.

If you are an ordinary person playing the game, you will be self-interestedly rational to choose the box with the $5 on top of it. The expected payoff for the box with the $5 on it is $65, while the expected payoff for the other box is $61, when one has no information about which box contains the $20 and which contains the $100.
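The expected payoffs can be checked directly. A quick sketch of the computation from the player’s point of view:

```python
# From the player's point of view, each box hides $20 or $100 with equal
# probability, since the top bills were assigned at random.
contents = (20, 100)
expected_hidden = sum(contents) / len(contents)  # $60

ev_5_box = 5 + expected_hidden  # box with the $5 on top
ev_1_box = 1 + expected_hidden  # box with the $1 on top
assert ev_5_box == 65 and ev_1_box == 61
```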

If Alice is an ordinary person playing the game and she chooses the box with the $1 on top of it, that’s very good reason to doubt that Alice is self-interestedly rational.

But now suppose that I am considering the hypothesis that Bob is a self-interestedly rational being who has X-ray vision that can distinguish a $20 bill from a $100 bill inside the box. Then if I see Bob choose the box with the $1 on top of it, that’s no evidence at all against the hypothesis that he is such a being, i.e., a self-interestedly rational being with X-ray vision. In repeated playings, we’ll see Bob choose the $1 box half the time and the $5 box half the time, if he is such a being, and if we didn’t know that Bob has X-ray vision, we would think that Bob is indifferent to money.
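The repeated-playings claim can be verified with a short simulation (a sketch that assumes Bob, with his X-ray vision, always takes the box hiding the $100):

```python
import random

random.seed(0)

trials = 10_000
picks_dollar1_top = 0
for _ in range(trials):
    # The $1 and $5 tops are assigned at random, so the $100 box carries
    # the $1 half the time; Bob takes the $100 box regardless.
    top_on_100_box = random.choice([1, 5])
    if top_on_100_box == 1:
        picks_dollar1_top += 1

# Bob ends up with the $1-topped box about half the time.
assert abs(picks_dollar1_top / trials - 0.5) < 0.05
```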

Sceptical theism and the infinity of God

I’ve never been very sympathetic to sceptical theism until I thought of this line of reasoning, which isn’t really new, but I’ve just never quite put it together in this way.

There are radically different types of goods. At perhaps the highest level—call it level A—there are types of goods like the moral, the aesthetic and the epistemic. At a slightly lower level—call it level B—there are types of goods like the goods of moral rightness, praiseworthiness, autonomy, virtue, beauty, sublimity, pleasure, truth, knowledge, understanding, etc. And there will be even lower levels.

Now, it is plausible that a perfect being, a God, would be infinitely good in infinitely many ways. He would thus infinitely exemplify infinitely many types of goods at each level, either literally or by analogy. If so, then:

  1. If God exists, there are infinitely many types of good at each level.

  2. We only have concepts of a finite number of types of good at each level.

  3. So, there are infinitely many types of good at each level that we have no concept of.

Now, let’s think what would likely be the case if God were to create a world. From the limited theodicies we have, we know of cases where certain types of goods would justify allowing certain evils. So we wouldn’t be surprised if there were evils in the world, though of course all evils would be justified, in the sense that God would have a justification for allowing them. But we would have little reason to think that God would limit his design of the world to only allowing those evils that are justified by the finite number of types of good that we have concepts of. The other types of good are still types of good. Given that there are infinitely many such goods, and only finitely many of the ones we have concepts of, it would not be significantly unlikely that if God exists, a significant proportion—perhaps a majority—of the evils that have a justification would have a justification in terms of goods that we have no concept of.

And so when we observe a large proportion of evils that we can find no justification for, we observe something that is not significantly unlikely on the hypothesis that God exists. But if something is not significantly unlikely on a hypothesis, it’s not significant evidence against that hypothesis. Hence, the fact that we cannot find justifications for a significant proportion of the evils in the world is not significant evidence against the existence of God.

Sceptical theism has a tendency to undercut design arguments for the existence of God. I do not think this version of sceptical theism has that tendency, but that’s a matter for another discussion (perhaps in the comments).

Bayesianism and the multitude of mathematical structures

It seems that every mathematical structure (there are some technicalities as to how to define it) could in fact be the correct description of fundamental physical structure. This means that making Bayesianism the whole story about epistemology—even for idealized agents—is a hopeless endeavor. For there is no hope for an epistemologically useful probability measure over the collection of all mathematical structures unless we rule out the vast majority of structures as having zero probability.

A natural law or divine command appendix to Bayesianism can solve this problem by requiring us to assign zero probability to some structures that are metaphysically possible but that our Creator wants us to be able to rule out a priori.

Monday, April 9, 2018

Reincarnation and theodicy

As I was teaching on the problem of evil today, I was struck by how nicely reincarnation could provide theodicies for recalcitrant cases. “Why is the fawn dying in the forest fire? Well, for all we know, it’s a reincarnation of someone who committed genocide and is undergoing the just punishment for this, a punishment whose restorative effect will only be seen in the next life.” “Why is Sam suffering with no improvement to his soul? Well, maybe the improvement will only manifest in the next life.”

Of course, I don’t believe in reincarnation. But if the problem of evil is aimed at theism in general, then it seems fair to say that for all that theism in general says, reincarnation could be true.

Here is a particular dialectical context where bringing in reincarnation could be helpful. The theist presses the fine-tuning argument. The atheist, instead of embracing a multiverse (as is usual), responds with the argument from evil. The theist now says: While reincarnation may seem unlikely, it surely has at least a one in a million probability conditionally on theism; on the other hand, fine-tuning has a much, much smaller probability than one in a million conditionally on single-universe atheism. So theism wins.
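The comparison the theist is making can be put in rough numbers. Both figures below are assumptions for illustration, the second merely standing in for “much, much smaller than one in a million”:

```python
# Lower bound: P(evil data | theism) >= P(reincarnation | theism)
#              * P(evil data | theism & reincarnation).
p_reinc_given_theism = 1e-6   # "at least a one in a million probability"
p_evil_given_reinc = 1.0      # assume the reincarnation theodicy absorbs the evil

p_evil_given_theism = p_reinc_given_theism * p_evil_given_reinc

# Assumed cost of fine-tuning on single-universe atheism (illustrative).
p_finetune_given_atheism = 1e-12

# On these numbers, the combined evidence favors theism by a large factor.
likelihood_ratio = p_evil_given_theism / p_finetune_given_atheism
assert likelihood_ratio > 1e5
```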

Friday, April 6, 2018

Peer disagreement and models of error

You and I are epistemic peers and we calculate a 15% tip on a very expensive restaurant bill for a very large party. As shared background information, add that calculation mistakes for you and me are pretty much random rather than systematic. As I am calculating, I get a nagging feeling of lack of confidence in my calculation, which results in $435.51, and I assign a credence of 0.3 to that being the tip. You then tell me that you’re not sure what the answer is, but that you assign a credence of 0.2 to its being $435.51.

I now think to myself. No doubt you had a similar kind of nagging lack of confidence to mine, but your confidence in the end was lower. So if each of us had only our own individual calculation to go on, we’d each have good reason to doubt that the tip is $435.51. But it would be unlikely that we would both make the same kind of mistake, given that our mistakes are random. So, the best explanation of why we both got $435.51 is that we didn’t make a mistake, and I now believe that $435.51 is right. (This story works better with larger numbers, as there are more possible randomly erroneous outputs, which is why the example uses a large bill.)

Hence, your lower reported credence of 0.2 not only did not push me down from my credence of 0.3, but it pushed me all the way up into the belief range.

Here’s the moral of the story: When faced with disagreement, instead of moving closer to the other person’s credence, we should formulate (perhaps implicitly) a model of the sources of error, and apply standard methods of reasoning based on that model and the evidence of the other’s credence. In the case at hand, the model was that error tends to be random, and hence it is very unlikely that an error would result in the particular number that was reported.
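The model in the tip case can be made numerical. A minimal sketch, where the size of the space of plausible wrong answers is an assumed parameter:

```python
# Each person's credence that their own calculation is right.
p_me, p_you = 0.3, 0.2
# Assumed number of equally likely wrong outputs a random error could produce.
n_wrong = 1000

# We both reported $435.51: either both calculations were right, or we both
# independently landed on the same one of the n_wrong random errors.
p_both_right = p_me * p_you
p_same_error = (1 - p_me) * (1 - p_you) / n_wrong

posterior = p_both_right / (p_both_right + p_same_error)
assert posterior > 0.99  # agreement pushes the credence into the belief range
```

The larger n_wrong is, the closer the posterior gets to 1, which is why the story works better with big bills.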

Thursday, April 5, 2018

Defeaters and the death penalty

I want to argue that one can at least somewhat reasonably hold this paradoxical thesis:

  • The best retributive justice arguments in favor of the death penalty are sound and there are no cases where the death penalty is permissible.

Here is one way in which one could hold the thesis: One could simply think that nobody commits the sorts of crimes that call for the death penalty. For instance, one could hold that nobody commits murder, etc. But it’s pretty hard to be reasonable in thinking that: one would have to deny vast amounts of data. A little less crazily, one could think that the mens rea conditions for the crimes that call for the death penalty are so strict that nobody actually meets them. Perhaps every murderer is innocent by reason of insanity. That’s an improvement over the vast amount of denial that would be involved in saying there are no murders, but it’s still really implausible.

But now notice that the best retributive justice arguments in favor of the death penalty had better not establish that there are crimes such that it is absolutely morally required that one execute the criminal. First, no matter how great the crime, there are circumstances which could morally require us to let the criminal go. If aliens were to come and threaten to destroy all life on earth unless we spared a mass murderer, we would surely have to just leave the mass murderer to divine justice. Second, if the arguments in favor of the death penalty are to be plausible, they had better be compatible with the possibility of clemency.

Thus, the most the best of the arguments can be expected to establish is that there are crimes which generate strong moral reasons of justice to execute the criminal, but the reasons had better be defeasible. One could, however, think that defeaters occur in all actual cases. Of course, some stories about defeaters are unlikely to be reasonable: one is not likely to reasonably hold that aliens will destroy all of us if we execute someone.

But there could be defeaters that could be more reasonably believed in. Here are some such things that one could believe:

  • God commanded us to show criminals a clemency that in fact precludes the death penalty.

  • Criminals being executed are helpless, and killing helpless people—even justly—causes a harm to the killer’s soul that is a defeater for the reasons for the death penalty.

  • We are all guilty of offenses that deserve the death penalty—say, mortal sins—and executing someone when one oneself deserves the death penalty is harmful to one’s character in a way that is a defeater for the reasons for the death penalty.

(I myself am open to the possibility that the first of these could actually be the case in New Testament times.)

Wednesday, April 4, 2018

Group impairment and Aristotelianism

Aristotelians have a metaphysical ground for claims about what is normal and abnormal in an individual: the form of a substance grounds the development of individuals in teleological ways and specifies what the substance should be like. Thus a one-eyed owl is impaired—while it is an owl, it falls short of the specification in its form.

But there is another set of normalcy claims that are harder to ground in form: claims about the proportions of characteristics in a population. Sex ratios are perhaps the most prevalent example: if all the foals born over the next twenty years were, say, male, then that would be disastrous for the horse as a species. And yet it seems that each individual foal could still be a perfect instance of its kind, since both a male and a female can be a perfect instance of horsehood. Caste in social insects is another example: it would be disastrous for a bee hive if all the females developed into workers, even though each one could be a perfect bee.

The two cases are different. The sex of a horse is genetically determined, while social insect caste is largely or wholly environmental. Still, both are similar in that the species not only has norms as to what individuals should be like but also what the distribution of types of individuals should be. There is not only the possibility of individual but of group impairment. But what is the metaphysics behind these norms?

Infamously, Aristotle interpreters differ on whether forms are individual or common: whether two members of the same species have a merely exactly similar or a numerically identical form. Here is a place where taking forms to be common would help: for then the form could not only dictate the variation between the parts of each organism’s body but also the variation between the organisms in the species. But taking forms to be common would be ethically disastrous, because it would mean that all humans have the same soul, since the soul is the form of the human being.

Here’s my best solution to the puzzle. The form specifies the conditions of the flourishing of an individual. But these conditions can be social in addition to individual. Thus, a perfectly healthy and well-nourished male foal would not be flourishing if it lacks a society with potential future mates. And while each worker bee can internally be a fulfilled worker bee, it is not flourishing if its work does not in fact help support a queen. These social conditions for flourishing are constitutive. It’s not that the lack of a queen will cause the worker bee to die sooner (though for all I know, it might), but that the lack of a queen is constitutive of the worker bee being poorly off.

Once we see that there can be constitutive social conditions for flourishing, it is natural to think that there will be constitutive environmental conditions for flourishing. And this could be the start of an Aristotelian philosophy of ecology.