Friday, January 19, 2018

A quick argument against some materialisms

  1. Any pretty simple component of us can be replaced by a functionally equivalent prosthesis that isn’t a part of us without affecting our mental functioning.

  2. It is not possible to replace all our pretty simple components by prostheses that aren’t part of us without affecting our mental functioning.

  3. Hence, we are not wholly constituted by a finite number of pretty simple components.

This argument tells against all materialisms that compose us from pretty simple components. How simple is “pretty simple”? Well, simple enough that premise 1 be true. A neuron? Maybe. A molecule? Surely. It doesn’t, however, tell against materialisms that do not compose us from pretty simple components, such as a materialism on which we are modes of global fields.

Wednesday, January 17, 2018

Arbitrariness, probability and infinitesimals

A well-known objection to replacing the zero probability of some events—such as getting heads infinitely many times in a row—with an infinitesimal is arbitrariness. Infinitesimals are usually taken to be hyperreals and there are infinitely many hyperreal extensions of the reals.

This version of the arbitrariness objection has an answer. There are extensions of the reals that one can unambiguously define. Three examples: (1) the surreals, (2) formal Laurent series and (3) the Kanovei-Shelah model.

But it turns out that there is still an arbitrariness objection in these contexts. Instead of saying that the choice of extension of the reals is arbitrary, we can say that the choice of particular infinitesimals within the system to be assigned to events is arbitrary.

Here is a fun fact. Let R be the reals and let R* be any extension of R that is a totally ordered vector space over the reals, with the order agreeing with that on R. (This is a weaker assumption than taking R* to be an ordered field extension of the reals.) Say that an infinitesimal is an x in R* such that −y < x < y for any real y > 0.

Theorem: Suppose that P is an R*-valued finitely additive probability on some algebra of sets, and suppose that P assigns a non-real number to some set. Then there are uncountably many different R*-valued finitely additive probability assignments Q on the same algebra of sets such that:

  1. P(A) is real if and only if Q(A) is real, and in that case P(A)=Q(A).

  2. All corresponding linear combinations of P and Q are ordinally equivalent to each other, i.e., for any sets A1, ..., An, B1, ..., Bm in the algebra and any real a1, ..., an, b1, ..., bm, we have ∑aiP(Ai)<∑biP(Bi) if and only if ∑aiQ(Ai)<∑biQ(Bi).

  3. P(A) and Q(A) differ by a non-zero infinitesimal whenever P(A) is non-real.

Condition (2) has some important consequences. First, it follows that ordinal comparisons of probabilities are equally preserved by P and by Q. Second, it follows that both probabilities will assign the same results to decision problems with real-number utilities. Third, it follows that P(A)=P(B) if and only if Q(A)=Q(B), so any symmetries preserved by P will be preserved by Q. These remarks show that it is difficult indeed to hold that the choice of P over Q (or any of the other uncountably many options) is non-arbitrary, since it seems that any epistemic, decision-theoretic and symmetry constraints satisfied by P will be satisfied by Q.

Sketch of proof: For any finite member x of R* (x is finite if and only if there is a real y such that −y < x < y), let s(x) be the unique real number such that x − s(x) is infinitesimal. Let i(x)=x − s(x). Then for any real number r > 0, let Qr(A)=s(P(A)) + ri(P(A)). Note that s and i are linear transformations, from which it follows that Qr is a finitely additive probability assignment. It is not difficult to show that (1) and (2) hold, and that (3) holds if r ≠ 1.
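The construction can be made concrete with a small Python model (my own illustration, not anything from the post). It represents the finite elements of a toy one-infinitesimal extension R + Rε as pairs (s, i) standing for s + iε, with the lexicographic order, and checks that Qr preserves ordinal comparisons while shifting non-real values by a nonzero infinitesimal:

```python
from fractions import Fraction

# A finite element s + i*eps (eps a fixed positive infinitesimal) is modeled
# as the pair (s, i). This toy space is just the ordered vector space R + R*eps,
# which satisfies the theorem's hypotheses; it is not a full hyperreal field.

def lt(x, y):
    # Lexicographic order: s + i*eps < s' + i'*eps iff s < s',
    # or s = s' and i < i' (eps > 0 is below every positive real).
    return x[0] < y[0] or (x[0] == y[0] and x[1] < y[1])

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def scale(a, x):
    # Multiplication by a real scalar a.
    return (a * x[0], a * x[1])

def Q(r, p):
    # The s + r*i construction: keep the standard part s(p),
    # rescale the infinitesimal part i(p) by the real factor r > 0.
    return (p[0], r * p[1])

# A toy finitely additive probability P on three atoms; values sum to 1.
P = {
    "A": (Fraction(1, 2), Fraction(1)),   # 1/2 + eps
    "B": (Fraction(1, 2), Fraction(-2)),  # 1/2 - 2*eps
    "C": (Fraction(0), Fraction(1)),      # eps
}
assert add(add(P["A"], P["B"]), P["C"]) == (1, 0)

r = Fraction(3)
Qr = {k: Q(r, v) for k, v in P.items()}

# (1) Standard parts agree; non-real values shift by a nonzero infinitesimal.
assert all(P[k][0] == Qr[k][0] for k in P)
assert all(P[k] != Qr[k] for k in P if P[k][1] != 0)

# (2) Ordinal comparisons of real-linear combinations are preserved:
# here P(C) = eps versus 2P(A) - P(B) = 1/2 + 4*eps.
lhs_P, rhs_P = P["C"], add(scale(2, P["A"]), scale(-1, P["B"]))
lhs_Q, rhs_Q = Qr["C"], add(scale(2, Qr["A"]), scale(-1, Qr["B"]))
assert lt(lhs_P, rhs_P) and lt(lhs_Q, rhs_Q)
```

Since Q differs from P only by the real factor r on the infinitesimal coordinate, every strict inequality between real-linear combinations is preserved in both directions, which is exactly condition (2) above.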

Remark 1: I remember seeing the s + ri construction, but I can’t remember where. Maybe it was in my own work, maybe in something by someone else (Adam Elga?).

Remark 2: What if we want to preserve facts about conditional probabilities? This is a bit trickier. We’ll need to assume that R* is a totally ordered field rather than a totally ordered vector space. I haven’t yet checked what properties will be preserved by the construction above then.

Free will, randomness and functionalism

Plausibly, there is some function from the strengths of my motivations (reasons, desires, etc.) to my chances of decision, so that I am more likely to choose that towards which I am more strongly motivated. Now imagine a machine I can plug my brain into such that when I am deliberating between options A and B, the machine measures the strengths of my motivations, applies my strengths-to-chances function, randomly selects between A and B in accordance with the output of the strengths-to-chances function, and then forces me to do the selected option.
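The imagined machine is easy to sketch in code. Here is a minimal Python simulation, assuming (purely for illustration) a proportional strengths-to-chances function; the function and names are my own hypothetical choices, not anything specified in the post:

```python
import random

def strengths_to_chances(strength_a, strength_b):
    # A hypothetical strengths-to-chances function: the chance of an option
    # is proportional to the strength of the motivation for it.
    return strength_a / (strength_a + strength_b)

def decision_machine(strength_a, strength_b, rng):
    # The machine: measure the motivation strengths, apply the
    # strengths-to-chances function, and randomly select an option
    # in accordance with its output.
    chance_a = strengths_to_chances(strength_a, strength_b)
    return "A" if rng.random() < chance_a else "B"

# Long-run frequencies match the chances assigned by the function.
rng = random.Random(0)
n = 100_000
freq_a = sum(decision_machine(3.0, 1.0, rng) == "A" for _ in range(n)) / n
# freq_a is close to 3/(3+1) = 0.75
```

By construction, nothing in the chances of the outcomes distinguishes this machine from the agent's own indeterministic deliberation, which is what gives the objection below its bite.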

Here then is a vivid way to put the randomness objection to libertarianism (or more generally to a compatibilism between freedom and indeterminism): How do my decisions differ from my being attached to the decision machine? The difference does not lie in the chances of outcomes.

That the machine is external to me does not seem to matter. For we could imagine that the machine comes to be a part of me, say because it is made of organic matter that grows into my body. That doesn’t seem to make any difference.

But when the randomness problem is put this way, I am not sure it is distinctively a problem for the libertarian. The compatibilist has, it seems, an exactly analogous problem: Why not replace the deliberation by a machine that makes one act according to one’s strongest motivation (or, more generally, whatever motivation it is that would have been determined to win out in deliberation)?

This suggests (weakly) that the randomness problem may not be specific to libertarianism, but may be a special case of a deeper problem that both compatibilists and libertarians face.

It seems that both need to say that it deeply matters just how the decision is made, not just its functional characteristics. And hence both need to deny functionalism.

Monday, January 15, 2018

If computers can be free, compatibilism is true

In this post I want to argue for this:

  1. If a computer can non-accidentally have free will, compatibilism is true.

Compatibilism here is the thesis that free will and determinism can both obtain. My interest in (1) is that I think compatibilism is false, and hence I conclude from (1) that computers cannot non-accidentally have free will. But one could also use (1) as an argument for compatibilism.

Here’s the argument for (1). Assume that:

  2. Hal is a computer that non-accidentally has free will.

  3. Compatibilism is false.

Given (2) and (3):

  4. Hal’s software must make use of an indeterministic (true) random number generator (TRNG).

For the only indeterminism that non-accidentally enters into a computer (i.e., not merely as a glitch in the hardware) is through TRNGs.

Now imagine that we modify Hal by outsourcing all of Hal’s use of its TRNG to some external source. Perhaps whenever Hal’s algorithms need a random number, Hal opens a web connection and requests a random number from an external service. As long as the TRNG is always truly random, it shouldn’t matter for anything relevant to agency whether the TRNG is internal or external to Hal. But if we make Hal function in this way, then Hal’s own algorithms will be deterministic. And Hal will still be free, because, as I said, the change won’t matter for anything relevant to agency. Hence a deterministic system can be free, contrary to (3). Hence (2) and (3) cannot both be true, and so we have (1).

We perhaps don’t even need the thought experiment of modifying Hal to argue for a problem with (2) and (3). Hal’s actions are at the mercy of the TRNG. Now, the output of the TRNG is not under Hal’s rational control: if it were, then the TRNG wouldn’t be truly random.

Objection 1: While Hal’s own algorithms, after the change, would be deterministic, the world as a whole would be indeterministic. And so one can still maintain a weaker incompatibilism on which freedom requires indeterminism somewhere in the world, even if not in the agent.

Response: Such an incompatibilism is completely implausible. Being subject to random external vagaries is no better for freedom than being subject to determined external vagaries.

Objection 2: It really does make a big difference whether the source of the randomness is internal to Hal or not.

Response: Suppose I buy that. Now imagine that we modify Hal so that at the very first second of its existence, before it has any thoughts about anything, the software queries a TRNG to generate a supply of random numbers sufficient for all subsequent algorithmic use. Afterwards, instead of calling on a TRNG, Hal simply takes one of the generated random numbers. Now the source of randomness is internal to Hal, so he should be free. And, strictly speaking, Hal thus modified is not a deterministic system, so he is not a counterexample to compatibilism. However, an incompatibilism that allows for freedom in a system all of whose indeterminism happens prior to any thoughts that the system has is completely implausible.

Objection 3: The argument proves too much: it proves that nobody can be free if compatibilism is false. For whatever the source of indeterminism in an agent is, we can label that “a TRNG”. And then the rest of the argument goes through.

Response: This is the most powerful objection, I think. But I think there is a difference between a TRNG and a free indeterministic decision. In an indeterministic free computer, the reasons behind a choice would not be explanatorily relevant to the output of the TRNG (otherwise, it’s not truly random). We will presumably have some code like:

if (TRNG() < weightOfReasons(A)/(weightOfReasons(A)+weightOfReasons(B))) {
   do A
} else {
   do B
}
where TRNG() is a function that returns a truly random number from 0 to 1. The source of the indeterminism is then independent of the reasons for the options A and B: the function TRNG() does not depend on these reasons. (Of course, one could set up the algorithm so that there is some random permutation of the random number based on the options A and B. But that permutation is not going to be rationally relevant.) On the other hand, an agent truly choosing freely does not make use of a source of indeterminism that is rationally independent of the reasons for action—she chooses indeterministically on the basis of the reasons. How that’s done is a hard question—but the above arguments do not show it cannot be done.

Objection 4: Whatever mechanism we have for freedom could be transplanted into a computer, even if it’s not a TRNG.

Response: It is central to the notion of a computer, as I understand it, that it proceeds algorithmically, perhaps with a TRNG as a source of indeterminism. If one transplanted whatever source of freedom we have, the result would no longer be a computer.

Renewed invitation to coauthorship

I posted an invitation like this some years ago, but it's time to repost it: I am generally open to coauthoring articles based on blog posts. So if you like a post, and have ideas favorable to it, and you want to coauthor a paper based on it, write me. Tell me a bit about yourself and your ideas for the piece. I would prefer it if you knew the literature (which I often don't know very well--the blog posts are in all sorts of areas, including areas that I don't do active research in, and it wouldn't surprise me if many of them were rehashing ideas that were well known).

In the interests of full disclosure, I should say that last fall, I had two failed projects of this sort. In one, my coauthor, after about two drafts, found a fatal objection to the main argument. In the other, I found a fatal objection to the main argument. Alas, blog posts don't always materialize into good arguments.

Natural event kinds and Frankfurt cases

Choices are transitions from an undecided to a decided state. Suppose choices are a natural kind of event. Then only the right sort of transition from an undecided to a decided state will be a choice.

Here, then, is something that is epistemically possible. It could be that a choice is a kind of thing that can be produced in only one way, namely by the agent freely choosing. Compare essentiality of evolutionary origins for biological kinds: no animal that isn’t the product of evolution could be a lion. Of course, one can have something internally just like a lion arising from lightning hitting a swamp and one can have a transition from an undecided to a decided state arising from a neuroscientist’s manipulation, but these won’t be a lion or a choice, respectively.

If this is right, then it seems no Frankfurt story can make a choice unavoidable. For to make a choice unavoidable, an intervener would have to be able to cause a choice in case the agent wasn’t going to make it. In other words, there will be no Frankfurt argument against the principle of alternate possibilities:

  • If x chose A, then it was causally possible for x not to have chosen A.

This is rather flickery, though: it doesn’t require that x could have chosen non-A.

Sunday, January 14, 2018

Exercise machine USB game controller

I made a USB game controller where game movement and buttons are controlled with a Nunchuck or a Gamecube controller and speed of movement (slider) is controlled with the rotation sensor of an elliptical or exercise bike, as a way to encourage self and family to exercise.
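The firmware isn’t shown in the post, but the heart of the slider logic is just a mapping from the time between rotation-sensor pulses to a normalized speed value. Here is a sketch of that mapping in Python; the threshold values are illustrative guesses, not taken from the actual controller:

```python
def pulse_interval_to_slider(interval_s, fastest_s=0.3, slowest_s=3.0):
    """Map the time between rotation-sensor pulses to a 0.0-1.0 slider value.

    Shorter intervals (faster pedaling) give higher slider values; intervals
    outside [fastest_s, slowest_s] are clamped so a stopped or glitching
    sensor cannot produce out-of-range values.
    """
    interval_s = max(fastest_s, min(slowest_s, interval_s))
    # Speed is inversely proportional to the pulse interval.
    speed = 1.0 / interval_s
    lo, hi = 1.0 / slowest_s, 1.0 / fastest_s
    return (speed - lo) / (hi - lo)
```

In the actual device this value would be reported over USB as the HID slider axis, with the Nunchuck or Gamecube controller supplying movement and buttons.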

Friday, January 12, 2018

Open theism and "never" facts

Suppose a version of open theism on which facts about future free choices have non-trivial truth values which God doesn’t know. Then here is a disquieting feature of this open theism, given eternal life. It implies that there are truths that God never finds out.

For instance, even in an infinite future, there are free actions that I will never do, but which I will have an opportunity to do on infinitely many days. For instance, perhaps I will never sing Amazing Grace three minutes to midnight on a Tuesday, or drink wine at 7:12 am of a prime-numbered day (numbering, say, from the first day of eternal life), even though both of these are possible. Likely, I will never recite all of War and Peace in French, though I would be free to do so. But such “never” facts will always depend on future free actions. Thus, on the variety of open theism under discussion, God will never know these facts. He will always just know an increasing number of “never-yet” facts: Alex has never yet recited War and Peace in French, but maybe he will.

It seems harder to reconcile the existence of facts that God will never know with omniscience than the existence of facts that God does not yet know. If there are facts that God will never know, then there is an aspect of reality that is closed to God. That can’t be right.

It’s worse than that. On this version of open theism, not only are there truths that God never comes to know, but there are truths that God never comes to know but that he can know. Here is an example: Either today I don’t write a blog post or I never recite War and Peace in French (assuming that I won’t recite it). Since God will always know that I do write a blog post today, he won’t know this disjunction, or else he’d be able to figure out from it that I will never recite War and Peace in French. (Cf. this paper.)

This is an uncomfortable position.

Four grades of normative actuality

Here are four qualitatively different grades of the normative actuality of a causal power:

  1. Normative possession (zeroeth normative actuality): x has a nature according to which it should have causal power F. (An adult human who lacks sight still has normative possession of vision.)
  2. First normative actuality: x has a normal (for x’s nature) causal power F. (The human with closed eyes has first actuality of vision.)
  3. Second normative actuality: x exercises the normal causal power F. (The human who sees has second actuality of vision.)
  4. Full (third) normative actuality: x exercises the normal causal power F and achieves the full telos of the causal power. (The human who gains knowledge through seeing has full actuality of vision.)

What I call normative possession is close to Aristotelian first potentiality, but is not exactly the same. The newborn has first potentiality for speaking Greek—namely, she is such that eventually she can come to have the power of speaking Greek—but she does not have normative possession of speaking Greek, since human nature does not specify that one should be able to speak Greek. However, the newborn does have normative possession of language in general.

I think each of the four grades of normative actuality is non-instrumentally valuable, and that the grades increase in non-instrumental value as one goes from zero to three.

Grade zero can carry great value, even in the absence of higher grades. For instance, normative possession of the causal powers constitutive of rational agency makes one be a person (or so I say, following Mike Gorman). And it is very valuable to be a person. This may, however, be a special case coming from the fact that persons have a dignity that other kinds of things do not; maybe the special case comes from the fact that persons need to have a fundamentally different kind of form from other things. For other causal powers, grade zero doesn’t seem to carry much value. Imagine that you found out that (a) normal Neanderthals have the ability to run five hundred kilometers and (b) you are in fact a Neanderthal. By finding out these things, you’d have found out that you have normative possession of the ability to run 500km—but of course, you have no actual possession of that ability. The normative possession is slightly cool, but so what? Unless one has a higher grade of actuality of this ability, simply being the kind of thing that should have that ability does not seem very valuable. And the same is true for abilities more valuable than the running one: imagine that Neanderthals turn out to have Einstein-level mathematical abilities, but you don’t. It would be a bit cool to be of the same kind as these mathematical geniuses (maybe this is a little similar to how it’s cool for a Greek to be of the same nation as Socrates), but in the end it really doesn’t count for much.

Grade one is also valuable even in the absence of higher grades. It makes for the difference between health and impairment, and health is valuable. But I can’t think of cases where first normative actuality carries much non-instrumental value. Imagine that I know for sure that I am going to spend all my life with my eyes tightly closed (e.g., maybe I am hooked up to a machine that will kill me if I attempt to open them). It is objectively healthier that I have sight than that I do not. But it seems rational to sacrifice all of my sight for a slight increase in the acuity of touch or hearing, given that I can actually exercise touch and hearing (second or third actuality) while I can’t exercise sight. Even slight amounts of second or third normative actuality seem to trump first normative actuality.

Grade two seems quite valuable, even absent grade three. Here, examples can be multiplied. Sensory perception that does not lead to knowledge can still be well worth having. Sex is valuable even absent successful reproduction. Running on a treadmill can have a value even if it does not achieve locomotion. While it seems to be generally true that a great amount of first actuality can be sacrificed for a small amount of second actuality, this is not as generally true with second and third actuality. One might reasonably prefer to run two kilometers on a treadmill—even for the non-instrumental goods of the exercise of leg muscles—instead of running two meters on the ground.

All of the lower grades of normative actuality derive their value in some way from the value of full normative actuality. But full normative actuality does not always trump grade two. It seems to generally trump grade one. Grade zero is special: most of the time it does not seem to carry much value, but it does in the case where it constitutes personhood. (Maybe, though, the dignity of personhood shouldn’t be thought of in terms of value.)

Wednesday, January 10, 2018

The mystery of God

Suppose you have never heard music, and you are watching a video of a superb ballet, with the sound turned off. And then someone turns the sound on. You now know a dimension of the dance you wouldn’t have expected or thought of. It transforms your understanding of the ballet radically.

Similarly, but more radically, when we humans learned that the perfectly one God is three persons, we learned something that we would not have expected, something that not only we wouldn’t have thought of, but something that we would have likely denied is at all possible. It is something that should radically (in both the etymological and the common senses of the word) transform all of our understanding of God. Of course, what we learned turns out to be logically compatible with the doctrine of God’s unity, but that it was compatible is a part of the surprise.

I suspect that similar transformations of our understanding of God await in heaven. Doctrines that are related to our doctrinal understanding of God as the doctrine of the Trinity is to the unity and simplicity of God. Experiences that are radically different in kind from anything we have had.

But is it not plausible that God is such that any finite understanding of him is subject to such transformation? If so, then this gives us one way of countering the “eternal ennui” worry about heaven. For such transformations of our understanding of, and hence of our loving relationship with, God could occur for eternity then.

A complication in the stone argument

Here is the classic stone argument:
  1. If God can create a stone he can’t lift, there is something he can’t do—namely, lift the stone.

  2. If God cannot create a stone he can’t lift, there is something he can’t do—namely, create the stone.

  3. So, there is something God can’t do.

  4. So, there is no omnipotent God.

A standard way out of this, which I think is basically right, is that (3) is compatible with God’s being omnipotent as long as the thing God can’t do is metaphysically impossible.

But I want to note a rarely noted thing about the argument, which annoys me when I teach the argument to undergraduates because it is a red herring, but one that complicates the presentation.

The following is plausible:

  5. If God were to create a stone he can’t lift, there would be something he couldn’t do—namely, lift the stone.

But (1) does not follow from (5) without further assumptions.

One way to get around this issue is to weaken the conclusion of the argument to the claim that possibly there is something God can’t do. That might create trouble for God’s essential omnipotence. Of course, I do accept that God is essentially omnipotent. But it still weakens the conclusion. And it’s a nuisance in teaching to have to get into essential omnipotence when dealing with the argument.

Tuesday, January 9, 2018

Variety and ontology

A major part of the ontologist’s dream has always been to find a small number of fundamental categories—maybe one, maybe two or three or maybe ten—into which everything falls.

Aristotle says somewhere that the philosopher knows all things—in general terms. That’s the kind of knowledge the ontologist’s dream accomplishes. But I worry: isn’t there a deep hubris in thinking we can categorize fundamental reality? And aren’t we destroying the deep richness of reality by pushing it into a handful of categories?

Well, maybe not. After all, all books could be seen as finite sequences of a small number of symbols. (Recall the lovely argument in Plato’s Euthydemus that one can’t learn from books, because if you don’t know the alphabet, you can’t read, and if you know the alphabet, you already know all that is in the books, namely letters.) And yet among these arrangements—all of which are ontologically the same sort of thing—there are the Summa Theologiae, The Deluge, Hamlet, the Psalms, the best of the scientific literature… and the latest tweets from world leaders, too. One doesn’t destroy the richness of literature by noting that ontologically it’s all of a piece. Being all of a piece ontologically is compatible with great variation.

That said, I still have the worry. While there is great richness in literature, culture would be impoverished if there weren’t painting, sculpture, dance, etc. Similarly, even if there can be enormous richness among monads, their perceptions and their appetitions, wouldn’t reality be impoverished if monads, perceptions and appetitions were all there is?

Monday, January 8, 2018

The pastoral problem of double effect reasoning

As part of a just war, Alice drops a bomb on the enemy military headquarters. Next door to the enemy headquarters are the world headquarters of a corporation that Alice knows has been responsible for enormous environmental degradation, and the bomb will level the whole block. Alice finds it very difficult not to jump in glee at the death of the immoral CEO.

It would be murder, however, for Alice to drop the bomb in order to kill the CEO. It would still be murder even if she dropped the bomb in part in order to do so. But it’s hard for Alice not to be motivated by the death of the CEO, and hence Alice—who is deeply morally sensitive—finds it difficult not to feel guilty of murder.

There are two interrelated pastoral problems here. First, how can Alice avoid being a murderer—how can she avoid intending to kill the hated CEO? Second, if she succeeds in avoiding being a murderer, how can she avoid feeling like a murderer?

Reflecting on counterfactuals may help Alice.

  1. Would I still drop the bomb here if the military leaders were elsewhere and I could get away with it?

  2. Would I still drop the bomb here if the CEO were elsewhere but the military leaders were here?

If the answer to (1) is “yes” or the answer to (2) is “no”, she very likely is intending to kill the CEO. But even if the answer to (1) is “no” and that to (2) is “yes”, that does not prove that the CEO’s presence isn’t contributing to her intention. Perhaps the CEO’s presence isn’t enough to motivate her by itself, but it nonetheless contributes to her motivations. One could try to tease this apart through further counterfactuals.

  3. Is there a personal cost such that (a) if the CEO were elsewhere but the military leaders were here, I would not drop the bomb on account of the cost, but (b) if the CEO were here along with the military leaders, I would drop the bomb notwithstanding the cost?

A positive answer suggests that she is intending to kill the CEO. But the counterfactual (3) is hard to evaluate, and it is not clear that it is epistemically accessible to Alice.

Perhaps these counterfactuals would be more helpful:

  4. If I could aim the bomb in such a way that I would kill the military leaders but not the CEO, would I?

  5. If after dropping the bomb, I could call for an ambulance to save the CEO, would I?

Answers to these two questions seem imaginatively accessible. I think a positive answer to both questions is strong evidence that the CEO’s death is not intended. And since (4) and (5), unlike (3), are pretty accessible to Alice, they could help with the problem of not feeling like a murderer.

Interestingly, positive answers to (4) and (5) are not logically necessary for Alice not to be a murderer. Suppose Alice were callous and did not care either way about the CEO’s death. Then she wouldn’t be intending the CEO’s death—any more than she would be intending to make cracks in the sidewalk—but she wouldn’t go to any trouble to prevent his death.

Positive answers to (4) and (5) would indicate that Alice has on balance a negative attitude to the CEO’s death, despite uncontrollable feelings of glee. And it seems that to deal with the pastoral problem of double effect, what one needs is to have not just a neutral but a negative attitude to the evil. Of course, guilt at the CEO’s death may survive reflection on (4) and (5). But (4) and (5) could be a helpful step.

One writer on double effect said that for a double effect justification to apply one needs to do something to prevent or lessen the unintended evil. That kind of action could indeed help with the pastoral problem. But sometimes no action is possible—in that case, reflection on counterfactual action may help.

Still, I think even positive answers to (4) and (5) can leave a residual worry, especially in a scrupulous person. Alice might worry that she really does want the CEO dead, and while she would aim differently or call for an ambulance if she could, that would be out of duty rather than out of desire, and hence she still is intending the CEO to be dead. I think this is a mistaken worry. If she is thus moved by duty, then it seems that duty is structuring Alice’s intentions in a way that makes her not intend to kill the CEO—even if she uncontrollably rejoices at the immoral CEO’s death.

Counting and chance

A countably infinite number of people, including me, are about to roll fair indeterministic dice. What probability should I assign to rolling six?

Obviously, 1/6.

But suppose I describe the situation thus: “There are two equally sized groups of people: those who will roll six and those who won’t. How likely is it that I am in the first group rather than the second?” (After all, I know that infinitely many will roll six and infinitely many won’t, and that it’ll be the same infinity in both cases.) So why 1/6, instead of 1/2, or undefined?

Here’s what I want to say: “The objective chance of my rolling six is 1/6, and objective chances are king, in the absence of information posterior to the outcome.” Something like the Principal Principle should apply. And it should be irrelevant that there are infinitely many other people rolling dice.

If I say this, then I may have to deny both the self-sampling assumption and the self-indication assumption. For if I really consider myself to be a completely randomly chosen person in the set of die rollers, or in some larger set, in the self-indication cases, it seems I shouldn’t think it less likely that I rolled six than that I didn’t, since equal numbers did each.

It looks to me that we have two competing ways of generating probabilities: counting and objective chance. I used to think that counting trumped objective chance. Now I am inclined to think objective chance trumps counting, and counting counts for nothing, in the absence of objective chance.

Wednesday, January 3, 2018

Badness and deontological prohibitions

The following form of argument has some initial plausibility:

  1. Ordinarily, action type A is no better than action type B.

  2. So, if there is no deontological prohibition against A, there is no deontological prohibition against B.

But here’s an interesting fact. One can have pairs of action types A and B such that:

  1. under ordinary circumstances, A is worse than B, but

  2. there is a deontological prohibition against B but not against A.

For instance, let A be a train engineer’s choosing not to brake a slow moving train ahead of a section of track on which there are ten innocents tied up. Let B be the train engineer’s shooting one innocent dead (knowingly, without divine permission, etc.).

Under ordinary circumstances, A is worse than B. But if Alice reliably informs the train engineer that she will murder fifty people if the engineer brakes, the engineer is permitted (and probably obligated) to refrain from braking. Hence there is no deontological prohibition against A. But if Alice informs the train engineer that she will murder fifty people if the engineer refuses to shoot the innocent, the engineer must still refuse. There is a deontological prohibition against B.

So, while there is some correlation between ordinary worseness and deontological prohibition, that correlation has exceptions.