Sceptical theism and the undercuts-our-moral-life objection
February 17, 2009 — 21:12

Author: Alexander Pruss  Category: Problem of Evil  Comments: 15

Let’s grant that if sceptical theism is true, then for any evil E, we have no reason to think that the prevention of E will lead to an overall better result than letting E happen, so the fact that we do not see God preventing E is not evidence against the existence of God, since we have no more reason to think that God would prevent E than that he would not. The standard objection is that then we have no reason to prevent E, since we have no reason to think that the overall result will be better if we prevent E.

This objection is mistaken. Suppose I offer you a choice of two games–you must play the one or the other. Each game lasts forever. In Game A, you get pricked in the foot with a thorn on the first move. In Game B, nothing happens to you on the first move. And that’s all you know. (You don’t know if God exists or anything like that.) Which game should you choose?

You should probably say you have the same probability of doing better by playing Game A as by playing Game B. Why? Well, let Games A* and B* be Games A and B minus their first steps. You know nothing about Games A* and B*. (You don’t know if the first step is a sign of what it is to come, or maybe the sign of the opposite of what is to come, or completely uncorrelated with what comes later.) Now given a pair of infinitely long games about which you know nothing, the overall difference in outcome utility can be minus infinity, plus infinity, undefined, or finite. The likelihood that this difference would be within a pinprick is zero. But if the difference is not within a pinprick, then A is better than B iff A* is better than B*, and B is better than A iff B* is better than A*. Since we know nothing about A* and B*, we should not say that it’s more likely that A* is better than B*, nor the other way around. So, the probability that A would give a better result than B is the same as the probability that B would give a better result than A. (This calculation assumes Molinism. Without Molinism, it only works for deterministic games, or as an intuition-generator.)
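
To make the symmetry explicit (the notation anticipates the comments below: V > 0 is the disvalue of the pinprick, U(A*) and U(B*) are the unknown utilities of the continuations, and D = U(A*) - U(B*)): U(A) = U(A*) - V and U(B) = U(B*), so U(A) > U(B) iff D > V, and U(B) > U(A) iff D < V. Since the probability that the continuations differ by no more than a pinprick is zero, P(|D| <= V) = 0, and hence P(D > V) = P(D > 0) and P(D < V) = P(D < 0). Our ignorance about A* and B* is symmetric, so P(D > 0) = P(D < 0), and therefore P(U(A) > U(B)) = P(U(B) > U(A)).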

Now if the reasoning in the anti-sceptical-theism argument is sound, you have no reason to choose Game B over Game A, since you have equal probabilities of doing better with A as with B. But in fact, despite this equality, you should choose Game B. For since you know nothing about what Games A* and B* are like, the expected value of Games A* and B* should be the same–even if it’s infinite, or even if it’s undefined. (Think of doing this with non-standard arithmetic.) So you have two options: first a pinprick, and then something with a certain (perhaps undefined) expected value; or just something with that same (perhaps undefined) expected value. And of course you should choose the latter–you should avoid the pinprick. The lesson here is that while beliefs are guided by probabilities, action is guided by expected values.
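
In the same notation, the expected-value comparison is just: E[U(A)] = -V + E[U(A*)] = x - V and E[U(B)] = E[U(B*)] = x, so E[U(B)] exceeds E[U(A)] by V, even though, as above, P(U(A) > U(B)) = P(U(B) > U(A)).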

Next, let’s consider an analogy to the sceptical theism case. Suppose we observe Fred beginning to play. We are told by a friend that p, where p is the proposition that Fred is an omniscient and perfectly self-interestedly rational being, who in particular knows exactly what the outcomes of every step in Games A and B would be. (Maybe the games are deterministic, or maybe Fred has middle knowledge.) We observe that Fred chooses Game A. He gets the pinprick. And at that point we stop being able to observe. Now, if we knew that Fred chose the game that overall paid off less well, we would have good reason to deny p. Does the fact that Fred chose the pinprick give us any evidence against p? No! For the probability that Game B is better than Game A is no greater than the probability that Game A is better than Game B. So we have, in fact, no evidence that Fred chose the game that paid off less well, and no evidence against p. Nonetheless if we were asked to choose between the games, without having been able to observe Fred’s choice, we would have been irrational to choose Game A.

The same is true if we are dealing with non-self-interested rationality. Suppose that in Game A, an innocent person loses a leg on the first move, and in Game B, nothing happens on the first move. We have good moral reason to choose Game B. Nonetheless, if we know nothing more about the two games, we have no reason to think that Game B is more likely to result in a better overall outcome than Game A. But the fact that Patricia chooses Game A is no evidence against the claim that she is omniscient and a maximizer of the good. Nor can we say anything Roweian like: “We have no reason to think there is a reason to prefer Game A* over Game B*, so we should assume that there is no reason to prefer A* over B*, in which case Patricia is not both omniscient and a maximizer of the good.” For that is mistaken.

The same reasoning shows that even if our world is deeply chaotic, so that any tiny event might have enormous consequences (I brake to avoid hitting a pedestrian, and that causes an earthquake in Japan next week, say), still we will have reason to prevent evils.

I should make one qualification. When I said that the probability that Game A is better than Game B is no less than the probability that B is better than A, one might object that the probability that B is better is infinitesimally greater. That may in fact be right. But the sceptical theist can agree that it is infinitesimally less likely that God would permit the evil E than that he would prevent it. For while that admission will make the inductive argument from evil decrease the probability of the existence of God, it will only decrease it infinitesimally. And since we have in fact only observed a finite number of evils, even if one adds up all the decreases from all the evils, the total decrease will still be literally infinitesimal. Hence, if before the argument from evil the probability of God’s existence was non-infinitesimally greater than 0.5 (say, it was 0.51), it will still be non-infinitesimally greater than 0.5 (if it originally was 0.51, it will now be 0.51-epsilon, where epsilon is an infinitesimal, and 0.51-epsilon is greater than, say, 0.509999), since an infinitesimal is smaller than every standard positive real number. The only time a finite number of infinitesimal disconfirmations could make a real difference is if the initial probability was exactly 0.5, or within an infinitesimal of 0.5, and even then the difference shouldn’t be that significant, I think.
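
To spell out the arithmetic of that last point: if each of the finitely many observed evils lowers the probability by an infinitesimal, the total decrease epsilon is a finite sum of infinitesimals and so is itself infinitesimal, and hence 0.51 - epsilon > r for every standard real r < 0.51 (e.g., r = 0.509999).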

Comments:
  • Anonymous

    Replace “pinprick” and “an innocent person loses a leg” with “a trillion years of nonstop hellish agony for everybody”.
    It is now clearly an understatement to say that Fred, having chosen Game A, is probably not an omniscient and perfectly self-interestedly rational being. And likewise it is now clearly an understatement to say that Patricia, having chosen Game A, is probably not an omniscient maximizer of the good.
    But, if I’m not mistaken, consistency demands that you stick to your guns, contending that their choice of Game A provides no evidence at all. So hasn’t something gone seriously wrong?

    February 18, 2009 — 3:28
  • Mike Almeida

    Nonetheless if we were asked to choose between the games, without having been able to observe Fred’s choice, we would have been irrational to choose Game A.
    Alex, nice post! I’m worried about the remark I excerpted above. It looks like you are changing the criteria for choice when you have both sequences, A & B, having (for all we know) the same expected value and yet urging that we should not choose A. It is a little like saying, well, if you wanted to make the moral choice, you should consider the sequences A and B impartially. So, you must consider the interests of everyone affected in each sequence A and B equally. But if you want to slightly favor yourself, then you should give slightly greater weight to your own interests in each of the sequences. Given the second standard, you should choose B. Given the first standard, you should be indifferent.

    February 18, 2009 — 7:30
  • Mike:
    I am not sure I understand your remark. In the original setting, where the game concerns only you, you should choose B. If you like, we can supplement my initial story with the claim that you’re the only person there is, or the only person affected by the game.
    In the modified story, where it is someone else who is at stake, you should also choose B.
    So I don’t see the shift in standards–but I guess I am missing something.
    Anonymous:
    Yeah, it’s counterintuitive. But since you know nothing at all about Games A* and B*, for all you know in step 17 of B*, a trillion innocent people suffer 10^100 years of hellish agony.

    February 18, 2009 — 7:40
  • Mike Almeida

    If you like, we can supplement my initial story with the claim that you’re the only person there is, or the only person affected by the game.
    I’m still not seeing why I should choose B in the case now under discussion. If I’m concerned only about my own well-being, and A and B are, relative to that standard, equally good, then why choose B? It is as though you want A and B to equally satisfy the standard and at the same time for B to be preferable to A relative to the same standard.

    February 18, 2009 — 8:25
  • But they’re not quite the same. Here’s one way to see this. The expected value of choosing A is -V + E[U(A*)], where -V is the value of the pinprick, U(A*) is the utility of A*, and E[] is expected value (relative to appropriate epistemic probabilities). The expected value of choosing B is E[U(B*)]. But the salient information we have on A* and B* is exactly the same (unless we have some information that the first result is likely to be followed by similar ones, or something like that; but inductive reasoning like this fails in games, because games are sometimes set by tricksters). Thus, E[U(A*)]=E[U(B*)]. Let x = E[U(A*)]. Therefore, we should opt for B, as its expected value is x, while the expected value of A is x-V. Now, this may seem odd if x turns out to be infinite. But then we pull in some non-standard analysis. And, anyway, if we have no information whatsoever on A* and B*, it seems reasonable to suppose that the expected value of the nth step of the game, where n is more than 1, is zero, and then x = 0.

    February 18, 2009 — 9:13
  • Mike Almeida

    Ok, in the post you say,
    The standard objection is that then we have no reason to prevent E, since we have no reason to think that the overall result will be better if we prevent E.
    Right. This is one good response to skeptical theism, I think. But you add that there are games A and B such that,
    You should probably say you have the same probability of doing better by playing Game A as by playing Game B.
    So, clearly I have NO REASON TO CHOOSE B THAT APPEALS TO HOW GOOD IT WILL BE FOR ME. I mean, you have ruled out that sort of criterion of choice by stipulating that A and B are equivalent relative to that standard. But now you seem to be saying that B is better than A,
    Therefore, we should opt for B, as its expected value is x, while the expected value of A is x-V.
    The proper rejoinder here is that x = x-V, according to the initial post, so pointing out that the value of A = x-V and the value of B = x, makes no difference to what I ought to do.

    February 18, 2009 — 9:56
  • Mike:
    I didn’t say that x = x – V in the initial post. 🙂 Now maybe I said something that entails it, but I am not aware of having done so.
    Maybe you’re thinking that x is infinite, and so x = x – V. But saying that x = x – V is a bad way of handling infinities in prudential calculations. One way to see that x = x – V is a bad way of handling infinities is this. Suppose that George is promised eternal bliss (or eternal damnation) after he dies. If we use the x = x – V rule, we get the absurd conclusion that George has no self-interested reason to refrain from intense self-torture for the rest of his life, since the utility of his life is plus infinity no matter what. So, we need a calculus of infinities on which x – V is strictly less than x. We can either do this informally, or we can formalize it with non-standard analysis.
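    For illustration only, here is a toy sketch of such a calculus (my sketch, not full non-standard analysis): represent a utility as a*omega + b, with omega an infinite unit, and compare lexicographically, so that subtracting a finite amount from an infinite utility really does make it smaller.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Utility:
        """Toy non-standard utility a*omega + b, with omega an infinite unit."""
        omega: float   # coefficient of the infinite unit
        finite: float  # ordinary finite part

        def __sub__(self, v: float) -> "Utility":
            # Subtracting a finite value only touches the finite part.
            return Utility(self.omega, self.finite - v)

        def __lt__(self, other: "Utility") -> bool:
            # Lexicographic comparison: infinite part first, then finite part.
            return (self.omega, self.finite) < (other.omega, other.finite)

    bliss = Utility(omega=1.0, finite=0.0)        # eternal bliss
    bliss_minus_torture = bliss - 1_000_000.0     # the same, preceded by a lifetime of torture
    assert bliss_minus_torture < bliss            # x - V < x, so George has reason to refrain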
    A related way of looking at the issues is that infinitesimal differences in probability in practice make no difference for belief but do make a difference for prudence when dealing with infinitely long sequences.

    February 18, 2009 — 10:08
  • Here’s another way of seeing the point. Suppose that you’re offered a choice between the following two games. If you choose Game A, you will be tortured for ten years. If you choose Game B, you will have a nice ten years. After that, in both cases, every ten years a coin will be tossed. If it’s heads, you will be tortured for ten years. If it’s tails, you’ll have a nice ten years.
    It is not more probable (except maybe by an infinitesimal) that you’ll be better off in the long run choosing A rather than choosing B.
    But you should still choose A. One way to see that you should choose A is this. The situation is rationally equivalent to the following. First you learn that starting from ten years from now, every ten years for eternity a coin will be flipped, and if it’s heads, you’ll be tortured for ten years, and if it’s tails, you’ll have a nice ten years. OK, this is a pretty bad prospect. Now you’re asked: “Would you like to be tortured for the next ten years, or not?” Surely the right answer is still “No”, regardless of your grim future. (This is pretty close to–but not exactly–a domination argument.)

    February 18, 2009 — 10:17
  • I should, of course, have said: “It is not more probable (except maybe by an infinitesimal) that you’ll be better off in the long run choosing B rather than choosing A.
    “But you should still choose B. One way to see that you should choose B is this.”
    As it was, I sounded like a masochist.
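    With the labels thus corrected, one way to spell out why this is close to a domination argument (assuming the later coin tosses do not depend on which option you choose): conditional on any sequence of toss outcomes, the two choices agree from year ten onward, so every finite running total of utility satisfies U_n(B) = U_n(A) + V, where V > 0 is the gap between a nice first decade and a tortured one. B comes out ahead, decade by decade, on every possible future, even though the grand totals, being infinite or undefined, leave neither choice more probably better in the long run (except perhaps by an infinitesimal).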

    February 18, 2009 — 10:20
  • Mike Almeida

    It is not more probable (except maybe by an infinitesimal) that you’ll be better off in the long run choosing A rather than choosing B. But you should still choose A.
    I don’t get it. First you tell me that it makes no difference whether I choose A or B in the long run. I know I am in this for just that, the long run. So how could it be more rational to choose A?
    I grant you there is an intuitive argument that appeals to preferring the temporally nearer goods. But it is not rational to prefer goods just because they are nearer in time.

    February 18, 2009 — 10:53
  • Mike,
    But it does make a difference in the long run. The expected values differ by V. It’s just that the expected values are, I expect, not finite numbers.
    Alex

    February 18, 2009 — 11:31
  • Mike Almeida

    Can you make these two true together, please. I think you’re saying them both.
    1. But it does make a difference in the long run. The expected values differ by V.
    2. It is not more probable (except maybe by an infinitesimal) that you’ll be better off in the long run choosing A rather than choosing B. But you should still choose A.

    February 18, 2009 — 14:40
  • Yes, (1) and (2) can be true together.
    Here is a case where (1) and (2) are true together, though this case is not analogous to the evil case. Game A: 1/100 chance of winning a million dollars, and 99/100 chance of no effect. Game B: 99/100 chance of winning a dollar, and 1/100 chance of no effect. Then, probably you’ll be better off choosing Game B–most likely by choosing Game A, you’ll get nothing, and by choosing Game B, you’ll get a dollar. But you should choose Game A, because the expected value of A ($10,000.00) is much bigger than that of B ($0.99).
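    A quick simulation of this example (an illustration only, assuming the payoffs of the two games are independent of one another):
    import random

    random.seed(0)
    N = 1_000_000

    # Game A: 1/100 chance of $1,000,000, otherwise nothing.
    a_payoffs = [1_000_000.0 if random.random() < 0.01 else 0.0 for _ in range(N)]
    # Game B: 99/100 chance of $1, otherwise nothing.
    b_payoffs = [1.0 if random.random() < 0.99 else 0.0 for _ in range(N)]

    # Expected values: roughly $10,000 for A versus $0.99 for B.
    print(sum(a_payoffs) / N, sum(b_payoffs) / N)

    # Yet B's payoff beats A's about 98% of the time (0.99 * 0.99).
    print(sum(b > a for a, b in zip(a_payoffs, b_payoffs)) / N)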

    February 18, 2009 — 15:12
  • Mike Almeida

    This is a strange sense of ‘probably you’ll be better off’. First, it conflicts with maximizing expected utility. It would recommend that I take the gamble [.9($1), .1(0)], over the gamble [.5($100), .5(0)], at a cost in each case of $.02, since in the former I’m probably better off. That is way too conservative. And even in cases where expected utility is the same, it isn’t clear how I’m supposed to be ‘probably better off’ in taking, say, a .9($5) over a .5($9). I can’t see how that course recommends itself over the (probabilistically) less conservative, but value-wise more conservative, course of taking a slightly larger risk for a bigger payoff. In the first case you might say “at least I’m pretty sure I’ll get something, however small the return”; in the second case you might hear “at least I’m sure I’ll get a good return, however small the chance”. One is as rational as the other.
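    (For reference, the arithmetic behind these examples: the conservative gamble nets .9($1) - $.02 = $.88 in expectation against .5($100) - $.02 = $49.98 for the riskier one, so the ‘probably better off’ rule picks an option whose expected value is more than fifty times smaller; and .9($5) = $4.50 = .5($9), so expected utility is silent between the last pair.)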

    February 19, 2009 — 7:25
  • Mike:
    Suppose Molinism, or suppose that all probabilities are epistemic and determinism holds.
    Let U(A) and U(B) be random variables equal to what in fact would be the long term utility of A and B respectively. Let a = P(U(A) > U(B)) and b = P(U(B) > U(A)). My claim was that neither a is larger than b, nor b is larger than a, except perhaps by an infinitesimal. (I don’t just say that a=b, because I want to leave open the possibility that both are undefined.) This is compatible with E[U(B)] > E[U(A)].
    Now, if we are judging whether an allegedly omniscient being chose well, what is relevant are the relative values of a and b. But if we are judging whether someone with limited knowledge has chosen rightly, it is expectations (or, more precisely, conditional expectations given her knowledge) that are at issue.
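    A finite toy illustration of how the two can come apart (my own example, chosen so that all the expectations are well defined): let U(A*) and U(B*) be independent and uniformly distributed over [0, M] for some huge M, with U(A) = U(A*) - V and U(B) = U(B*). Then E[U(B)] - E[U(A)] = V no matter how large M is, while b - a comes to roughly 2V/M, which shrinks toward zero as M grows. So the gap between a and b can be negligible (infinitesimal in the limit) even while the gap between the expectations stays fixed at V.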

    February 19, 2009 — 8:28