One objection to some solutions to the problem of evil, particularly to sceptical theism, is that if there are such great goods that flow from evils, then we shouldn’t prevent evils. But consider the following parable.
I am an air traffic controller and I see two airplanes that will collide unless they are warned. I also see our odd security guard, Jane, standing around and looking at my instruments. Jane is super-smart and very knowledgeable, to the point that I’ve concluded long ago that she is in fact all-knowing. A number of interactions have driven me to concede that she is morally perfect. Finally, she is armed and muscular so she can take over the air traffic control station on a moment’s notice.
Now suppose that I reason as follows:
- If I don’t do anything, then either Jane will step in, take over the controls and prevent the crash, or she won’t. If she does, all is well. If she doesn’t, that’ll be because in her wisdom she sees that the crash works out for the better in the long run. So, either way, I don’t have good reason to prevent the crash.
This is fallacious as it assumes that Jane is thinking of only one factor, the crash and its consequences. But the mystical security guard, being morally perfect, is also thinking of me. Here are three relevant factors:
- C: the value of the crash
- J: the value of my doing my job
- p: the probability that I will warn the pilots if Jane doesn’t step in.
Here, J>0. If Jane foresees that the crash will lead to goods on balance in the long run, then C>0; if common sense is right, then C<0. Based on these three factors, Jane may be calculating as follows:
- Expected value of non-intervention: pJ+(1−p)C
- Expected value of intervention: 0 (no crash and I don’t do my job).
Let’s suppose that common sense is right and C<0. Will Jane intervene? Not necessarily. If p is sufficiently close to 1, then pJ+(1−p)C>0 even if C is a very large negative number. So I cannot infer that if C<0, or even if C<<0, then Jane will intervene. She might just have a lot of confidence in me.
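The arithmetic here is worth making concrete. Below is a toy sketch of Jane's expected-value comparison; the particular numbers (J=1, C=−100, the values of p) are my illustrative assumptions, not anything the parable fixes.

```python
# Toy sketch of Jane's expected-value comparison (illustrative numbers only).
#   J: value of my doing my job (J > 0)
#   C: value of the crash (common sense: C < 0)
#   p: probability that I warn the pilots if Jane does not step in

def ev_non_intervention(p, J, C):
    """Expected value if Jane stays out: with probability p I do my job
    (value J, no crash); with probability 1-p the crash happens (value C)."""
    return p * J + (1 - p) * C

EV_INTERVENTION = 0  # Jane steps in: no crash, but I don't do my job

# Even with a hugely negative C, a p close enough to 1 makes
# non-intervention come out positive, hence better than intervening.
high_confidence = ev_non_intervention(0.999, J=1, C=-100)  # roughly 0.899, positive
low_confidence = ev_non_intervention(0.5, J=1, C=-100)     # -49.5, negative
```

So whether Jane intervenes turns not just on how bad C is, but on how much confidence she has in me.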
Suppose now that I don’t warn the pilots, and Jane doesn’t either, and so there is a crash. Can I conclude that I did the right thing? After all, Jane did the right thing—she is morally perfect—and I did the same thing as Jane, so surely I did the right thing. Not so. For Jane’s decision not to intervene may be based on the fact that her intervention would prevent me from doing my job, while my own intervention would do no such thing.
Can I conclude that I was mistaken in thinking Jane to be as smart, as powerful or as good as I thought she was? Not necessarily. We live in a chaotic world. If a butterfly’s wings can lead to an earthquake a thousand years down the road, think what an airplane crash could do! And Jane would take that sort of thing into account. One possibility is that Jane saw that it was on balance better for the crash to happen, i.e., C>0. But another possibility is that she saw that C<0, but that it wasn’t so negative as to make pJ+(1−p)C come out negative.
Objection: If Jane really is all-knowing, her decision whether to intervene will be based not on probabilities but on certainties. She will know for sure whether I will warn the pilots or not.
Response: This is complicated, but what would be required to circumvent the need for probabilistic reasoning would be not mere knowledge of the future, but knowledge of conditionals of free will that say what I would freely do if she did not intervene. And even an all-knowing being wouldn’t know those, because there aren’t any true non-trivial such conditionals.
Standard sceptical theism focuses on our ignorance of the realm of values. I want to suggest a different kind of sceptical response to an evil E. This response identifies a good G such that it is clear that the occurrence of a good relevantly like G logically requires the permission of an evil relevantly like E; the scepticism instead lies in the claim that we have on balance no significant evidence against the conjunction:
1. G obtains, and
2. G outweighs E, and
3. there is no alternative good G* dissimilar from G that doesn’t require anything nearly as bad as E and that would be more or approximately equally worth having.
If the triple conjunction holds then G justifies E, and so if we have no significant evidence against the triple conjunction, we have no significant evidence that E is unjustified. (Yeah, one can dispute my implicit transfer principle, but something like that should work.)
And it’s fairly easy to generate examples of G that do the job for particular E. Take Rowe’s case of the horrendous evil inflicted on Sue. Let G be Sue’s having forgiven E’s perpetrator. We then have no significant evidence against the conjunction (1)–(3). Granted, we may have significant evidence that G did not obtain in this life, though even that is probably a stretch, but we have on balance no significant evidence that G didn’t obtain in an afterlife. My intuitions strongly favor (2)–there is a way in which forgiveness seems to defeat evil–but in any case we have no significant evidence against (2). As for (3), granted, there are many great moral goods that don’t require anything nearly as bad as E, but I don’t think we have on balance significant evidence that these goods are roughly as good as or better than G. Now, of course, it can happen (whether due to a logical contradiction or dwindling probabilities) that we have no significant evidence against any conjunct but do have significant evidence against the conjunction. But I don’t think that happens here.
Let’s grant that if sceptical theism is true, then for any evil E, we have no reason to think that the prevention of E will lead to an overall better result than letting E happen, so the fact that we do not see God preventing E is not evidence against the existence of God, since we have no more reason to think that God would prevent E than that he would not. The standard objection is that then we have no reason to prevent E, since we have no reason to think that the overall result will be better if we prevent E.
This objection is mistaken. Suppose I offer you a choice of two games–you must play the one or the other. Each game lasts forever. In Game A, you get pricked in the foot with a thorn on the first move. In Game B, nothing happens to you on the first turn. And that’s all you know. (You don’t know if God exists or anything like that.) Which game should you choose?
You should probably say you have the same probability of doing better by playing Game A as by playing Game B. Why? Well, let Games A* and B* be Games A and B minus their first steps. You know nothing about Games A* and B*. (You don’t know if the first step is a sign of what it is to come, or maybe the sign of the opposite of what is to come, or completely uncorrelated with what comes later.) Now given a pair of infinitely long games about which you know nothing, the overall difference in outcome utility can be minus infinity, plus infinity, undefined, or finite. The likelihood that this difference would be within a pinprick is zero. But if the difference is not within a pinprick, then A is better than B iff A* is better than B*, and B is better than A iff B* is better than A*. Since we know nothing about A* and B*, we should not say that it’s more likely that A* is better than B*, nor the other way around. So, the probability that A would give a better result than B is the same as the probability that B would give a better result than A. (This calculation assumes Molinism. Without Molinism, it only works for deterministic games, or as an intuition-generator.)
Now if the reasoning in the anti-sceptical-theism argument is sound, you have no reason to choose Game B over Game A, since you have equal probabilities of doing better with A as with B. But in fact, despite this equality, you should choose Game B. For since you know nothing about what Games A* and B* are like, the expected value of Games A* and B* should be the same–even if it’s infinite, or even if it’s undefined. (Think of doing this with non-standard arithmetic.) So you have two options: first a pinprick, and then something with a certain (perhaps undefined) expected value; or just something with that same (perhaps undefined) expected value. And of course you should choose the latter–you should avoid the pinprick. The lesson here is that while beliefs are guided by probabilities, action is guided by expected values.
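The comparison between the two games can be sketched numerically, at least for the special case where the continuations A* and B* share a common finite expected value V. (The argument in the text also covers infinite and undefined values, which this toy sketch does not capture; the value −1 for the pinprick is my illustrative assumption.)

```python
# Toy sketch of the Game A / Game B comparison, restricted to the case
# where the continuations A* and B* have a shared, finite expected value V.
PINPRICK = -1  # assumed disvalue of the thorn-prick on Game A's first move

def ev_game_A(V):
    """Game A: pinprick on the first move, then the continuation A*."""
    return PINPRICK + V

def ev_game_B(V):
    """Game B: nothing on the first turn, then the continuation B*."""
    return 0 + V

# Whatever the shared continuation value, B beats A by exactly the pinprick,
# even though (per the argument above) P(A does better) = P(B does better).
for V in (-1000, 0, 42):
    assert ev_game_B(V) - ev_game_A(V) == 1
```

This is just the point that probabilities of doing better can be symmetric while expected values are not: the decision tracks the latter.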