Almeida & Oppy (2003) argue that if the considerations deployed by skeptical theists are sufficient to undermine the evidential argument from evil, then those considerations are also sufficient to undermine inferences that play a crucial role in ordinary moral reasoning. They consider a specific apparent evil that one could easily prevent, and they reason:
“Plainly, we should also concede by parity of reason that, merely on the basis of our acceptance of ST1-ST3, we should insist that it is not unlikely that there is some good which, if we were smarter and better equipped, we could recognize as a reason for our not intervening to stop the event. That is, our previous concession surely forces us to allow that, given our acceptance of ST1-ST3, it is not unlikely that it is for the best, all things considered, if we do not intervene. But, if we could easily intervene to stop the heinous crime, then it would be appalling for us to allow this consideration to stop us from intervening. Yet, if we take the thought seriously, how can we also maintain that we are morally required to intervene? After all, as a result of our acceptance of ST1-ST3, we are allegedly committed to the claim that it is not unlikely that it would be for the best, all things considered, if we did not do so.” (506)
I don’t think this is right. Consider the following analogy:
Suppose Sam is the president of Acme Anvil Company. Sam discovers that some systemic abuse is occurring in his company (anvils falling from the sky…) and he has the power to stop it. Yet Sam doesn’t stop it, because he wants to see how his mid-level managers respond once they discover it. The mid-level managers discover the abuse and then reason: “Well, Sam knows about this and he’s doing nothing. So there’s probably a reason he has that justifies his not preventing this. So we have a reason not to prevent this.”
Intuitively, this is bad reasoning on the part of the mid-level managers. They should prevent the abuse even though they know that Sam knows about it and has the power to prevent it. Whatever Sam’s reasons are, they don’t carry over to reasons for the mid-level managers.
The goal of this post is to lay out various skeptical theistic theses. Skeptical theism is the position that we should be leery of our ability to limn the limits of God’s reasons for permitting some cases of horrendous evil. Bergmann casts skeptical theism as responding to Rowe’s noseeum inference from (P) No good we know of justifies God in permitting E1 and E2 (the Bambi and Sue cases) to (Q) No good at all justifies God in permitting E1 and E2. From (Q) one deduces that there’s no God (~G).
Here’s a list of various skeptical theistic theses, in order from strongest to weakest.
First group: evidential irrelevance
1. Necessarily, for any evil, P(G|e)=P(G).
(1) claims that, necessarily, evil is evidentially irrelevant to the existence of God. (1) is clearly subject to counterexample: let e be the proposition that a trillion sentient creatures suffer endless torment. Skeptical theism need not be committed to denying that this would be evidence against theism.
2. For any evil, necessarily, P(G|e)=P(G).
(2) restricts the evils to evils that occur in the actual world and claims that they are such that necessarily they are evidentially irrelevant to the existence of God.
3. For any evil, P(G|e)=P(G).
(3) drops the embedded necessity operator. Depending on how one understands the nature of the P function and the nature of evil, (2) and (3) could be equivalent.
4. For E1 and E2 (and similar evils), P(G|E1&E2)=P(G).
This further restricts the evils in our world to those like the Bambi and Sue cases. One advantage of this restriction is that it allows skeptical theism to be viewed as a special case defense and not a general strategy defense. (Also, we can add back in the necessity operator to get further distinctions here).
Second group: Relevance but not significance (my gloss: “evil isn’t a game changer”)
An evil is a game-changer if it can tip the balance of evidence in favor of atheism or agnosticism. A no-game-changer version of skeptical theism says that while evil can detract from the probability of God, it can’t be the proverbial straw that broke the camel’s back. I shall represent the no-game-changer thesis by using ‘≈’, which indicates that the probabilities are closely similar.
5. Necessarily, for any evil, P(G|e)≈P(G).
6. For any evil, necessarily, P(G|e)≈P(G).
7. For any evil, P(G|e)≈P(G).
8. For E1 and E2 (and similar evils), P(G|E1&E2)≈P(G).
To undermine Rowe’s inference, all Bergmann and company need is (8). Thus the skeptical theist can easily recognize that there are many evils that could occur that would significantly detract from the probability of theism. Also, (8) is interesting because it allows skeptical theism to be viewed as a special-case defense that can be run along with other defenses–the free will defense, the soul-making defense, the value of natural laws, etc.
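The difference between the two groups of theses can be made vivid with a toy Bayesian update. This is only an illustrative sketch (the numbers are made up, and ‘posterior’ is just a hypothetical helper name): evidential irrelevance (theses 1–4) corresponds to e being exactly as likely given G as given ~G, while the no-game-changer theses (5–8) correspond to the likelihoods being merely close.

```python
def posterior(prior, p_e_given_G, p_e_given_notG):
    """Bayes update: compute P(G|e) from P(G), P(e|G), and P(e|~G)."""
    num = p_e_given_G * prior
    return num / (num + p_e_given_notG * (1 - prior))

# Irrelevance (theses 1-4): if e is exactly as likely given G as given ~G,
# conditioning on e leaves P(G) unchanged.
assert posterior(0.5, 0.2, 0.2) == 0.5

# No game-changer (theses 5-8): if the likelihoods are merely close,
# the posterior stays close to the prior -- e detracts from P(G)
# without tipping the balance.
close = posterior(0.5, 0.19, 0.2)
assert close < 0.5 and abs(close - 0.5) < 0.02
```

The sketch makes clear why (8) is weak enough to combine with other defenses: it only constrains how far certain specific evils can move the probability, not whether they move it at all.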
Aficionados of the fine-tuning argument will be familiar with the normalizability problem presented by the McGrews and Vestrup in their (2001) Mind article. The normalizability problem is that one cannot make sense of probabilities within an infinite space of possibilities in which each possibility is equi-probable. Suppose, for illustration, that there is a lottery on the natural numbers. For each natural number it’s possible that it wins, but no natural number has any greater chance of winning than any other. If we assign each natural number some very small finite chance of winning, then the probabilities over the total space of possibilities sum to more than 1 (which is to say that the space of possibilities isn’t normalizable). Because talk of probabilities makes sense only if the probabilities over the total outcome space sum to 1, we can’t make sense of probabilities in this case. One move here is to deny countable additivity, the claim that you can sum probabilities over a countably infinite space. Another move is to introduce infinitesimals to recover a positive probability without denying countable additivity. Yet another move is to hold that the space of possibilities is uneven in terms of probabilities: the basic idea is that the probability distribution over the infinite range is curved rather than flat. I don’t want to talk about any of these moves. Instead I want to focus on a curious result that arises from the normalizability problem. Hence the title of the post: the Normalizability Problem problem (or, the NP problem).

To begin, let’s step back a bit and ask why the fine-tuning argument is a fairly recent newcomer among the catalog of arguments for theism. The basic idea is that prior to the scientific developments at the beginning of the 20th century, the universe as a whole was conceived to be too vague to support any arguments from its nature.
One couldn’t sensibly talk about specific properties of the universe as a whole, and thus there was no sense to be made of the universe as a whole being fine-tuned. But all that changed with the discovery that the universe is expanding and that the initial state of the universe must have had very specific properties and was governed by specific laws with various mathematical constants. Thus it became appropriate to ask why the initial conditions of the universe and the constants of the laws had the specific values they in fact had. These developments gave rise to the fine-tuning argument, and also to much concern among physicists to find a more fundamental theory that would remove some of the improbability that just this ensemble of conditions and constants occurs. In short, it looks like there was a significant change in our epistemic position vis-à-vis the nature of the universe as a whole. Prior to the scientific developments of the 20th century, the nature of the universe as a whole was too vague to give rise to probability intuitions; after those developments, it no longer was.
Now for the NP problem. Suppose some 18th-century mathematician, advanced for his age, presented the following a priori argument that the nature of the universe could never be used to evoke probability considerations. Either the universe as a whole is too vague to be the proper object of thought or it’s not. If it’s too vague, then the nature of the universe can’t be used in probability arguments. But if the universe isn’t too vague, then we must have some grip on the universe having specific conditions and/or laws with various fundamental constants. In this latter case we still can’t appeal to the nature of the universe to evoke probability considerations, because the space of possibilities for the conditions and constants is infinite (and seems to be equi-probable). So no matter how you look at it, the universe as a whole can’t be used to evoke probability considerations. That’s the NP problem. It looks like it’s simply too strong, because it provides the basis for an a priori argument that the nature of the universe can never be used to evoke probability considerations.
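The lottery illustration behind the normalizability problem can be sketched in a few lines. This is just an illustrative computation (the helper name `tickets_to_exceed_one` is mine): however tiny the equal chance per ticket, only finitely many tickets are needed before the chances sum past 1, so the full infinite space can never sum to exactly 1.

```python
from fractions import Fraction  # exact rational arithmetic, no float rounding

def tickets_to_exceed_one(eps):
    """Smallest n such that n * eps > 1, for a positive rational chance eps."""
    return int(1 / eps) + 1

# Give each "ticket" (natural number) a one-in-a-trillion chance of winning.
eps = Fraction(1, 10**12)
n = tickets_to_exceed_one(eps)
assert n * eps > 1  # finitely many tickets already overshoot probability 1,
                    # and every remaining natural number adds still more
```

Denying countable additivity, introducing infinitesimals, or curving the distribution are all ways of escaping this overshoot, as noted above.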
Plantinga’s EAAN argues that evolutionary naturalism is self-defeating, i.e., the belief that naturalism (N) & evolution (E) are true defeats itself, because E&N imply that the probability that we are reliable (R) is low or inscrutable, which in turn provides a defeater for the belief that E&N are true. One of the crucial claims of Plantinga’s argument, if not the most crucial, is that Pr(R/E&N) is low or inscrutable. This means that if evolutionary naturalism is true then the chance that our belief-forming mechanisms are reliable, i.e., produce mainly true beliefs, is very low or just can’t be determined. Plantinga’s argument for this claim involves the claim that evolution selects for adaptive behavior. So the role of belief in the course of evolution lies in its adaptiveness, not solely in its truth-conditions. So far so good, but consider the problem of intentionality, “Brentano’s problem”. Brentano’s problem is a possibility problem: how is it possible that there are states with intentional contents? For instance, a belief that there are cats is an intentional state whose content is “there are cats.” This content is true iff there exists an x such that x is a cat. Cat facades, dogs that look like cats, TV cats, and raccoons on a dark night don’t make that content true. The content “there are cats” zeroes in on a specific kind of biological organism–cats. Brentano’s problem is very difficult for physicalists. Bill Lycan has a series of papers pressing this challenge against existing physicalist accounts of intentionality (for starters, see his paper “Giving Dualism Its Due”, AJP, 2009). What does Brentano’s problem have to do with Plantinga’s EAAN? In short, Plantinga’s right that evolutionary naturalism has a problem with true beliefs, but the reason this is a problem is that evolutionary naturalism has a problem with intentional content.
One of Plantinga’s examples is that the different beliefs “that is a tree” and “that is a witch-tree” might issue in the same adaptive behaviors. This is supposed to illustrate the point that false beliefs might be on a par with true beliefs when it comes to adaptive behavior. That’s right as far as it goes. But given that evolutionary naturalism can’t explain intentional content, it’s hard to see how it might throw up the belief that there are witch-trees, let alone the belief that there are trees. I think Brentano’s problem is fundamental here. To put it contentiously: until we get a solution to Brentano’s problem, Plantinga’s EAAN is simply too “downstream” to evaluate. A more agreeable way to put the point is this: Plantinga’s right that evolutionary naturalism is self-defeating, but the reason for this is that evolutionary naturalism can’t answer Brentano’s problem.
Michael Tooley’s SEP article on the problem of evil is an excellent and thorough introduction to the problem of evil. In section 3.5 Tooley focuses on the inferential step from (a) there appear to be no goods that justify God in permitting some evils to (b) there are no goods that justify God in permitting some evils. My presentation involves a simplification of Tooley’s discussion and also a change from deontological terminology to axiological terms, but these changes are inessential for the point I want to make. Tooley has an interesting defense of this inference in light of the unknown goods move. He imagines that there might be some unknown good that would justify God in permitting evil. But he then claims that the probability that there’d be some such good is equal to the probability that there’d be some unknown evil. So, on the assumption that we are prima facie justified in believing (b) on the basis of (a), Tooley reasons that appeals to unknown goods don’t help. Why? There are four possibilities: (i) the unknown good obtains and no unknown evil obtains; (ii) the unknown good obtains and an unknown evil obtains; (iii) no unknown good obtains and no unknown evil obtains; and (iv) no unknown good obtains and some unknown evil obtains. Tooley then observes that in three of these four cases, the original problematic state of affairs remains impermissible. It’s only in (i) that the original problematic state of affairs is justified. Tooley mentions that all this needs tightening up, but the main intuition is relatively clear. I think there are several lines of response to Tooley’s argument, but I want to consider a smallish point that I think Tooley may agree with. The point is that the unknown goods defense accomplishes this: it lessens the evidential burden on theism by raising the probability of theism. I’ll put the details below the fold.
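Tooley’s four possibilities are just the cross of two yes/no questions, so the “three of four” tally can be checked mechanically. A minimal sketch (variable names are mine, not Tooley’s):

```python
from itertools import product

# Cross "unknown good obtains" with "unknown evil obtains" to get
# Tooley's four cases (i)-(iv), then count the cells in which the
# original evil could be justified: only good-without-evil qualifies.
cases = list(product([True, False], repeat=2))  # (unknown_good, unknown_evil)
justified = [(g, e) for g, e in cases if g and not e]
print(f"{len(justified)} of {len(cases)} cases justify the evil")  # 1 of 4
```

Of course, the philosophical work lies in Tooley’s claim that the four cases deserve roughly equal weight; the tally only makes vivid why, granting that, the unknown goods move looks outnumbered.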
I was talking to Norman Daniels the other day about healthcare reform, and he had some interesting observations about the history of healthcare reform in the US. He remarked that Roosevelt could easily have provided healthcare in a workers’ protection bill, because healthcare at that time was cheap. Evidently, other nations provided universal healthcare earlier in the development of their healthcare systems (with the exception of Canada, which began universal healthcare in the ’70s). Norman also mentioned that Ted Kennedy says in his memoirs that he wished he had made a deal with Nixon on healthcare. Evidently, Nixon had a healthcare bill that was much more just than current healthcare bills, and Kennedy viewed his failure to work with Nixon as a major mistake. Now, what does this have to do with philosophy of religion? Let’s assume Kennedy’s lapse is a social evil: an omission that resulted in a much worse state of affairs and more suffering than would otherwise have occurred. I think there are some interesting features of social evils that aren’t shared by moral evils or natural evils. First, social evils are different from moral evils because the evil that results isn’t a direct result of personal agency. It’s not as if Kennedy’s omission directly caused Joe to be denied healthcare because of a preexisting condition. Second, hindsight can be an important factor in social evils. Kennedy’s lapse provides an interesting case in which he seems to have knowledge of the relevant counterfactuals: if he had worked with Nixon, then a much more just healthcare system would have resulted. The presence of knowledge of counterfactuals here seems relevantly different from the case of natural evil. There’s no discussion of social evil in the POE literature that I know of, though I’d be happy to learn otherwise. If the two differences I mentioned survive reflection, then it’s possible that reflection on social evil will shed new light on the POE.
What do you guys think? I’m particularly interested in whether you think social evil is different in kind from moral evil and natural evil and also whether social evil poses a special problem not already posed by moral and natural evils.
Following up on the previous post, I want to sketch a reply to the GPO that saves the central Plantingian model of theistic belief, the A/C model. The GPO illustrates that Plantinga’s anti-independence strategy (the strategy of arguing that there’s no rationality objection independent of an objection to theism’s truth) is very weak. Many sets of beliefs can adopt this strategy. The mere possibility of a position being able to adopt the anti-independence strategy shouldn’t imply that the position is rational (recall Duhem’s stress on bon sens or good sense in theory choice; see Laudan’s great article on the Quine-Duhem underdetermination thesis). DeRose puts the point this way (see the link he provides in comments on the previous post): the GPO illustrates that sets of beliefs that are irrational turn out to be rational in the many senses of ‘rational’ Plantinga recognizes.
Here’s an attempt to solve this problem. First, drop the stress on the anti-independence strategy. The strategy is very weak and seems to assume a principled distinction between rationality considerations and veritic considerations that is questionable anyway. Second, in order to keep Plantinga’s anti-evidentialist themes–and the Swinburne-Plantinga debate–stress that warranted theistic belief needn’t have positive evidential merit in order to be rational. It suffices for warrant that theistic belief is produced in a way in accord with the A/C model. This yields Plantinga’s famous (or infamous) claim that *if* theism is true then theistic belief is warranted. Third, to handle the GPO, stress that theism is epistemically possible–not known to be false–whereas voodoo and flat earthism are not epistemically possible. Epistemic possibility is not independent of truth, so this is different from the anti-independence strategy. It also provides a good opening for evidentialist considerations. Plantinga can claim–and hints in the recent preface to God and Other Minds–that there is good evidence for theism, but it’s just good enough to make theism epistemically possible. One could combine this with some stress on Pascal’s wager to make theistic ventures rational (but one needn’t do that). The resulting position is one in which there’s no GPO, theism is epistemically possible, those in theistic practices are rational (because of wager-related considerations), and theistic belief is warranted if produced in accord with the A/C model.
Allan Hillman, Kevin Meeker, and I were talking about Plantinga’s reply to the great pumpkin objection yesterday, and it seems to hang on this: there is a de jure objection to certain traditions that is independent of a de facto objection. Plantinga says that the de jure objection can’t be sustained against many forms of monotheism, but it can be sustained against voodoo, flat earthism, philosophical naturalism, and Humean skepticism. I see the de jure argument against naturalism and skepticism, but the voodoo and flat earthism examples seem different. Isn’t it just that we have overwhelming evidence to think the central claims there are false? I wonder whether you all think of the reply to the great pumpkin objection (or the son of the great pumpkin objection) in these terms, and also whether, if that’s right, it’s a sustainable reply (I’ve got my doubts about that).
Several years ago I read Bill Bryson’s book A Short History of Nearly Everything. It’s an entertaining read if you like the popular science genre. Bryson’s book got me wondering about the problem of dead-end species for intelligent design theories. The problem is that dead-end species serve no apparent purpose and so don’t fit well with ID views. If Bryson is right this problem is rather severe. Here’s a quote:
It is a curious fact that on Earth species death is, in the most literal sense, a way of life. No one knows how many species of organisms have existed since life began. Thirty billion is a commonly cited figure, but the number has been put as high as 4,000 billion. Whatever the actual total, 99.99 percent of all species that have ever lived are no longer with us. (p. 342)
What should an ID theorist say about dead-end species?
Several years ago Trent and I were biking in the hills of Missouri thinking about design arguments when one of us (we don’t recall who) said “isn’t it odd that the ID folks stress how hostile the universe is to life while the fine-tuning folks stress how fit the universe is for life?” It took us a while to work out this intuition, but the result is coming out in Religious Studies. Thanks to all who gave us excellent feedback on earlier drafts of this paper!