When one’s book in sexual ethics is coming out (shameless self-promotion), one’s thoughts naturally turn to the philosophy of science. 🙂 A standard line of thought is that naturalism is a simpler theory than theism in that it only posits one kind of entity, the natural world, while theism posits that and God.
A standard theistic response is to concede the point but say that theism wins out through greater explanatory power. Trent and I have, however, been exploring a different line of thought: one measures the simplicity of a theory (with “simplicity” understood in such a way that it is an intellectual merit of a theory that it be simple) primarily by looking at the simplicity of the theory’s explanatorily fundamental posits (this has some structural resemblance to Huemer’s work), rather than at the claims the theory explains.
For instance, suppose that according to our best physics certain laboratory conditions not occurrent in nature produce a Zeta particle. Alien scientists, who are the only ones ever to have the technology for this, are facing a great natural disaster they cannot avert that will destroy their civilization. As one last hurrah for science, they plan to produce a Zeta before the disaster. Unfortunately, at the last minute, they find that an extremely expensive part, which there is no time to repair, has only probability 1/2 of functioning.
Consider the theories: (S) They will succeed in producing a Zeta due to the part functioning and (F) They will fail to produce a Zeta due to the part malfunctioning. Theory S posits the instantiation of a new kind of particle that F does not. If explained phenomena also count towards the complexity of a theory, S is more complex. But that just seems wrong: S and F are on par simplicity-wise. Besides, if S were more complex than F, then, all other intellectual merits being equal–which they sure seem to be–we should take F to be more likely than S. But that would violate what seems an unproblematic instance of the Principal Principle–F and S should have the same probability, namely 1/2.
It’s hard to come up with reasonable priors for such theses as Naturalism and Theism and with reasonable conditional probabilities for such evidence as Evils We Can’t Theodicize on Theism. But we can sometimes come up with reasonable comparisons of the strength of evidence. And this might lead to some helpful non-numerical probabilistic reasoning.
For instance, we might have the judgment that the evidential strength of the Problem of Evil (POE) as an argument against theism is no greater than the evidential strength of the Fine-Tuning Argument (FTA) as an argument for theism. Three thoughts in support of this: (1) the low-entropy initial state of our universe has been estimated by Penrose to be utterly incredibly unlikely (my paraphrase of his 10^(-10^123)), and some of the other anthropic coincidences come with what are intuitively extremely narrow ranges; the theist has proposed various theodicies–they may not be convincing, but it seems reasonable to say that the probability that together they answer the POE is no less, and indeed quite a bit greater, than the incredibly tiny probabilities that FTA cites; (2) just as thinking about naturalistic multiverse hypotheses significantly decreases the force of FTA, thinking about theistic multiverse hypotheses significantly decreases the force of POE (cf. Turner and Kraay’s work); (3) just as in the case of FTA we might worry that there is some nomic explanation of the coincidences that we haven’t found, so too in the case of POE we have sceptical theism.
This means that the theist can simply sacrifice FTA to POE: the FTA either balances POE or outbalances POE (I think the latter, because of point (1) above).
Then the theist has a nice supply of other strong and serious theistic arguments, such as the cosmological, non-FTA design arguments (e.g., Swinburne’s laws of nature argument), ontological, religious experience, moral epistemology (theism has a much better explanation than naturalism of how we can know objective moral truths), etc. The atheist has a few other arguments, too, but I think they are not very impressive (the Paradox of the Stone and other issues arising from the Chisholming of divine attributes, Grim-style worries about omniscience and infinity, worries about the interaction between the physical and the nonphysical). At least once POE is completely out of the picture, even if FTA is lost, the theist can make a very strong case.
Standard sceptical theism focuses on our ignorance of the realm of values. I want to suggest a different kind of sceptical response to an evil E. This response identifies a good G such that it is clear that the occurrence of a good relevantly like G logically requires the permission of an evil relevantly like E; the scepticism lies instead in the claim that we have on balance no significant evidence against the conjunction:
1. G obtains, and
2. G outweighs E, and
3. there is no alternative good G* dissimilar from G that doesn’t require anything nearly as bad as E and that would be more or approximately equally worth having.
If the triple conjunction holds then G justifies E, and so if we have no significant evidence against the triple conjunction, we have no significant evidence that E is unjustified. (Yeah, one can dispute my implicit transfer principle, but something like that should work.)
And it’s fairly easy to generate examples of G that do the job for particular E. Take Rowe’s case of the horrendous evil inflicted on Sue. Let G be Sue’s having forgiven E’s perpetrator. We then have no significant evidence against the conjunction (1)-(3). Granted, we may have significant evidence that G did not obtain in this life, though even that is probably a stretch, but we have on balance no significant evidence that G didn’t obtain in an afterlife. My intuitions strongly favor (2)–there is a way in which forgiveness seems to defeat evil–but in any case we have no significant evidence against (2). As for (3), granted, there are many great moral goods that don’t require anything nearly as bad as E, but I don’t think we have on balance significant evidence that these goods are roughly as good as or better than G. Now, of course, it can be the case (whether due to a logical contradiction or dwindling probabilities) that we don’t have significant evidence against any conjunct but we do have significant evidence against the conjunction. But I don’t think that happens here.
We presuppose something like the Principle of Sufficient Reason (PSR) in daily life and science. So there is very good reason to accept something like PSR. But suppose you don’t want to accept PSR, maybe because you think it implies the existence of God or maybe because you just think it has counterexamples. What can you do? Here is an option:
1. The probability that a particular ordinary event, like the coming into existence of a brick or the death of a person, occurs without an explanation is non-zero but very low.
Here are some problems for this. Consider an infinite series of possible events: a brick of weight 2.5kg coming into existence in front of me now, a brick of weight 2.25kg coming into existence in front of me now, a brick of weight 2.125kg coming into existence in front of me now, …. By (1), each of these is very unlikely to happen without an explanation, but there is a non-zero probability for each. Moreover, plausibly, these non-zero probabilities are approximately the same.[note 1] So, we have an infinite number of possible events, each of which has approximately the same non-zero probability. Barring some further dependence story, we should conclude that very likely at least one of these events will happen. But none of these events in fact happened. Repeat the argument with mugs, rocks, etc. None of the analogues there happened. The theory, thus, stands refuted.
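The probabilistic engine of this argument can be sketched numerically. Assuming, purely for illustration, that each brick-event has the same small probability p of occurring without explanation and that the events are independent, the chance that at least one of n such events occurs is 1 − (1 − p)^n, which tends to 1 as n grows:

```python
# Illustrative sketch with an assumed per-event probability p: among n
# independent events, each of probability p, the chance that at least
# one occurs is 1 - (1 - p)**n, which approaches 1 as n grows.
p = 1e-6  # hypothetical probability of a given unexplained brick-event

for n in (10**4, 10**6, 10**8):
    at_least_one = 1 - (1 - p) ** n
    print(f"n = {n:>9}: P(at least one event) = {at_least_one:.6f}")
```

With infinitely many such events, the probability that at least one occurs goes to 1, which is exactly the conflict with observation that the argument exploits.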
If we grant that two bricks can’t come into existence in the same place at the same time, the argument can be made stronger. Specify in each event the same location L for the brick. Then we have an infinite number of mutually exclusive events, each of which has approximately the same non-zero probability. And that not only is contrary to observation, but violates the conjunction of the total probability axiom and the finite additivity of probabilities (at least on the right understanding of “approximately the same” that ensures that if an infinite sequence of positive numbers is “approximately the same”, their mutual ratios are all moderately close to 1, say between 0.5 and 2).
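The additivity conflict can be made explicit. Suppose the events $E_1, E_2, \dots$ are mutually exclusive and each has probability at least $c > 0$ (which “approximately the same non-zero probability” guarantees for some $c$). Then for any $n > 1/c$, finite additivity gives

```latex
P(E_1 \cup \dots \cup E_n) \;=\; \sum_{i=1}^{n} P(E_i) \;\ge\; nc \;>\; 1,
```

contradicting the axiom that no probability exceeds 1.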
I want to give this argument in part to provoke a bit of discussion of the role of FOL in philosophy. I don’t think the argument carries great weight, in large part because of Objection 2 (see the end).
1. (Premise) The inferences allowed by classical First Order Logic (FOL) combined with a modal logic that includes Necessitation are valid.
2. (Premise) If every being is contingent, then possibly nothing exists. (A material conditional)
3. Necessarily something exists. (By 1)
4. So, there is a necessary being. (By 2 and 3)
The proof of (3) is as follows. Classical logic allows (Ex)(x=x) to be inferred from (x)(x=x). Since (x)(x=x) is a theorem, so is (Ex)(x=x), and hence by the rule of Necessitation, we have: Necessarily (Ex)(x=x). And thus (3) follows. And of course Necessitation is a part of standard modal systems like M, S4 and S5.
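The controversial step, inferring (Ex)(x=x) from (x)(x=x), encodes classical FOL’s built-in assumption that the domain is non-empty. A proof assistant makes the hidden premise visible; here is an illustrative Lean 4 sketch (my own, not from the argument above), where the universal claim is provable for any type but the existential one needs an explicit non-emptiness assumption, which is precisely what a free logic would refuse to grant:

```lean
-- ∀x (x = x) holds for every type, even an empty one.
theorem all_self_eq (α : Type) : ∀ x : α, x = x :=
  fun _ => rfl

-- ∃x (x = x) additionally requires a witness; `Inhabited α` supplies
-- the non-empty-domain assumption that classical FOL builds in silently.
theorem ex_self_eq (α : Type) [Inhabited α] : ∃ x : α, x = x :=
  ⟨default, rfl⟩
```

Drop the `Inhabited α` hypothesis and `ex_self_eq` becomes unprovable, which is why free logicians resist the move to (3).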
I think (2) is intuitively plausible. Here is one way to try to argue for it:
5. (Premise for reductio) Premise (2) is false.
6. (Premise) The non-existence of non-unicorns does not necessitate the existence of unicorns.
7. Every being is contingent and it is necessary that at least one thing exists. (By 5)
8. Necessarily, if no non-unicorns exist, then at least one thing exists. (By 7)
9. Necessarily, if no non-unicorns exist, then at least one unicorn exists. (By 8)
Since (9) contradicts (6), our reductio argument for premise (2) is complete.
(I am grateful to Josh Rasmussen for simplifying my original argument.)
I remember encountering as an undergrad the notion (Mackie?) that moral properties were “queer.” Then I remember reading some stuff in Phil Mind about “ectoplasm” and “spook stuff,” with attributions of mental substance dismissed as “spooky.” I don’t know where this nonsense got started, but I was surprised “real” philosophers would play this kind of card. It is nothing less than a cop-out. I once asked a famous atheist why he didn’t believe in God, and he said it was just “weird” and compared it to belief in numbers. Not acceptable. We’re stuck with the weird. Peter van Inwagen is eloquent on this: we face a choice among mysteries, not a choice between mystery and something else. (Actually, I said that, but he inspired me to say it.)
Here at the University of Saint Thomas Summer Seminar (what a beautiful campus!), we’ve just completed our first week, the topic of which was the Fine-Tuning argument for God’s existence. There were a lot of great presentations and comments pro and con, but I find myself mostly a Swinburne guy here. So I wrote a note to my colleagues here giving a bare-bones summary of his perspective. It is below the fold as a basis for further discussion or just for the record.
The goal of this post is to lay out various skeptical theistic theses. Skeptical theism is the position that we should be leery of our ability to limn the limits of God’s reasons for permitting some cases of horrendous evil. Bergmann casts skeptical theism as responding to Rowe’s noseeum inference from (P) No good we know of justifies God in permitting E1 and E2 (the Bambi and Sue cases) to (Q) No good at all justifies God in permitting E1 and E2. From (Q) one deduces that there is no God (~G).
Here’s a list of various skeptical theistic theses in the order of strongest to weakest.
First group: evidential irrelevance
1. Necessarily, for any evil, P(G|e)=P(G).
(1) claims that, necessarily, evil is evidentially irrelevant to the existence of God. (1) is clearly subject to counterexample: let e be a trillion sentient creatures suffering endless torment. Skeptical theism need not be committed to denying that this would be evidence against theism.
2. For any evil, necessarily, P(G|e)=P(G).
(2) restricts the evils to evils that occur in the actual world and claims that they are such that necessarily they are evidentially irrelevant to the existence of God.
3. For any evil, P(G|e)=P(G).
(3) drops the embedded necessity operator. Depending on how one understands the nature of the P function and the nature of evil, (2) and (3) could be equivalent.
4. For E1 and E2 (and similar evils), P(G|E1&E2)=P(G).
This further restricts the evils in our world to those like the Bambi and Sue cases. One advantage of this restriction is that it allows skeptical theism to be viewed as a special case defense and not a general strategy defense. (Also, we can add back in the necessity operator to get further distinctions here).
Second group: Relevance but not significance (my gloss: “evil isn’t a game changer”)
An evil is a game-changer if it can tip the balance of evidence in favor of atheism or agnosticism. A no-game-changer version of skeptical theism says that while evil can detract from the probability of God, it can’t be the proverbial straw that broke the camel’s back. I shall represent the no-game-changer thesis by using ‘≈’, which says that the probabilities are closely similar.
5. Necessarily, for any evil, P(G|e)≈P(G).
6. For any evil, necessarily, P(G|e)≈P(G).
7. For any evil, P(G|e)≈P(G).
8. For E1 and E2 (and similar evils), P(G|E1&E2)≈P(G).
To undermine Rowe’s inference, all Bergmann and company need is 8. Thus, the skeptical theist can easily recognize that there are many evils that could occur that would significantly detract from the probability of theism. Also, 8 is interesting because it allows skeptical theism to be viewed as a special-case defense that can be run along with other defenses–the free will defense, the soul-making defense, the value of natural laws, etc.
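The no-game-changer idea can be illustrated with a toy Bayesian update (all numbers hypothetical, chosen only for the sketch). In odds form, evidence e with likelihood ratio P(e|~G)/P(e|G) shrinks the odds on G; a game-changing evil is one whose ratio is large enough to push an above-1/2 prior below 1/2, while a merely detracting evil leaves the balance where it was:

```python
# Toy illustration with hypothetical numbers: when does evidence e
# against G "tip the balance" (push the posterior below 1/2)?
def posterior(prior, lr_against):
    """P(G|e) from P(G), where lr_against = P(e|~G) / P(e|G)."""
    odds = prior / (1 - prior)      # prior odds on G
    new_odds = odds / lr_against    # Bayes' theorem in odds form
    return new_odds / (1 + new_odds)

print(posterior(0.6, 1.2))  # mild evidence: G stays above 1/2
print(posterior(0.6, 5.0))  # strong evidence: G drops well below 1/2
```

On these made-up numbers, the first evil detracts without tipping the balance; the second is a game-changer in the sense defined above.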
This post doesn’t come out of extensive research but just a wondering about petitionary prayer. Consider the following two scenarios:
1) When Percy learns of his wife Sally’s sickness, he says a prayer for her. However, when he hears from the doctor that this sickness is life-threatening, he calls his relatives and church community, asking them to also pray for her healing.
2) Hermione says a prayer for her friend’s well-being. However, Hermione desires more than anything else for her daughter’s well-being in college. She goes to bed every night, asking God for this.
From what I know, many religious communities find the actions in (1) and (2) to be commonplace, normal, and even rational. (We see an analogy to persistent prayer in Jesus’ parable of the woman asking the judge for justice, and we see communal prayer all throughout Acts and the epistles.) But I wonder why, exactly, more petitionary prayers are supposed to be helpful. Here are some possibilities:
Aficionados of the fine-tuning argument will be familiar with the normalizability problem presented by the McGrews and Vestrup in their (2001) Mind article. The normalizability problem is that one cannot make sense of probabilities within an infinite space of possibilities in which each possibility is equi-probable. Suppose, for illustration, that there is a lottery on the natural numbers. For each natural number it’s possible that it wins, but no natural number has any greater chance of winning than any other. If we assign each natural number some very small finite chance of winning, then the probabilities in the space of possibilities sum to more than 1 (which is to say that the space of possibilities isn’t normalizable). Because talk of probabilities makes sense only if the probability of the total outcome space equals 1, we can’t make sense of probabilities in this case.

One move here is to deny countable additivity, the claim that you can sum probabilities over a countably infinite space. Another move is to introduce infinitesimals to recover a positive probability without denying countable additivity. Yet another move is to hold that the space of possibilities is uneven in terms of probabilities: the distribution over the infinite range is curved rather than flat. I don’t want to talk about any of these moves. Instead I want to focus on a curious result that arises from the normalizability problem. Hence the title of the post: the Normalizability Problem problem (or, the NP problem).

To begin, let’s step back a bit and ask why the fine-tuning argument is a fairly recent newcomer among the catalog of arguments for theism. The basic idea is that prior to the scientific developments at the beginning of the 20th century, the universe as a whole was conceived to be too vague to support any arguments from its nature.
One couldn’t sensibly talk about specific properties of the universe as a whole, and thus there was no sense to be made of the universe as a whole being fine-tuned. But all that changed with the discovery that the universe is expanding, that the initial state of the universe must have had very specific properties, and that it was governed by specific laws with various mathematical constants. Thus it became appropriate to ask why the initial conditions of the universe and the constants of the laws had the specific values they in fact had. These developments gave rise to the fine-tuning argument, and also to considerable effort among physicists to find a more fundamental theory that would remove some of the improbability that just this ensemble of conditions and constants occurs. In short, it looks like there was a significant change in our epistemic position vis-à-vis the nature of the universe as a whole. Prior to the scientific developments of the 20th century, the nature of the universe as a whole was too vague to give rise to probability intuitions, but after those developments it could.
Now for the NP problem: suppose some 18th-century mathematician, advanced for his age, presented the following a priori argument that the nature of the universe could never be used to evoke probability considerations. Either the universe as a whole is too vague to be a proper object of thought or it’s not. If it’s too vague, then the nature of the universe can’t be used in probability arguments. But if the universe isn’t too vague, then we must have some grip on the universe’s having specific conditions and/or laws with various fundamental constants. In this latter case we still can’t appeal to the nature of the universe to evoke probability considerations, because the space of possibilities for the conditions and constants is infinite (and seems to be equi-probable). So no matter how you look at it, the universe as a whole can’t be used to evoke probability considerations. That’s the NP problem. The normalizability problem looks like it’s simply too strong, because it provides the basis for an a priori argument that the nature of the universe can’t be used to evoke probability considerations.
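The non-normalizability behind the lottery on the natural numbers can be checked directly (the particular value of eps is of course just an illustration). Give every number the same positive chance eps; exact rational arithmetic shows the running total of probability passes 1 after finitely many numbers, so no uniform assignment over all of them can sum to 1:

```python
from fractions import Fraction

# Illustration: a "uniform" chance eps for each natural number cannot
# be normalized -- the running total exceeds 1 after finitely many terms.
eps = Fraction(1, 1_000_000)  # hypothetical per-number probability
total, n = Fraction(0), 0
while total <= 1:
    total += eps
    n += 1
print(n, float(total))  # the total passes 1 at the 1,000,001st number
```

The same overshoot occurs for any positive eps, however small; only eps = 0 avoids it, and then the total is 0 rather than 1. That dilemma is the normalizability problem in miniature.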