Suppose Molinism is true. We know the truth values of some Molinist counterfactuals because we know that their antecedent and consequent are true. But we also have reason to believe many other Molinist counterfactuals. Absent further evidence, if P(A|C) is high, and C is an appropriate antecedent for a Molinist counterfactual C→A, that gives me reason to believe C→A. It certainly gives me reason to believe C→A if I know C is actually true; for if I know C is true, then if P(A|C) is high, P(A) will be fairly high as well, and so A is probably true, and hence C→A is probably true. But I also have reason to think C→A is true in cases where C is false. For instance, if Jones is the sort of person likely to accede to my minor requests, then I have reason to believe that were I to make such-and-such a minor request, he’d accede to it, and I have reason to believe the conditional whether or not I make the request (at least assuming Molinism is true so that the conditional has non-trivial truth-value).
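The known-antecedent step is just the law of total probability: P(A) = P(A|C)P(C) + P(A|¬C)P(¬C) ≥ P(A|C)P(C), so when P(A|C) is high and C is known to be true, P(A) is fairly high as well. A minimal numerical sketch (all the probability values are illustrative stipulations, not estimates):

```python
# Law of total probability: P(A) = P(A|C)P(C) + P(A|not-C)P(not-C).
# Illustrative stipulations: P(A|C) is high and C is known (P(C) near 1).
p_A_given_C = 0.95      # high conditional probability
p_C = 0.99              # C is known to be (almost certainly) true
p_A_given_notC = 0.10   # arbitrary; only a lower bound on P(A) matters

p_A = p_A_given_C * p_C + p_A_given_notC * (1 - p_C)

# P(A) is at least P(A|C) * P(C), so it is fairly high too.
assert p_A >= p_A_given_C * p_C
print(p_A)
```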
This suggests that if the objective probability of A on C is high, then the objective probability of C→A is also high. So the Molinist conditional C→A, assuming it’s true, doesn’t seem to be a mere brute fact. It is a fact subject to meaningful probabilistic assignments. But if it’s not a mere brute fact, it seems reasonable to look for an explanation of it. What is that explanation?
Well, maybe we have a probabilistic explanation. Maybe the fact that C makes A probable explains why C→A. But this is weird. It seems that probabilistic explanation is a species of causal explanation (with probabilistic causation). But there is surely no causal explanation of why C→A, at least in worlds where C is not true. (What would the cause be? The truthmaker of C? But C is not true and has no truthmaker.)
I’ll leave it as a puzzle: How is a Molinist to explain the connection between P(A|C) and the probability of the conditional C→A?
It’s hard to come up with reasonable priors for such theses as Naturalism and Theism and with reasonable conditional probabilities for such evidence as Evils We Can’t Theodicize on Theism. But we can sometimes come up with reasonable comparisons of the strength of evidence. And this might lead to some helpful non-numerical probabilistic reasoning.
For instance, we might have the judgment that the evidential strength of the Problem of Evil (POE) as an argument against theism is no greater than the evidential strength of the Fine-Tuning Argument (FTA) as an argument for theism. Three thoughts in support of this: (1) the low-entropy initial state of our universe has been estimated by Penrose to be utterly incredibly unlikely (my paraphrase of his 10^(-10^123)), and some of the other anthropic coincidences come with what are intuitively extremely narrow ranges; the theist has proposed various theodicies–they may not be convincing, but it seems reasonable to say that the probability that together they answer the POE is no less, and indeed quite a bit greater, than the incredibly tiny probabilities that FTA cites; (2) just as thinking about naturalistic multiverse hypotheses significantly decreases the force of FTA, thinking about theistic multiverse hypotheses significantly decreases the force of POE (cf. Turner and Kraay’s work); (3) just as in the case of FTA we might worry that there is some nomic explanation of the coincidences that we haven’t found, so too in the case of POE we have sceptical theism.
This means that the theist can simply sacrifice FTA to POE: the FTA either balances POE or outbalances POE (I think the latter, because of point (1) above).
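The "sacrifice" move can be put in Bayes-factor terms: if the Bayes factor FTA provides for theism is at least the reciprocal of the Bayes factor POE provides against it, then the two pieces of evidence jointly leave the odds of theism no lower than the prior odds. A minimal sketch with stipulated magnitudes (the numbers are purely illustrative, not estimates of either argument's real strength):

```python
# Treat each argument as a Bayes factor on theism (stipulated magnitudes).
bf_fta = 100.0   # evidence for theism from fine-tuning (illustrative)
bf_poe = 0.05    # evidence against theism from evil (illustrative, < 1)

prior_odds = 1.0  # even prior odds, for illustration
posterior_odds = prior_odds * bf_fta * bf_poe

# Since bf_fta >= 1/bf_poe here (100 >= 20), the combined evidence
# leaves the odds of theism no lower than the prior odds: FTA at
# least balances POE.
assert bf_fta >= 1 / bf_poe
assert posterior_odds >= prior_odds
print(posterior_odds)
```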
Then the theist has a nice supply of other strong and serious theistic arguments, such as the cosmological, non-FTA design arguments (e.g., Swinburne’s laws of nature argument), ontological, religious experience, moral epistemology (theism has a much better explanation than naturalism of how we can know objective moral truths), etc. The atheist has a few other arguments, too, but I think they are not very impressive (the Stone and other issues for the Chisholming of divine attributes, Grim-style worries about omniscience and infinity, worries about the interaction between the physical and nonphysical). At least once POE is completely out of the picture, even if FTA is lost, the theist can make a very strong case.
Call an objection to the existence of God a ‘problem of sub-optimal worlds’ when it appeals to the claim that God has reason to maximize the value of worlds. Since there are better (feasible) worlds than the actual one, these problems conclude, in one way or another, that God does not exist. Although I haven’t looked at the literature closely, my impression is that every instance of the problem of sub-optimal worlds assumes a very simple relationship between the intrinsic value of a state of affairs and reasons for action (they don’t always talk about reasons, but my point could be cast in terms of virtues or whatever else one prefers). Something like this is typically assumed without comment:
Promotion: For every domain of intrinsic value D and subject S, S has reason to maximize D, i.e., for every additional degree of D that could be attained, S has reason to attain that additional degree of value.
Promotion is plausible for some domains of intrinsic value, such as welfare. It’s plausible that, for every additional degree of welfare that I could bring about in your life, I have some reason to take the necessary means of attaining that additional degree of welfare. But does Promotion hold for every domain of intrinsic value? I don’t think so.
The following argument parallels the Slingshot Argument expressed here.
(P1) If X is explicable (can be explained), and X is logically equivalent to Y, then Y is explicable (for any X and Y).
(P2) If X is explicable, and X is semantically equivalent to Y, then Y is explicable (for any X and Y), where semantically equivalent facts are ones that are expressible by syntactically identical sentences whose referring terms refer to the same thing: for example, ‘the fact that Socrates is happy’ is semantically equivalent to ‘the fact that the person who is identical to Socrates is happy’, since ‘the person who is identical to Socrates’ refers to the same thing as ‘Socrates’ (and everything else about the sentences is the same).
[Edit. P2 may be more plausible if stated this way: If X is explicable, and X and Y both say the same thing about the same things (and say nothing else about any other things), then Y is explicable (for any X and Y). Then the deduction below will need a premise to the effect that the cumbersome facts considered there ultimately say the same thing about the same things (and say nothing else about any other things).]
(P3) At least one fact is explicable.
From (P1) – (P3), we deduce the principle that every fact has an explanation as follows. First, there is an explicable fact F (by P3). Second, F is logically equivalent to this fact Q: the x, such that [x is identical to Socrates, and F obtains] is identical to the x, such that [x is identical to Socrates]. Third, for any fact, F*, Q is semantically equivalent to this fact R: the x, such that [x is identical to Socrates, and F* obtains] is identical to the x, such that [x is identical to Socrates]. This is because where F and F* are both facts (and so both obtain), ‘the x, such that [x is identical to Socrates, and F obtains]’ and ‘the x, such that [x is identical to Socrates, and F* obtains]’ both refer to one and the same thing: Socrates (if they refer at all). Finally, R is logically equivalent to F*. It then follows from P1 and P2 that F* is explicable (for any F*), since F is explicable. Therefore, every fact is explicable. To get to PSR, we add the premise that if a fact f has no explanation, then the fact that (f obtains and f has no explanation) is inexplicable.
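The referential claim in the third step can be modeled in a toy way: treat a definite description as a function that picks the unique satisfier from a domain, and note that when F and F* both obtain, the two descriptions pick out the same individual. A minimal sketch (the domain and the truth-values of F and F* are stipulated for illustration):

```python
def the_x(domain, pred):
    """Return the unique member of domain satisfying pred, else None."""
    satisfiers = [x for x in domain if pred(x)]
    return satisfiers[0] if len(satisfiers) == 1 else None

domain = {"Socrates", "Plato"}
F_obtains = True       # F is a fact, so it obtains (stipulated)
F_star_obtains = True  # F* is a fact, so it obtains (stipulated)

# 'the x such that [x is identical to Socrates, and F obtains]'
ref1 = the_x(domain, lambda x: x == "Socrates" and F_obtains)
# 'the x such that [x is identical to Socrates, and F* obtains]'
ref2 = the_x(domain, lambda x: x == "Socrates" and F_star_obtains)

# Both descriptions refer to one and the same thing: Socrates.
assert ref1 == ref2 == "Socrates"
```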
Replies to the Slingshot Argument may (or may not) carry over.
Let ‘PSR’ stand for the principle that whatever is, but need not be, has an explanation for its being.
(PSR) Whatever obtains, but doesn’t obtain of necessity, has an explanation for its obtaining.
Equivalently: Every contingent state of affairs has an explanation.
One might think that PSR has both a priori and empirical support. Regarding the a priori, when we consider an arbitrary state of affairs that obtains but doesn’t have to obtain, we feel motivated to wonder why it obtains; and that wonder seems to reveal an inclination in us to think there ought to be an explanation.
As for empirical support, PSR is a simple (the simplest?) explanation of all the cases of explanation anyone has encountered.
The support is defeated, however, if there are counter-examples to PSR. And my sense is that most philosophers these days think or suspect or worry that there are counter-examples.
Perhaps the most commonly cited counter-examples are these: (1) quantum events, and (2) the Biggest Contingent Fact. It turns out to be difficult, however, to get these counter-examples to stick, as I’ll attempt to explain. I’ll focus more on (2), since I take it to be the more serious candidate.
We presuppose something like the Principle of Sufficient Reason (PSR) in daily life and science. So there is very good reason to accept something like PSR. But suppose you don’t want to accept PSR, maybe because you think it implies the existence of God or maybe because you just think it has counterexamples. What can you do? Here is an option:
- The probability that a particular ordinary event, like the coming into existence of a brick or the death of a person, occurs without an explanation is non-zero but very low.
Here are some problems for this. Consider an infinite series of possible events: a brick of weight 2.5kg coming into existence in front of me now, a brick of weight 2.25kg coming into existence in front of me now, a brick of weight 2.125kg coming into existence in front of me now, …. By the hypothesis above, each of these is very unlikely to happen without an explanation, but there is a non-zero probability for each. Moreover, plausibly, these non-zero probabilities are approximately the same.[note 1] So, we have an infinite number of possible events, each of which has approximately the same non-zero probability. Barring some further dependence story, we should conclude that very likely at least one of these events will happen. But none of these events in fact happened. Repeat the argument with mugs, rocks, etc. None of the analogues there happened. The theory, thus, stands refuted.
If we grant that two bricks can’t come into existence in the same place at the same time, the argument can be made stronger. Specify in each event the same location L for the brick. Then we have an infinite number of mutually exclusive events, each of which has approximately the same non-zero probability. And that not only is contrary to observation, but violates the conjunction of the total probability axiom and the finite additivity of probabilities (at least on the right understanding of “approximately the same” that ensures that if an infinite sequence of positive numbers is “approximately the same”, their mutual ratios are all moderately close to 1, say between 0.5 and 2).
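The additivity point can be made concrete. Suppose, as an illustrative stipulation, that each of the mutually exclusive brick events has probability at least 1/10^6; then finitely many of them already have total probability exceeding 1, which the probability axioms forbid. A minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Illustrative stipulation: each mutually exclusive brick event
# (2.5kg, 2.25kg, 2.125kg, ... at location L) has probability >= epsilon.
epsilon = Fraction(1, 10**6)

# Finite additivity: the probability of a disjunction of n mutually
# exclusive events is the sum of their probabilities, hence >= n * epsilon.
n = int(1 / epsilon) + 1          # just enough events to break the bound
total_lower_bound = n * epsilon   # = 1000001/1000000

# No probability can exceed 1, so the stipulation is inconsistent with
# the axioms once this many mutually exclusive events are in play.
assert total_lower_bound > 1
print(n, float(total_lower_bound))
```

The same arithmetic goes through for any positive lower bound in place of 1/10^6, which is why "non-zero but very low" probabilities of unexplained events cannot be approximately equal across an infinite family of mutually exclusive events.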
Define moral realism as the claim that there are objective moral truths and that we know some of them. Consider the argument:
- If no religious beliefs are true, moral realism is false.
- Moral realism is true.
- So, some religious beliefs are true.
I won’t argue for (2), but only for (1). For my argument I will assume a form of reliabilism. (I think the arguments here work well to establish (1) on non-reliabilist epistemologies; the present argument plugs the gap by covering the reliabilist case.)
Here is the line of thought. Start with this plausible observation:
- If no religious beliefs are true, the correct explanation of our moral beliefs is that moral beliefs were beliefs about unobservable realities that evolved to help prevent defection in prisoner’s dilemmas in cognitively sophisticated hominids.
Now, if reliabilism is true, the question is whether the process, P, of evolutionarily forming beliefs about unobservable realities to help prevent defection in prisoner’s dilemmas is reliable, i.e., likely to produce true beliefs. But now observe that if no religious beliefs are true, very likely our religious beliefs also arose out of P. Positing supernatural judges who can see if one is sneakily defecting in prisoner’s dilemmas is obviously quite helpful. Thus, we have two families of beliefs produced by P: moral and religious. If the religious ones are all false, the process is unreliable. If the process is unreliable, then its outputs are not knowledge. And so if no religious beliefs are true, we have no moral knowledge, and hence moral realism is false.
Of course, we have the usual tricky thing with reliabilism: What is the relevant level of description of the belief-forming process? Is it: “evolutionarily forming beliefs about unobservable realities to help prevent defection in prisoner’s dilemmas”, or is it something narrower that is special to the moral case, and not present in the religious case? I think it would be difficult, however, to formulate a description narrowed to the moral case without being completely ad hoc.
Let Contingentism be the thesis that no concrete thing must exist. Define ‘concrete thing’ as anything that can cause something, or leave it as primitive. (Side note: Contingentism is hotly debated among philosophers of religion. But surely it is a thesis of metaphysics; so why aren’t metaphysicians debating this?)
Arguments against Contingentism typically take the following form:
1. Every fact of type T has an explanation (else: is explicable).
2. If Contingentism is true, then there is a fact of type T that has no explanation (else: is not explicable).
3. Therefore, Contingentism is not true.
Committed Contingentists usually either end up denying the principle of explanation employed by (1) or withholding judgment. After all, such explanatory principles tend to be very far-reaching.
But here’s another strategy. We count costs. Rather than searching for sound philosophical arguments for/against Contingentism, we identify costs and benefits of Contingentism. That may be a lot easier. And it can help us make progress without having to make converts: for a committed contingentist can, in principle, come to agree that there are certain costs of Contingentism.
I’m going to propose one cost–to get this strategy started. (I do not claim this is the most serious cost, or that there aren’t counter-costs that ultimately outweigh it.)
Consider Rowe’s argument, which is essentially:
1. E is an evil for which we have been unable to find a justifier despite serious investigation.
2. Therefore, probably, E has no justifier.
3. If some evil has no justifier, then theism is false.
4. Therefore, probably, theism is false.
And then consider this anti-evolutionary argument:
5. F is a major inheritable feature of an organism for which we have been unable to find an evolutionary explanation despite serious investigation.
6. Therefore, probably, F has no evolutionary explanation.
7. If some major inheritable feature of an organism has no evolutionary explanation, then evolutionary universalism is false.
8. Therefore, probably, evolutionary universalism is false.
Here, evolutionary universalism is the claim that all major inheritable features of organisms have their presence explained by means of evolutionary explanations. (There are many ways of spelling out “major” that still leave (5) plausible in some cases.)
It is an interesting sociological fact that many atheists think 1-4 is a good argument and 5-8 is a bad one, and that many creationists and intelligent design advocates think 5-8 is a good argument and 1-4 is a bad one.
But I think both are bad.