Today’s colloquium paper is “An Empirical Argument for Substance Dualism” by Perry Hendricks. Hendricks is a graduate student in philosophy at Trinity Western University in British Columbia, where he also received his BA. His interests include philosophy of mind, philosophy of religion, and epistemology.
An Empirical Argument for Substance Dualism
Perry Hendricks
A common problem with arguments for dualism is that they rely on modal premises that are only supported by dubious intuitions. This results in the arguments having a narrow scope—only those who already hold the needed intuitions will find them to be convincing. In this paper, I try to remedy this situation by constructing a new modal argument whose key premise is empirically supported. I begin by formulating the physicalist thesis and make clear its commitments. Next I explicate the notions of reduction and substance. After this, I argue that Twin Earth—a physical duplicate of Earth (including its history and its inhabitants)—is possible and that this possibility is empirically supported. I finish by showing that the possibility of Twin Earth entails that selves cannot be reduced and are not supervenient, and this entails that they are non-physical. Further, since selves are substances, it follows that substance dualism is true.
It is not uncommon to hear the argument that if there is an afterlife, then dualism must be true. However, dualism is false, and hence there is no afterlife. It is also not uncommon to hear the argument that if dualism is true, then the probability of theism rises. I find neither of these arguments compelling—I think that physicalism is compatible with an afterlife and that dualism does not raise the probability of theism—but if my argument is correct, it will provide a way to circumvent the first argument while providing support for the crucial premise of the second (i.e. that dualism is true). However, my argument will bring out a new challenge for theism: if the argument that I defend here is successful, then it follows that God acted arbitrarily in actualizing me over another self (or person). This is because multiple selves could have served the causal role that I do. But then why pick me over someone else? What could possibly ground this choice?
In its barest form, my argument is that physicalism entails that everything that exists is at least minimally supervenient, but selves are not minimally supervenient. Hence physicalism is false. Further, since selves are not minimally supervenient, it follows that they are non-physical. To show that selves are not minimally supervenient, I argue that they cannot be functionally reduced because it is possible for multiple selves to play the same causal role in the world.
One objection that I have been pondering recently is that Twin Perry and I do not have identical causal roles because of our differing spatial locations. That is, Twin Perry’s causal role is (slightly) different from mine because he is causally related to Earth in a way that I am not, and I am causally related to Twin Earth in a way that he is not. While I’m not convinced that this difficulty is insurmountable (it is not clear to me that these differences are relevant given my definition of the self), we could tweak the argument to get around this objection as follows. First, note that Twin Earth and Twin Perry are possible. Second, note that this entails that Twin Perry can cause the same actions as I do—Twin Perry and I have overlapping causal powers. Lastly, note that this entails that my causal role does not point only to me, for Twin Perry could cause the same actions—play the same causal role—as I do. Hence Twin Perry and I may be inverted, and the objection mentioned above is rendered irrelevant.
The complete paper is here. Discussion welcome below!
One objection to some solutions to the problem of evil, particularly to sceptical theism, is that if there are such great goods that flow from evils, then we shouldn’t prevent evils. But consider the following parable.
I am an air traffic controller and I see two airplanes that will collide unless they are warned. I also see our odd security guard, Jane, standing around and looking at my instruments. Jane is super-smart and very knowledgeable, to the point that I’ve concluded long ago that she is in fact all-knowing. A number of interactions have driven me to concede that she is morally perfect. Finally, she is armed and muscular so she can take over the air traffic control station on a moment’s notice.
Now suppose that I reason as follows:
- If I don’t do anything, then either Jane will step in, take over the controls and prevent the crash, or she won’t. If she does, all is well. If she doesn’t, that’ll be because in her wisdom she sees that the crash works out for the better in the long run. So, either way, I don’t have good reason to prevent the crash.
This is fallacious as it assumes that Jane is thinking of only one factor, the crash and its consequences. But the mystical security guard, being morally perfect, is also thinking of me. Here are three relevant factors:
- C: the value of the crash
- J: the value of my doing my job
- p: the probability that I will warn the pilots if Jane doesn’t step in.
Here, J>0. If Jane foresees that the crash will lead to on balance goods in the long run, then C>0; if common sense is right, then C<0. Based on these three factors, Jane may be calculating as follows:
- Expected value of non-intervention: pJ+(1−p)C
- Expected value of intervention: 0 (no crash and I don’t do my job).
Let’s suppose that common sense is right and C<0. Will Jane intervene? Not necessarily. If p is sufficiently close to 1, then pJ+(1−p)C>0 even if C is a very large negative number. So I cannot infer that if C<0, or even if C<<0, then Jane will intervene. She might just have a lot of confidence in me.
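Jane’s calculation can be checked with a quick sketch. The particular numbers here are illustrative assumptions, not from the parable; the point is only that pJ+(1−p)C stays positive when p is close enough to 1, even for a very negative C:

```python
def expected_value_of_non_intervention(p, J, C):
    """Jane's expected value of staying out: p*J + (1 - p)*C."""
    return p * J + (1 - p) * C

# Illustrative (assumed) values: my doing my job is a modest good (J = 1),
# the crash is a very large evil (C = -1000).
J, C = 1.0, -1000.0

# If Jane is confident enough in me, non-intervention beats intervention
# (whose value is 0): pJ + (1-p)C > 0 even though C is hugely negative.
assert expected_value_of_non_intervention(0.9999, J, C) > 0

# With only middling confidence in me, she should step in.
assert expected_value_of_non_intervention(0.5, J, C) < 0
```

So nothing about the crash alone settles whether Jane intervenes; it all depends on how much confidence she has in me.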
Suppose now that I don’t warn the pilots, and Jane doesn’t either, and so there is a crash. Can I conclude that I did the right thing? After all, Jane did the right thing—she is morally perfect—and I did the same thing as Jane, so surely I did the right thing. Not so. For Jane’s decision not to intervene may be based on the fact that her intervention would prevent me from doing my job, while my own intervention would do no such thing.
Can I conclude that I was mistaken in thinking Jane to be as smart, as powerful or as good as I thought she was? Not necessarily. We live in a chaotic world. If a butterfly’s wings can lead to an earthquake a thousand years down the road, think what an airplane crash could do! And Jane would take that sort of thing into account. One possibility is that Jane saw that it was on balance better for the crash to happen, i.e., C>0. But another possibility is that she saw that C<0, but that it wasn’t so negative as to make pJ+(1−p)C come out negative.
Objection: If Jane really is all-knowing, her decision whether to intervene will be based not on probabilities but on certainties. She will know for sure whether I will warn the pilots or not.
Response: This is complicated, but what would be required to circumvent the need for probabilistic reasoning would be not mere knowledge of the future, but knowledge of conditionals of free will that say what I would freely do if she did not intervene. And even an all-knowing being wouldn’t know those, because there aren’t any true non-trivial such conditionals.
Suppose that we’ve observed a dozen randomly chosen ravens and they’re all black. We (cautiously) make the obvious inference that all ravens are black. But then we find out that regardless of parental color, newly conceived raven embryos have a 50% chance of being black and a 50% chance of being white, and that they have equal life expectancy in the two cases. When we find this out, we thereby also find out that it was just a fluke that our dozen ravens were all black. Thus, finding out that it’s random with probability 1/2 that a given raven will be black defeats the obvious inference that all ravens are black, and even defeats the inference that the next raven we will see will be black. The probability that the next raven we observe will be black is 1/2.
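How much of a fluke the streak is, once the 50/50 mechanism is known, is easy to quantify (a minimal sketch):

```python
from fractions import Fraction

# If each raven is independently black with probability 1/2,
# a dozen black ravens in a row is a genuine fluke:
p_black = Fraction(1, 2)
p_all_black = p_black ** 12
print(p_all_black)  # 1/4096

# But the streak carries no evidential weight about the next raven:
# knowing the mechanism, the next raven is black with probability 1/2.
p_next_black = p_black
```

The observed run is improbable, yet it licenses no inference at all about the color of the next raven.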
Next, suppose that instead of finding out about probabilities, we find out that there is no propensity either way of a conception resulting in a black raven or its resulting in a white raven. Perhaps an alien uniformly randomly tosses a perfectly sharp dart at a target, and makes a new raven be black whenever the dart lands in a maximally nonmeasurable subset S of the target and makes the raven be white if it lands outside S. (A subset S of a probability space Ω is maximally nonmeasurable provided that every measurable subset of S has probability zero and every measurable superset of S has probability one.) This is just as much a defeater as finding out that the event was random with probability 1/2. (The results of this paper are driving my intuitions here.) It’s still just a fluke that the dozen ravens we observed were all black. We still have a defeater for the claim that all ravens are black, or even that the next raven is black.
Finally, suppose instead that we find out that ravens come into existence with no cause, for no reason, stochastic or otherwise, and their colors are likewise brute and unexplained. This surely is just as good a defeater for inferences about the colors of ravens. It’s just a fluke that all the ones we saw so far were black.
Now suppose that the initial state of the universe is a brute fact, something with no explanation, stochastic or otherwise. We have (indirect) observations of a portion of that initial state: for instance, we find the portion of the state that has evolved into the observed parts of the universe to have had very low entropy. And science appropriately makes inferences from the portions of the initial state that have been observed by us to the portions that have not been observed, and even to the portions that are not observable. Thus, it is widely accepted that the whole of the initial state had very low entropy, not just the portion of it that has formed the basis of our observations. But if the initial state and all of its features are brute facts, then this bruteness is a defeater for inductive inferences from the observed to the unobserved portions of the initial state.
So some cosmological inductive inferences require that the initial state of the universe not be entirely brute. I don’t know just how much cosmology depends on the initial state not being entirely brute, but I suspect quite a bit.
What if there is no initial state? What if instead there is an infinite regress? Here I am more tentative, but I suspect that the same problem comes back when one considers the boundary conditions, say at time negative infinity. If these boundary conditions are brute, then we’ve got the same problem as with a brute initial state. Likewise, a contingent first cause will not help, either, since the argument can be applied to its state.
It seems that the only way out of scepticism about cosmology is if there is a necessary first cause. And I also suspect that the impact of the argument may go beyond cosmology. Presumably, we continue to come into causal contact with portions of the initial state that we have previously not been in contact with, and couldn’t that affect us in all sorts of ways that undermine more ordinary inductive inferences (e.g., a burst of radiation might kill us all tomorrow, and no probabilities can be assigned to the burst, and hence no probabilities can be assigned to any positive facts about what we will do tomorrow)? If so, then we lose quite a bit of our predictive ability about the future if we hold the initial state to be brute.
When someone asks ‘why p rather than q?’, it is sometimes a good answer to say, ‘p is far more probable than q.’ When someone asks, ‘why is p more probable than q?’, it is sometimes a good answer to say, ‘there are many more ways for p to be true than for q to be true.’ According to a well-known paper by Peter van Inwagen, the question ‘why is there something rather than nothing?’ can be answered in just this fashion: something is far more probable than nothing, because there are infinitely many ways for there to be something, but there is only one way for there to be nothing. In his contribution to The Puzzle of Existence, Matthew Kotzen argues that this sort of answer is only sometimes a good one, and that we cannot know a priori whether it is a good answer to the question of something rather than nothing.
Kotzen’s general line of response is a standard one: he argues that there are many possible measures, and not all of them assign probability 0 to the empty world. Van Inwagen is perfectly aware of this problem, but argues that a priori considerations allow us to select a natural measure. Kotzen’s strategy is to identify some everyday examples where this pattern of explanation looks good, and some where it looks bad, and show that van Inwagen’s a priori considerations don’t draw the line between good and bad in the right place. Furthermore, he argues (p. 228) that van Inwagen’s considerations may not actually be sufficient to assign unique probabilities in the relevant cases, since it is not always clear what space the measure should be assigned over.
I think Kotzen’s argument against van Inwagen is quite compelling. The best thing about Kotzen’s article, though, is that it does a great job explaining these complex issues at a moderate level of rigor and detail while assuming hardly any background. This would be a great article to assign to undergraduate students.
In the rest of this post, I’m going to do two things. First, I’m going to explain the issue about measures at a much lower level of rigor and detail than Kotzen does, just to make sure we are all up to speed. Second, I am going to raise the question of whether van Inwagen’s argument might have an even bigger problem: whether, instead of too many equally eligible measures, there might be none.
The simplest, most familiar, cases where the probabilistic pattern of explanation with which we are concerned works are finite and discrete. This is the case, for instance, with dice rolls or coin flips. The coin either comes up heads or tails; each die shows one of its six faces. So then, as one learns in one’s very first introduction to probabilities, in the case of the dice roll, the probability of any particular proposition about that dice roll is the number of cases in which the proposition is true divided by the total number of possible cases (for two six-sided dice, 36). In real life, by dividing the outcomes into discrete cases like this, we care about certain factors (which face is up) and not about others (e.g., where on the table the dice land). This division into discrete cases is called a partition. The reason the probabilities are so simple in the dice case, with each case in the partition being equally likely, is because we chose a good partition. (Well, actually, it’s because a fair die is defined as one that makes each of those outcomes equally probable, but let’s ignore that for now and imagine that fair dice just occur in nature rather than being made by humans on purpose.) Suppose that, on one of our dice, the face with six dots is painted red rather than white and, for some reason, what we really care about is whether the red face is up. Well then we might partition the outcomes accordingly, into the red outcome and the non-red outcomes. But these two cases (red and non-red) are not equally probable.
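The contrast between the good partition and the red/non-red partition can be checked by brute-force counting over the 36 cases (a minimal sketch; the red-painted six is assumed to be on the first die):

```python
from itertools import product
from fractions import Fraction

# All 36 equally likely outcomes for two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))
assert len(outcomes) == 36

def prob(event):
    """Probability of an event: favorable cases over total cases."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

# The 'good' partition: each pair of faces is equally likely.
assert prob(lambda o: o == (3, 5)) == Fraction(1, 36)

# The red/non-red partition (the six on the first die is painted red):
# its two cells are NOT equally probable.
assert prob(lambda o: o[0] == 6) == Fraction(1, 6)
assert prob(lambda o: o[0] != 6) == Fraction(5, 6)
```

The counting rule works either way; it is only the assumption of equiprobable cells that the second partition loses.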
Sometimes the thing we care about is not a discrete case like this, but a fundamentally continuous case like (in a standard example) where on a dartboard a perfectly thin dart lands. A measure is basically the equivalent, in this continuous case, of the partition in the discrete case. For the dart board, there is a natural measure, one that ‘just makes sense’, and this is provided by our ordinary spatial concepts. So if, for instance, the bullseye takes up 1/10 of the area of the dartboard then, if the dart is thrown randomly, it will have a 1/10 chance of landing there. (Again, this is really just what it means for the dart to be thrown randomly.) This isn’t the only possible measure, but it’s the one that, in some sense, ‘just makes sense.’ But the question is, is there a natural measure on the space of possible worlds? That is, is there some ‘correct’ or ‘sensible’ or ‘natural’ way of saying how ‘far apart’ two possible worlds are? This is far from clear. The Lewis-Stalnaker semantics for counterfactuals supposes that we can talk about some worlds being ‘closer together’ than others, but this is not enough to define a measure. Furthermore, Lewis, at least, thinks that the closeness of worlds might change based on contextual factors (which respects of similarity we most care about), so it seems like there’s a plurality of measures there. Perhaps one could claim that all of these reasonably natural measures agree in assigning nothing probability 0, but that’s not clear either. For instance, Leibniz seems to think that one reason why the existence of something cries out for explanation is that “a nothing is simpler and easier than a something” (“Principles of Nature and Grace,” tr. Woolhouse and Francks, sect. 7). So maybe we should adopt a measure in which worlds get lower probability the more complicated they are. (I think Swinburne might also have a view like this.) On this kind of view, the empty world (if there is such a world) will be the most probable world. 
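The dartboard’s natural measure can be illustrated with a small Monte Carlo sketch. The unit-radius board and the 1/10-area bullseye are illustrative assumptions:

```python
import math
import random

random.seed(0)

# A bullseye occupying 1/10 of the area of a unit-radius board:
# pi * r^2 = 0.1 * pi, so r = sqrt(0.1).
bullseye_radius = math.sqrt(0.1)

hits = 0
trials = 100_000
for _ in range(trials):
    # Rejection sampling for a point uniform on the unit disk.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            break
    if x * x + y * y <= bullseye_radius ** 2:
        hits += 1

print(hits / trials)  # roughly 0.1, matching the area ratio
```

The frequency tracks the spatial measure; the open question in the text is precisely whether the space of possible worlds has any analogue of this natural area measure.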
So the plurality of measures seems like a problem.
It’s not the only problem, though. Kotzen notes that “the Lebesgue measure can be defined only in spaces that can be represented as Euclidean n-dimensional real-valued spaces” (222). (The Lebesgue measure is the standard measure used, for instance, in the dart board case: the bigger the space a set takes up, the bigger its measure.) But the space of possible worlds is not like this! David Lewis has argued that the cardinality of the space of possible worlds must be greater than the cardinality of the continuum (Plurality of Worlds, 118). The reason is relatively simple: suppose that it is possible that there should be a two-dimensional Euclidean space in which every point is either occupied or unoccupied. The set of possible patterns of occupied and unoccupied points in such a space (each representing a distinct possibility) will be larger than the continuum. But if this is right, then there can be no Lebesgue measure on the possible worlds because there are too many worlds. Even if this exact class of worlds is not really possible (for reasons such as the considerations about space in modern physics I raised last time) it seems likely that there are too many worlds for the space of possible worlds to have a Lebesgue measure. Yet Kotzen attributes to van Inwagen the view “that we ought to associate a proposition’s probability with its Lebesgue measure in the relevant space” (227).
Maybe van Inwagen is not in quite this much trouble. He doesn’t actually seem to say anything about a Lebesgue measure in the paper, so I’m not sure exactly why Kotzen thinks van Inwagen is committed to this. In fact, in the paper Kotzen is discussing, van Inwagen cites his earlier discussion in Daniel Howard-Snyder’s collection, The Evidential Argument from Evil. In endnote 3 (pp. 239-240) of that article, van Inwagen says “the notion of the measure of a set of worlds gets most of such content as it has from the intuitive notion of the proportion of logical space that a set of worlds occupies.” I find it a little bit ironic that van Inwagen says this, because he’s always denying that he has intuitions about things! I don’t have intuitions about proportions of logical space. In any event, it seems to me that van Inwagen is here disavowing the project of giving a well-defined measure in the mathematician’s sense.
Suppose one did want to identify a natural measure that was well-defined in the mathematician’s sense. I’m not sure about all the technicalities of trying to do this for sets of larger-than-continuum cardinality, and whether it can be done at all. Even if it can, though, it’s going to be hard to say that one measure is more intuitive or natural than another in such an exotic realm. Things might be even worse: Pruss thinks (PSR, p. 100) that, for any cardinality k, it is possible that there be k many photons. If this is true, then there is a proper class of possible worlds, and one certainly can’t define a measure on a proper class. (This is another thing I don’t think I have intuitions about.)
All this to say: anyone who wants to assign a priori probabilities to all propositions (as van Inwagen does) is fighting an uphill battle, but if such probabilities cannot be assigned, then it does not seem that the probabilistic pattern of explanation can be used to tell us why there is something rather than nothing.
(Cross-posted at blog.kennypearce.net)
In its classical formulation, Pascal’s Wager contends that we have something like the following payoff matrix:
| | God exists | No God |
| --- | --- | --- |
| Believe | +∞ | −a |
| Don’t believe | −b | c |
where a,b,c are finite. Alan Hajek, however, observes that it is incorrect to say that if you don’t choose to believe, then the payoff is finite. For even if you don’t now choose to believe, there is a non-zero chance that you will later come to believe, so the expected payoff whether you choose to believe or not is +∞.
Hajek’s criticism has the following unhappy upshot. Suppose that there is a lottery ticket that costs a dollar and has a 9/10 chance of getting you an infinite payoff. That’s a really good deal intuitively: you should rush out and buy the ticket. But the analogue to Hajek’s criticism will say that since there is a non-zero chance that you will obtain the ticket without buying it—maybe a friend will give it to you as a gift—the expected payoff is +∞ whether you buy or don’t buy. So there is no point to buying. So Hajek’s criticism leads to something counterintuitive here, though that won’t surprise Hajek. The point of this post is to develop a rigorous, principled response to Hajek’s criticism that vindicates the intuition that you should go for the higher probability of an infinite outcome over a lower probability of it.
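The way infinities swamp naive expected-value comparisons can be seen in a few lines. The probabilities below are illustrative (the tiny chance of a gifted ticket is an assumed number):

```python
import math

def naive_expected_value(outcomes):
    """Sum of probability * payoff; any chance of +inf swamps the rest."""
    return sum(p * v for p, v in outcomes)

# Buying: 9/10 chance of the infinite payoff, else out a dollar.
buy = [(0.9, math.inf), (0.1, -1.0)]

# Not buying: some tiny (assumed) chance a friend gifts a winning ticket.
dont_buy = [(0.0001, math.inf), (0.9999, 0.0)]

# Hajek-style comparison: both come out +infinity, so naive expected
# value cannot distinguish the two choices.
assert naive_expected_value(buy) == math.inf
assert naive_expected_value(dont_buy) == math.inf
```

This is exactly the deadlock the axioms below are designed to break without ever computing an expected value over infinite payoffs.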
A gamble is a random variable on a probability space. We will consider gambles that take their values in R*=R∪{−∞,+∞}, where R is the real numbers. Say that gambles X and Y are disjoint provided that at no point in the probability space are they both non-zero. We will consider an ordering ≤ on gambles, where X≤Y means that Y is at least as good a deal as X. Write X<Y if X≤Y but not Y≤X. Then we can say Y is a strictly better deal than X. Say that gambles X and Y are probabilistically equivalent provided that for any (Borel measurable) set of values A, P(X∈A)=P(Y∈A). Here are some very reasonable axioms:
- ≤ is a partial preorder, i.e., transitive and reflexive.
- If X and Y are real valued and have finite expected values, then X≤Y if and only if E(X)≤E(Y).
- If X and Y are defined on the same probability space and X(ω)≤Y(ω) for every point ω, then X≤Y.
- If X and Y are disjoint, and so are W and Z, and if X≤W and Y≤Z, then X+Y≤W+Z. If further X<W, then X+Y<W+Z.
- If X and Y are probabilistically equivalent, then X≤Y and Y≤X.
For any random variable X, let X* be the random variable that has the same value as X where X is finite and has value zero where X is infinite (positively or negatively).
The point of the above axioms is to avoid having to take expected values where there are infinite payoffs in view.
Theorem. Assume Axioms 1-5. Suppose that X and Y are gambles with the following properties:
- P(X=+∞)<P(Y=+∞)
- P(X=−∞)≥P(Y=−∞)
- X* and Y* have finite expected values
Then: X<Y.
It follows that in the lottery case, as long as the probability of getting a winning ticket without buying is smaller than the probability of getting a winning ticket when buying, you should buy. Likewise, if choosing to believe has a greater probability of the infinite payoff than not choosing to believe, and has no greater probability of a negative infinite payoff, and all the finite outcomes are bounded, you should choose to believe.
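The theorem’s hypotheses can be turned into a simple decision procedure for finitely supported gambles. The `Gamble` representation here is a hypothetical sketch, not anything from the post:

```python
import math
from dataclasses import dataclass

@dataclass
class Gamble:
    """A gamble as a finite list of (probability, payoff) pairs."""
    outcomes: list  # [(p, v)]; the p's sum to 1; v may be +/- math.inf

    def prob(self, value):
        return sum(p for p, v in self.outcomes if v == value)

    def finite_mean(self):
        """E(X*): the expectation with infinite payoffs zeroed out."""
        return sum(p * v for p, v in self.outcomes if math.isfinite(v))

def strictly_better(x, y):
    """True if the Theorem's hypotheses certify Y as strictly better than X."""
    return (math.isfinite(x.finite_mean())      # E(X*) finite
            and math.isfinite(y.finite_mean())  # E(Y*) finite
            and x.prob(math.inf) < y.prob(math.inf)
            and x.prob(-math.inf) >= y.prob(-math.inf))

# The lottery case, with an assumed tiny chance of a gifted ticket:
buy = Gamble([(0.9, math.inf), (0.1, -1.0)])
dont_buy = Gamble([(0.0001, math.inf), (0.9999, 0.0)])

assert strictly_better(dont_buy, buy)      # buying is strictly better
assert not strictly_better(buy, dont_buy)  # and not vice versa
```

Unlike the naive expected-value comparison, this ordering separates the two options: buying carries the higher probability of the infinite payoff and no greater probability of infinite loss.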
Proof of Theorem: Say that an event E is continuous provided that for any 0≤x≤P(E), there is an event F⊆E with P(F)=x. By Axiom 5, without loss of generality {X∈A} and {Y∈A} are continuous for any (Borel measurable) A. (Proof: If necessary, enrich the probability space that X is defined on to introduce a random variable U uniformly distributed on [0,1] and independent of X. The enrichment will not change any gamble orderings by Axiom 5. Then if 0≤x≤P(X∈A), just choose a∈[0,1] such that aP(X∈A)=x and let F={X∈A&U≤a}. Ditto for Y.)
Now, given an event A and a random variable X, let AX be the random variable equal to X on A and equal to zero outside of A. Let A={X=−∞} and B={Y=−∞}. Define the random variables X_{1} and Y_{1} on [0,1] with uniform distribution by X_{1}(x)=−∞ if x≤P(A) and X_{1}(x)=0 otherwise, and Y_{1}(x)=−∞ if x≤P(B) and Y_{1}(x)=0 otherwise. Since P(A)≥P(B) by the second hypothesis of the Theorem, it follows that X_{1}(x)≤Y_{1}(x) everywhere and so X_{1}≤Y_{1} by Axiom 3. But AX and BY are probabilistically equivalent to X_{1} and Y_{1} respectively, so by Axiom 5 we have AX≤BY. If we can show that A^{c}X<B^{c}Y then the conclusion of our Theorem will follow from the second part of Axiom 4.
Let X_{2}=A^{c}X and Y_{2}=B^{c}Y. Then P(X_{2}=+∞)<P(Y_{2}=+∞), X_{2}* and Y_{2}* have finite expected values and X_{2} and Y_{2} never have the value −∞. We must show that X_{2}<Y_{2}. Let C={X_{2}=+∞}. By continuity, let D be a subset of {Y_{2}=+∞} with P(D)=P(C). Then CX_{2} and DY_{2} are probabilistically equivalent, so CX_{2}≤DY_{2} by Axiom 5. Let X_{3}=C^{c}X_{2} and Y_{3}=D^{c}Y_{2}. Observe that X_{3} is everywhere finite. Furthermore P(Y_{3}=+∞)=P(Y_{2}=+∞)−P(X_{2}=+∞)>0.
Choose a finite N sufficiently large that NP(Y_{3}=+∞)>E(X_{3})−E(Y_{3}*) (the finiteness of the right hand side follows from our integrability assumptions). Let Y_{4} be a random variable that agrees with Y_{3} everywhere where Y_{3} is finite, but equals N where Y_{3} is infinite. Then E(Y_{4})=NP(Y_{3}=+∞)+E(Y_{3}*)>E(X_{3}). Thus, Y_{4}>X_{3} by Axiom 2. But Y_{3} is greater than or equal to Y_{4} everywhere, so Y_{3}≥Y_{4} by Axiom 3. By Axiom 1 it follows that Y_{3}>X_{3}. But DY_{2}≥CX_{2} and X_{2}=CX_{2}+X_{3} and Y_{2}=DY_{2}+Y_{3}, so by Axiom 4 we have Y_{2}>X_{2}, which was what we wanted to prove.
One of our graduate students, Matt Wilson, suggested an analogy between Pascal’s Wager and the question about whether to promote or fight theistic beliefs in a social context (and he let me cite this here).
This made me think. (I don’t know what of the following would be endorsed by Wilson.) The main objections to Pascal’s Wager are:
- Difficulties in dealing with infinite utilities. That’s merely technical (I say).
- Many gods.
- Practical difficulties in convincing oneself to sincerely believe what one has no evidence for.
- The lack of epistemic integrity in believing without evidence.
- Would God reward someone who believes on such mercenary grounds?
- The argument just seems too mercenary!
Do these hold in the social context, where I am trying to decide whether to promote theism among others? If theistic belief non-infinitesimally increases the chance of other people getting infinite benefits, without any corresponding increase in the probability of infinite harms, then that should yield very good moral reason to promote theistic belief. Indeed, given utilitarianism, it seems to yield a duty to promote theism.
But suppose that instead of asking what I should do to get myself to believe, we ask what I should try to get others to believe. Then there are straightforward answers to the analogue of (3): I can offer arguments for and refute arguments against theism, and help promote a culture in which theistic belief is normative. How far I can do this is, of course, dependent on my particular skills and social position, but most of us can do at least a little, either to help others to come to believe or at least to maintain their belief.
Moreover, objection (4) works differently. For the Wager now isn’t an argument for believing theism, but an argument for increasing the number of people who believe. Still, there is force to an analogue to (4). It seems that there is a lack of integrity in promoting a belief that one does not hold. One is withholding evidence from others and presenting what one takes to be a slanted position (for if one thought that the balance of the evidence favored theism, then one wouldn’t need any such Wager). So (4) has significant force, maybe even more force than in the individual case. Though of course if utilitarianism is true, that force disappears.
Objections (5) and (6) disappear completely, though. For there need be nothing mercenary about the believers any more, and the promoter of theistic beliefs is being unselfish rather than mercenary. The social Pascal’s Wager is very much a morally-based argument.
Objections (1) and (2) may not be changed very much. Though note that in the social context there is a hedging-of-the-bets strategy available for (2). Instead of promoting a particular brand of theism, one might instead fight atheism, leaving it to others to figure out which kind of theist they want to be. Hopefully at least some theists get right the brand of theism—while surely no atheist does.
I think the integrity objection is the most serious one. But that one largely disappears when instead of considering the argument for promoting theism, one considers the argument against promoting atheism. For while it could well be a lack of moral integrity to promote one-sided arguments, there is no lack of integrity in refraining from promoting one’s beliefs when one thinks the promotion of these beliefs is too risky. For instance, suppose I am 99.9999% sure that my new nuclear reactor design is safe. But 99.9999% is just not good enough for a nuclear reactor design! I therefore might choose not to promote my belief about the safety of the design, even with the 99.9999% qualifier, because politicians and reporters who aren’t good at reasoning about expected utilities might erroneously conclude not just that it’s probably safe (which it probably is), but that it should be implemented. And the harms of that would be too great. Prudence might well require me to be silent about evidence in cases where the risks are asymmetrical, as in the nuclear reactor case where the harm of people coming to believe that it’s safe when it’s unsafe so greatly outweighs the harm of people coming to believe that it’s unsafe when it’s safe. But the case of theism exhibits a similar asymmetry.
Thus, consistent utilitarian atheists will promote theism. (Yes, I think that’s a reductio of utilitarianism!) But even apart from utilitarianism, no atheist should promote atheism.
As I have argued elsewhere, it is very difficult to reconcile the idea that God intentionally designed human beings with the statistical explanations we would expect to see in a completed evolutionary theory. One might respond that our current evolutionary theory is not thus completed, but it would be nice to have a story that would fit even with a future completed theory. I now offer such a solution, albeit one I am not fond of.
Suppose first that God determines (either directly or mediately) every quantum event in the evolutionary history of human beings. Suppose further that physical reality is infinite, either spatially or temporally or in the multiverse way, in such wise that the quantum events in our evolutionary history can be arranged into a fairly natural infinite sequence and given frequentist probabilities.
So far this is a simple and quite unoriginal solution. And it is insufficient. A standard problem with frequentist accounts is that they get the order of explanations wrong. It is central to a completed evolutionary story that the probabilistic facts explain the arising of human beings. But if the probabilistic facts are grounded in the sequence of events, as on frequentism they are, then they cannot explain what happens in that sequence of events. Some Humeans are happy to bite the bullet and accept circular explanations here, but I take the objection to be very serious.
However, theistic frequentism has a resource that bare frequentism does not. The theistic frequentist can make probability facts be grounded not in the frequencies of the infinite sequence of events as such, but in God’s intention to produce an infinite sequence of events with such-and-such frequencies and to do so under the description “an infinite sequence of events with such-and-such frequencies.” This requires God to have a reason to produce a sequence of events with such-and-such frequencies as such, but a reason is not hard to find: statistical order is a genuine kind of order, and order is valuable.
The theistic frequentist now has much less of a circularity worry. It is not the infinite sequence of events that grounds the probabilities that are, in turn, supposed to explain the events within the evolutionary sequence. Rather, it is God’s intention to produce events with such-and-such frequencies that grounds the probabilities, and the events in the sequence can be non-circularly explained by their having frequencies that God had good reason (say, based on order) to produce.
I don’t think it an overstatement to say that the concept of the infinite plays a key role in the philosophy of religion. There are at least two senses in which ‘infinite’ is used. First, ‘infinite’ is often used to mean maximal, as in God’s infinite power, knowledge, and goodness. Second, many arguments in the philosophy of religion discuss ‘infinite number’ or ‘infinitely many’. It is this second sense of the infinite that I focus on in this post. Here are two recent examples of this second sense of the infinite, from Prosblogion, with select quotes (and links to the full posts):
The best naturalistic alternative to theistic explanations of fine-tuning is a multiverse where there are infinitely many variations on the constants in the laws of nature, generating infinitely many universes, such that in infinitely many of them there is life–and we only observe a universe where there is life. Typical multiverse theories are committed to:
1. For any situation involving a finite number of observers, stochastically independent near-duplicates of that situation are found in infinitely many universes.
I will argue that if (1) is true, then ordinary probabilistic reasoning doesn’t work. But science is based on ordinary probabilistic reasoning, so any scientific argument that leads to the typical multiverse theories is self-defeating.
The argument that if (1) is true, then ordinary probabilistic reasoning doesn’t work is based on a thought experiment. You start by observing Jones roll a fair six-sided indeterministic die, but you don’t see how the die lands. You do, however, engage in ordinary probabilistic reasoning and assign probability 1/6 to his having rolled six.
Suddenly an angel gives you a grand vision: you see a countable infinity of Joneses, each rolling a die in a near-duplicate of the situation you just observed. You notice tiny differences between the Joneses, but each of them is rolling an approximately fair indeterministic die, and you are informed that all of these situations are stochastically independent.
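The ordinary probabilistic reasoning applied to a single Jones can be sketched in simulation, using a large finite population of independent rolls as a stand-in for the angel’s countable infinity (the population size and random seed here are arbitrary choices, introduced only for illustration):

```python
import random

random.seed(0)  # reproducible illustrative run

# A finite stand-in for the countably many independent Joneses:
# each rolls a fair six-sided die once.
n_joneses = 100_000
rolls = [random.randint(1, 6) for _ in range(n_joneses)]

# The frequency of sixes across the independent rolls converges
# (by the law of large numbers) to the single-case probability 1/6.
freq_six = rolls.count(6) / n_joneses
print(freq_six)  # close to 1/6, i.e. about 0.167
```

In the finite case the collective frequency vindicates the 1/6 assignment to the single roll; the puzzle in the text is what happens to this reasoning when the collection is genuinely infinite, since then both the sixes and the non-sixes are infinite in number.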
Suppose Molinism is true. We know the truth values of some Molinist counterfactuals because we know that their antecedent and consequent are true. But we also have reason to believe many other Molinist counterfactuals. Absent further evidence, if P(A|C) is high, and C is an appropriate antecedent for a Molinist counterfactual C→A, that gives me reason to believe C→A. It certainly gives me reason to believe C→A if I know C is actually true; for if I know C is true, then if P(A|C) is high, P(A) will be fairly high as well, and so A is probably true, and hence C→A is probably true. But I also have reason to think C→A is true in cases where C is false. For instance, if Jones is the sort of person likely to accede to my minor requests, then I have reason to believe that were I to make such-and-such a minor request, he’d accede to it, and I have reason to believe the conditional whether or not I make the request (at least assuming Molinism is true so that the conditional has non-trivial truth-value).
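The step from a high P(A|C) plus known C to a fairly high P(A) is just the law of total probability (standard probability algebra, not spelled out in the original):

```latex
P(A) \;=\; P(A \mid C)\,P(C) + P(A \mid \lnot C)\,P(\lnot C) \;\ge\; P(A \mid C)\,P(C)
```

So if P(C) is close to 1 and P(A|C) is high, P(A) is high as well, and with it the probability of the conditional C→A.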
This suggests that if the objective probability of A on C is high, then the objective probability of C→A is also high. So the Molinist conditional C→A, assuming it’s true, doesn’t seem to be a mere brute fact. It is a fact subject to meaningful probabilistic assignments. But if it’s not a mere brute fact, it seems reasonable to look for an explanation of it. What is that explanation?
Well, maybe we have a probabilistic explanation. Maybe the fact that C makes A probable explains why C→A. But this is weird. It seems that probabilistic explanation is a species of causal explanation (with probabilistic causation). But there is surely no causal explanation of why C→A, at least in worlds where C is not true. (What would the cause be? The truthmaker of C? But C is not true and has no truthmaker.)
I’ll leave it as a puzzle: How is a Molinist to explain the connection between P(A|C) and the probability of the conditional C→A?