The best naturalistic alternative to theistic explanations of fine-tuning is a multiverse where there are infinitely many variations on the constants in the laws of nature, generating infinitely many universes, such that in infinitely many of them there is life, and we only observe a universe where there is life. Typical multiverse theories are committed to:
(1) For any situation involving a finite number of observers, stochastically independent near-duplicates of that situation are found in infinitely many universes.
I will argue that if (1) is true, then ordinary probabilistic reasoning doesn’t work. But science is based on ordinary probabilistic reasoning, so any scientific argument that leads to the typical multiverse theories is self-defeating.
The argument that if (1) is true, then ordinary probabilistic reasoning doesn’t work is based on a thought experiment. You start by observing Jones roll a fair six-sided indeterministic die, but you don’t see how the die lands. You do, however, engage in ordinary probabilistic reasoning and assign probability 1/6 to his having rolled six.
Suddenly an angel gives you a grand vision: you see a countable infinity of Joneses, each rolling a die in a near-duplicate of the situation you just observed. You notice tiny differences between the Joneses, but each of them is rolling an approximately fair indeterministic die, and you are informed that all of these situations are stochastically independent.
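As an aside, here is a minimal simulation of what ordinary probabilistic reasoning expects from a large but finite version of the vision. The ensemble size, the ±1% per-Jones biases, and the function names below are my own illustrative assumptions, not part of the thought experiment: roughly 1/6 of the independent, approximately fair rolls come up six, while the credence about the one roll you actually watched stays at 1/6.

```python
import random

def roll_approximately_fair_die(bias: float) -> int:
    """Roll a six-sided indeterministic die whose chance of six is 1/6 + bias.

    The small `bias` stands in for the tiny differences between the Joneses.
    """
    if random.random() < 1/6 + bias:
        return 6
    return random.randint(1, 5)  # the non-six faces, treated as interchangeable here

def fraction_of_sixes(n: int = 1_000_000) -> float:
    """Simulate n stochastically independent Joneses; return the fraction who roll six."""
    sixes = 0
    for _ in range(n):
        bias = random.uniform(-0.01, 0.01)  # tiny, Jones-specific difference (assumed)
        if roll_approximately_fair_die(bias) == 6:
            sixes += 1
    return sixes / n

if __name__ == "__main__":
    print("Credence that your Jones rolled six (ordinary reasoning):", 1/6)
    print("Fraction of sixes across the independent ensemble:", fraction_of_sixes())
```

In the countably infinite case, given independence and chances of six bounded away from 0 and 1, it is a standard result that with probability 1 infinitely many of the Joneses roll six and infinitely many do not.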
(1) P(the universe has low entropy | naturalism) is extremely tiny.
(2) P(the universe has low entropy | theism) is not very small.
(3) The universe has low entropy.
(4) Therefore, the low entropy of the universe strongly confirms theism over naturalism.
Low-entropy states correspond to only a minuscule fraction of the possible states of the universe, so, absent design, they have very low probability. So, (1) is true. The universe, at the Big Bang, had a surprisingly low entropy. It still has a low entropy, though the entropy has gone up. So, (3) is true. What about (2)? This follows from the fact that there is significant value in a world that has low entropy, and, given theism, God is not unlikely to produce what is significantly valuable. Low entropy is needed, at least locally, for the existence of life, and we need uniformity between our local area and the rest of the universe if we are to have scientific knowledge of the universe, and such knowledge is valuable. So (2) is true. The rest is Bayes.
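To spell out "the rest is Bayes" in the odds form (my notation, not the post's: E = the universe has low entropy, T = theism, N = naturalism):

```latex
% Odds form of Bayes' theorem: posterior odds = likelihood ratio x prior odds.
\[
  \frac{P(T \mid E)}{P(N \mid E)}
    \;=\; \frac{P(E \mid T)}{P(E \mid N)} \cdot \frac{P(T)}{P(N)}
\]
```

By (1) the likelihood P(E | N) is extremely tiny, and by (2) P(E | T) is not very small, so the likelihood ratio is enormous; given (3) that E obtains, the evidence shifts the odds strongly toward theism, which is conclusion (4), unless one starts with correspondingly tiny prior odds for theism.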
Aficionados of the fine-tuning argument will be familiar with the normalizability problem presented by the McGrews and Vestrup in their (2001) Mind article. The normalizability problem is that one cannot make sense of probabilities within an infinite space of possibilities in which each possibility is equi-probable. Suppose, for illustration, that there is a lottery on the natural numbers. For each natural number it's possible that it wins, but no natural number has any greater chance of winning than any other. If we assign each natural number some very small finite chance of winning, then the probabilities sum to more than 1 (indeed, to infinity), which is to say that the space of possibilities isn't normalizable. Since talk of probabilities makes sense only if the probability of the total outcome space equals 1, we can't make sense of probabilities in this case. One move here is to deny countable additivity, the principle that the probabilities of countably many mutually exclusive outcomes sum to the probability of their union. Another move is to introduce infinitesimals to recover a positive probability without denying countable additivity. Yet another move is to hold that the probabilities are spread unevenly over the space of possibilities: the idea is that the distribution over the infinite range is curved rather than flat. I don't want to talk about any of these moves. Instead I want to focus on a curious result that arises from the normalizability problem. Hence the title of the post: the Normalizability Problem problem (or, the NP problem).

To begin, let's step back a bit and ask why the fine-tuning argument is a fairly recent newcomer in the catalog of arguments for theism. The basic idea is that, prior to the scientific developments of the early 20th century, the universe as a whole was conceived of too vaguely to ground any arguments from its nature. One couldn't sensibly talk about specific properties of the universe as a whole, and thus there was no sense to be made of the universe as a whole being fine-tuned. But all that changed with the discovery that the universe is expanding and that its initial state must have had very specific properties and was governed by specific laws with various mathematical constants. Thus it became appropriate to ask why the initial conditions of the universe and the constants of the laws had the specific values they in fact had. These developments gave rise to the fine-tuning argument, and also to a great deal of concern among physicists to find a more fundamental theory that would remove some of the improbability of just this ensemble of conditions and constants occurring. In short, it looks like there was a significant change in our epistemic position vis-à-vis the nature of the universe as a whole. Prior to the scientific developments of the 20th century the nature of the universe as a whole was too vague to give rise to probability intuitions, but after those developments it no longer was.
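To put the lottery illustration above in a minimal formal form (a standard textbook observation, not the McGrews and Vestrup's own formulation): suppose a countably additive, real-valued probability measure P assigned every natural number the same weight c. Then:

```latex
% A uniform lottery on the naturals: every number n gets the same weight c.
\[
  P(\mathbb{N}) \;=\; \sum_{n=1}^{\infty} P(\{n\})
                \;=\; \sum_{n=1}^{\infty} c
                \;=\;
  \begin{cases}
    \infty & \text{if } c > 0,\\
    0      & \text{if } c = 0.
  \end{cases}
\]
```

Either way the total probability is not 1, so no real-valued, countably additive measure treats all the natural numbers as equi-probable; hence the menu of moves listed above.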
Now for the NP problem: suppose some 18th-century mathematician, advanced for his age, had presented the following a priori argument that the nature of the universe could never be used to evoke probability considerations. Either the universe as a whole is too vague to be a proper object of thought or it's not. If it's too vague, then the nature of the universe can't be used in probability arguments. But if the universe isn't too vague, then we must have some grip on the universe having specific conditions and/or laws with various fundamental constants. In this latter case we still can't appeal to the nature of the universe to evoke probability considerations, because the space of possibilities for the conditions and constants is infinite (and seems to be equi-probable). So no matter how you look at it, the universe as a whole can't be used to evoke probability considerations. That's the NP problem: the normalizability problem looks like it's simply too strong, because it provides the basis for an a priori argument that the nature of the universe can never be used to evoke probability considerations.
At the end of his discussion of fine-tuning arguments, Sobel briefly, and somewhat indirectly, discusses issues arising from attempts to combine theism with modern cosmology (pp. 285-287). In particular, many cosmologists now believe that the fundamental constants of nature were set by quantum fluctuations in the early universe. Stephen Hawking has suggested that such fluctuations might be very likely to produce a world like ours. If correct, the thought goes, this would undermine the fine-tuning argument. However, it would also do something more: if the laws of nature make it very likely, but not certain, that a world like ours, capable of supporting life, will come into being, this is a fact that theists will have difficulty explaining. Why did God make probabilistic laws? Would God have intervened if the early fluctuations had gone otherwise? If so, then why didn’t he set things up so as to ensure that they came off right without intervention? If not, why did he decide to take this risk? In short, can we make sense of the idea of a god – a literal god, not a figure of speech – who ‘plays dice’?
Here I think Sobel, following Quentin Smith, has put his finger on what is perhaps the deepest and most interesting question at the intersection of theology and modern science. (Certainly it is deeper and more interesting than any issue raised by evolution.) However, Sobel fails to mention any possible solution to it. I find it hard to believe that he can’t think of any possible solution: there’s one right under his nose. One of the main historical figures Sobel has been discussing is Hume, and it seems that if we can develop a Humean (descriptive) theory of natural law which is able to deal with probabilistic laws, then we will have solved, or at least greatly mitigated, the problem. (Of course, this is no easy task!) On such a descriptivist reading, it is possible that although the probabilistic laws of quantum mechanics are the laws of our world, God chooses how each wave-function collapses, and chooses for reasons. All that is required is that those reasons do not lead to regularities of the sort that could displace the quantum laws as the laws of our world. For instance, we might make it a principle of our descriptive theory of laws that physical laws are not teleological. Thus if God chooses that the wave-function collapse a certain way in order that intelligent beings later arise, this does not threaten to replace quantum mechanics with a different set of physical laws.
Note that this is not a ‘hidden variable’ theory: hidden variable theories say that there is some more fundamental, deterministic physical law behind quantum mechanics, whereas on this view quantum mechanics (or we should rather say: whatever theory of quantum gravity physicists eventually work out, which will no doubt still be indeterministic) is the most fundamental physical law; it’s just that there are deeper explanations than physical laws.
This is just one solution. There are probably others. But I was impressed that Sobel pointed out the difficulty, because I think that it is one of the deepest and most interesting difficulties contemporary theists face, and it is far too often ignored.
[Cross-posted at blog.kennypearce.net]
The October issue of Analysis is now available online. It has an article, “Fine-tuning is not surprising,” by one Cory Juhl. I'd never heard of him, so I looked him up on the UT department website, and he's got good training in HPS.
I've only perused the article, but I couldn't find anything very original in it. Some of it seems like a less precise rehash of some of Brad Monton's points discussed previously on the blog. One complaint that I very much share is the lack of any semantics offered for the kind of probability that is supposed to be at work in the argument. I've been slaving away at range theories of probability and point-set topology, and re-reading Carnap, trying to come up with something, but it's very difficult. Still, I'm less worried about logical probability measures over infinities than I was when I started.
Though I’m personally vexed by this, I’m not sure how big a problem it is for the argument from fine-tuning. I can see one sticking to the intuitive judgements reasonably without being in possession of the mechanics. I’ve discussed this problem with Richard Swinburne and he’s certainly not interested in it. This is what he said in one email: “This notion of evidential probability, like – for example – the notion of cause, is so basic that any philosopher’s attempt to give a precise definition of it in other terms is unlikely to capture its nature adequately.”
He thinks the judgements themselves are intuitively correct. One might say that the intuitive judgements are in fact the standards according to which the theories should be judged (or at least relatively fixed points in a reflective equilibrium). A philosopher wants more, of course, but we might have to live without it.
I’ll be presenting my own criticisms of the fine-tuning argument at the upcoming SLU Religious Epistemology Conference, which I’m very much looking forward to.