Infinite multiverse, fine-tuning and probability
March 6, 2013 — 14:38

Author: Alexander Pruss  Category: Uncategorized  Comments: 64

The best naturalistic alternative to theistic explanations of fine-tuning is a multiverse in which there are infinitely many variations on the constants in the laws of nature, generating infinitely many universes, such that in infinitely many of them there is life, and we only observe a universe where there is life. Typical multiverse theories are committed to:

  1. For any situation involving a finite number of observers, stochastically independent near-duplicates of that situation are found in infinitely many universes.

I will argue that if (1) is true, then ordinary probabilistic reasoning doesn’t work. But science is based on ordinary probabilistic reasoning, so any scientific argument that leads to the typical multiverse theories is self-defeating.

The argument that if (1) is true, then ordinary probabilistic reasoning doesn’t work is based on a thought experiment. You start by observing Jones roll a fair six-sided indeterministic die, but you don’t see how the die lands. You do, however, engage in ordinary probabilistic reasoning and assign probability 1/6 to his having rolled six.

Suddenly an angel gives you a grand vision: you see a countable infinity of Joneses, each rolling a die in a near-duplicate of the situation you just observed. You notice tiny differences between the Joneses, but each of them is rolling an approximately fair indeterministic die, and you are informed that all of these situations are stochastically independent.
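A toy simulation (my sketch, not Pruss's; the parameters are illustrative) makes vivid why the infinite case is troublesome. For any finite crowd of independent Joneses, the frequency of sixes settles near 1/6, so frequency reasoning underwrites the 1/6 assignment; in a countably infinite crowd, however, the Joneses who rolled six and those who didn't both form infinite sets, so no frequency over the whole collection is defined.

    import random

    def simulate_joneses(n_joneses: int, seed: int = 0) -> float:
        """Roll one fair six-sided die for each of n_joneses independent
        Joneses and return the frequency of sixes (illustrative only)."""
        rng = random.Random(seed)
        rolls = [rng.randint(1, 6) for _ in range(n_joneses)]
        return sum(1 for r in rolls if r == 6) / n_joneses

    for n in (100, 10_000, 1_000_000):
        print(f"{n:>9} Joneses: frequency of sixes = {simulate_joneses(n):.4f}")
    # The printed frequency approaches 1/6 (about 0.1667) as n grows,
    # by the law of large numbers.

    # In a countably infinite collection, by contrast, the set of Joneses
    # who rolled six and the set who did not are almost surely both
    # infinite, so the ratio "sixes / Joneses" is undefined (infinity
    # over infinity), and frequency reasoning no longer fixes 1/6.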

more…

Physicist Sean Carroll on God and Modern Physics
September 12, 2012 — 15:50

Author: Kenny Pearce  Category: Existence of God Links  Comments: 10

I want to draw Prosblogion readers’ attention to a very interesting paper by Caltech physicist Sean Carroll, “Does the Universe Need God?” (hat tip: ex-apologist). The article, which is to be published in The Blackwell Companion to Science and Christianity, is a model of constructive dialog between philosophy and physics. Carroll engages with the major philosophical arguments under discussion, does not come off as condescending or dismissive, and provides concise and helpful summaries of the relevant physics. The article also shows an admirable degree of epistemic humility, noting that there are many unsolved problems in physics and that our theory of the early universe is not polished and complete, while still arguing that we have enough information to shape our views on origins. The article is quite readable, and would certainly be helpful for students.
Let me make a few remarks on Carroll’s actual arguments and positions. Near the beginning of the article, Carroll quickly summarizes the possible responses to ‘first cause’-type cosmological arguments. It seems to me that he is on firm ground here: it is unclear whether there even is a first moment, and if there is, it is not clear that it even makes sense to ask what caused the state of the universe in that first moment, if we are looking for another cause in the series of causes. Besides (although Carroll does not make this point), classical philosophical theology does not conceive of God as one more cause in the series of causes. So the first cause argument isn’t really going anywhere. I myself think that insofar as the first cause argument is tempting, this is because it gets confused with the argument from contingency: people aren’t really asking what caused the first state of the universe; they are asking why the state of the universe was as it was. And it is quite clear that, if there really is a first moment, the answer to that question could not possibly be another ordinary physical cause: either it has no answer, or it has an answer of a very different sort.
Carroll next offers detailed criticism of the ‘fine-tuning’ argument. The main point Carroll makes here is that the multiverse hypotheses which physicists take seriously are not just introducing enormous numbers of universes as ad hoc posits for the purpose of getting rid of fine-tuning. One sort of multiverse, for instance, falls neatly out of inflationary cosmology, which is a well-verified physical theory. (Brian Greene’s latest book, The Hidden Reality, surveys the range of multiverse theories and the different degrees of evidence for them.) So to say that the multiverse is excessively complex and so should be rejected is to misunderstand the sort of simplicity we should be looking for. Now, Carroll glosses over some distinctions between different multiverse theories here; my understanding, on the basis of Greene’s book, is that the multiverse theories that do the most to eliminate fine-tuning are the least well supported and least widely accepted among those on offer, and that it is true of some of these theories that their main attraction for their adherents is that they get rid of fine-tuning. I’m not, however, convinced that that’s bad: apparent fine-tuning is one of the things physicists try to explain. If a particular multiverse hypothesis provides a simple explanation of a particular apparent fine-tuning, then good for it. And I agree with Carroll on what simplicity should mean here. Leibniz said that God would create the world which was simplest in principles and most varied in phenomena (see, e.g., DM 5). This is the kind of simplicity that matters here: simplicity of the fundamental principles. If they generate many and varied phenomena (e.g. an enormous variety of universes), this is no strike against them. Again, point Carroll.
Near the end of the article, Carroll does come to discuss the argument from contingency. Unfortunately, he does not, in my view, take it as seriously as it deserves. He essentially says that, although we ought always to look for explanations of things within the universe, there can be no such explanation of the universe as a whole or of its most basic laws. In The Principle of Sufficient Reason: A Reassessment, Alexander Pruss makes the case that the PSR cannot be restricted in any non-ad hoc way without undermining the assumptions of explainability made in ordinary scientific practice. Carroll ultimately simply pronounces that “There is no reason, within anything we currently understand about the ultimate structure of reality, to think of the existence and persistence and regularity of the universe as things that require external explanation.” He doesn’t give an adequate account of exactly what restrictions he is placing on explainability, or of how they are justified. He seems to be supposing that which things we take to be in need of explanation depends on our physical theory. The trouble is, our practices with respect to explanation must be at least partly a priori in character: we have to start looking for explanations before we’ve got any explanations. Furthermore, Carroll’s example, that in modern physics there is no need for Aristotle’s Prime Mover because of the Law of Inertia, neglects the fact that an appeal to the Law of Inertia is itself an explanation of why objects continue in their state of motion. It is not that we’ve discovered that these things don’t need explanation, but rather that we’ve discovered that the correct explanation is of a very different sort from what Aristotle had in mind.
The argument from contingency, however, takes God outside the realm of physics. God here provides a different kind of explanation to a different kind of problem. This, to my mind, is one of the key reasons why the argument from contingency and the ontological argument are far more credible than either the first cause argument or the fine-tuning argument. That theism is not a credible physical theory is transparently obvious. Whether it is a credible metaphysical theory is another question entirely. I also note that the standards of credibility for metaphysical theories are quite lax compared to those for physical theories. Might theism enjoy the same level of (objective) support as quantum field theory? Not a chance. Might it enjoy the same level of (objective) support as (say) our best theories of universals? On this latter point I would say: it can, and it does.

A simple design argument
December 1, 2010 — 16:35

Author: Alexander Pruss  Category: Existence of God  Comments: 43
  1. P(the universe has low entropy | naturalism) is extremely tiny.
  2. P(the universe has low entropy | theism) is not very small.
  3. The universe has low entropy.
  4. Therefore, the low entropy of the universe strongly confirms theism over naturalism.

Low-entropy states occupy only a minuscule proportion of the space of possible states, so on naturalism a low-entropy universe has a very low probability. So, (1) is true. The universe at the Big Bang had a very surprisingly low entropy. It still has a low entropy, though the entropy has gone up. So, (3) is true. What about (2)? This follows from the fact that there is significant value in a world that has low entropy, together with the fact that, given theism, God is not unlikely to produce what is significantly valuable. At least locally, low entropy is needed for the existence of life, and we need uniformity between our local area and the rest of the universe if we are to have scientific knowledge of the universe, and such knowledge is valuable. So (2) is true. The rest is Bayes.
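To spell out “the rest is Bayes” in odds form (a standard rendering; the numbers below are illustrative placeholders of mine, not Pruss’s):

    \frac{P(T \mid E)}{P(N \mid E)} \;=\; \frac{P(E \mid T)}{P(E \mid N)} \cdot \frac{P(T)}{P(N)}

where E is the evidence that the universe has low entropy, T is theism, and N is naturalism. Premises (1) and (2) say that the likelihood ratio P(E|T)/P(E|N) is enormous. If, hypothetically, P(E|N) were 10^-10 while P(E|T) were 1/2, the evidence would multiply the prior odds for theism by a factor of 5 × 10^9. That is the sense in which (4) says the low entropy of the universe strongly confirms theism over naturalism.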

more…

The Normalizability Problem problem
October 25, 2010 — 10:38

Author: Ted Poston  Category: Existence of God General  Comments: 5

Aficionados of the fine-tuning argument will be familiar with the normalizability problem presented by the McGrews and Vestrup in their (2001) Mind article. The normalizability problem is that one cannot make sense of probabilities over an infinite space of possibilities in which each possibility is equi-probable. Suppose, for illustration, that there is a lottery on the natural numbers. Each natural number might win, but no natural number has any greater chance of winning than any other. If we assign each natural number some very small finite chance of winning, then the probabilities of the possibilities sum to more than 1 (which is to say that the space of possibilities isn’t normalizable). Since talk of probabilities makes sense only if the probability of the total outcome space equals 1, we can’t make sense of probabilities in this case.

One move here is to deny countable additivity, the claim that the probability of a countably infinite union of mutually exclusive outcomes is the sum of the probabilities of those outcomes. Another move is to introduce infinitesimals, which give each outcome a positive probability without denying countable additivity. Yet another move is to hold that the space of possibilities is uneven in terms of probabilities: the distribution over the infinite range is curved rather than flat. I don’t want to talk about any of these moves. Instead I want to focus on a curious result that arises from the normalizability problem. Hence the title of the post: the Normalizability Problem problem (or, the NP problem).

To begin, let’s step back a bit and ask why the fine-tuning argument is a fairly recent newcomer in the catalog of arguments for theism. The basic idea is that prior to the scientific developments of the early 20th century, the universe as a whole was conceived too vaguely for any arguments from its nature to get going. One couldn’t sensibly talk about specific properties of the universe as a whole, and thus there was no sense to be made of the universe as a whole being fine-tuned. But all that changed with the discovery that the universe is expanding, and that its initial state must have had very specific properties and was governed by specific laws with various mathematical constants. It thus became appropriate to ask why the initial conditions of the universe and the constants of the laws had the specific values they in fact had. These developments gave rise to the fine-tuning argument, and also to considerable concern among physicists to find a more fundamental theory that would remove some of the improbability of just this ensemble of conditions and constants occurring. In short, it looks like there was a significant change in our epistemic position vis-à-vis the nature of the universe as a whole: prior to the scientific developments of the 20th century, the nature of the universe as a whole was too vague to give rise to probability intuitions, but after those developments it no longer was.
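Before stating the NP problem, it may help to render the lottery illustration formally (my notation, not the McGrews and Vestrup’s). For a uniform lottery on the natural numbers there are only two options, and both fail:

    \text{If } P(\{n\}) = \varepsilon > 0 \text{ for every } n, \text{ then } P(\mathbb{N}) = \sum_{n=1}^{\infty} \varepsilon = \infty \neq 1.

    \text{If } P(\{n\}) = 0 \text{ for every } n, \text{ then, by countable additivity, } P(\mathbb{N}) = \sum_{n=1}^{\infty} 0 = 0 \neq 1.

Either way the total probability fails to equal 1, so there is no uniform, countably additive, real-valued probability measure on the natural numbers. Each of the three moves above gives up one piece of this setup: additivity, the real-valued range of the measure, or uniformity.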
Now for the NP problem: suppose an 18th-century mathematician, advanced for his age, presented the following a priori argument that the nature of the universe could never be used to evoke probability considerations. Either the universe as a whole is too vague to be the proper object of thought or it’s not. If it’s too vague, then the nature of the universe can’t be used in probability arguments. But if the universe isn’t too vague, then we must have some grip on the universe’s having specific conditions and/or laws with various fundamental constants. In this latter case we still can’t appeal to the nature of the universe to evoke probability considerations, because the space of possibilities for the conditions and constants is infinite (and seems to be equi-probable). So no matter how you look at it, the universe as a whole can’t be used to evoke probability considerations. That’s the NP problem. It looks like the normalizability problem is simply too strong, because it provides the basis for an a priori argument that the nature of the universe can’t be used to evoke probability considerations.
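Schematically, the mathematician’s argument is a constructive dilemma (my reconstruction, with V for “the universe as a whole is too vague to be a proper object of thought” and Q for “the nature of the universe can be used to evoke probability considerations”):

    1.\; V \lor \neg V
    2.\; V \to \neg Q \quad \text{(vagueness blocks probability arguments)}
    3.\; \neg V \to \neg Q \quad \text{(a definite grip on conditions and constants yields an infinite, apparently equi-probable, non-normalizable space)}
    4.\; \therefore\; \neg Q

Since each premise is available a priori, the conclusion conflicts with the apparent improvement in our epistemic position after the 20th-century developments; that is the sense in which the normalizability problem seems too strong.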

Modern Cosmology and Theology
October 8, 2010 — 21:40

Author: Kenny Pearce  Category: Existence of God  Comments: 14

At the end of his discussion of fine-tuning arguments, Sobel briefly, and somewhat indirectly, discusses issues arising from attempts to combine theism with modern cosmology (pp. 285-287). In particular, many cosmologists now believe that the fundamental constants of nature were set by quantum fluctuations in the early universe. Stephen Hawking has suggested that such fluctuations might be very likely to produce a world like ours. If correct, the thought goes, this would undermine the fine-tuning argument. However, it would also do something more: if the laws of nature make it very likely, but not certain, that a world like ours, capable of supporting life, will come into being, this is a fact that theists will have difficulty explaining. Why did God make probabilistic laws? Would God have intervened if the early fluctuations had gone otherwise? If so, then why didn’t he set things up so as to ensure that they came off right without intervention? If not, why did he decide to take this risk? In short, can we make sense of the idea of a god – a literal god, not a figure of speech – who ‘plays dice’?
Here I think Sobel, following Quentin Smith, has put his finger on what is perhaps the deepest and most interesting question at the intersection of theology and modern science. (Certainly it is deeper and more interesting than any issue raised by evolution.) However, Sobel fails to mention any possible solution to it. I find it hard to believe that he can’t think of any possible solution: there’s one right under his nose. One of the main historical figures Sobel has been discussing is Hume, and it seems that if we can develop a Humean (descriptive) theory of natural law which is able to deal with probabilistic laws, then we will have solved, or at least greatly mitigated, the problem. (Of course, this is no easy task!) On such a descriptivist reading, it is possible that although the probabilistic laws of quantum mechanics are the laws of our world, God chooses how each wave-function collapses, and chooses for reasons. All that is required is that those reasons do not lead to regularities of the sort that could displace the quantum laws as the laws of our world. For instance, we might make it a principle of our descriptive theory of laws that physical laws are not teleological. Thus if God chooses that the wave-function collapse a certain way in order that intelligent beings later arise, this does not threaten to replace quantum mechanics with a different set of physical laws.
Note that this is not a ‘hidden variable’ theory: hidden variable theories say that there is some more fundamental, deterministic physical law behind quantum mechanics, whereas on this view quantum mechanics (or we should rather say: whatever theory of quantum gravity physicists eventually work out, which will no doubt still be indeterministic) is the most fundamental physical law; it’s just that there are deeper explanations than physical laws.
This is just one solution. There are probably others. But I was impressed that Sobel pointed out the difficulty, because I think that it is one of the deepest and most interesting difficulties contemporary theists face, and it is far too often ignored.
[Cross-posted at blog.kennypearce.net]

Juhl: Fine-tuning is not surprising
September 15, 2006 — 17:18

Author: Trent Dougherty  Category: Existence of God  Comments: 3

The October issue of Analysis is now available online. It has an article, “Fine-tuning is not surprising,” by one Cory Juhl. I’d never heard of him, so I looked him up on the UT department website; he’s got good training in HPS.
I have only perused the article, but I couldn’t find anything very original in it. Some of it seems like a less precise rehash of some of Brad Monton’s points discussed previously on the blog. One complaint that I very much share is the lack of any semantics offered for the kind of probability which is supposed to be at work in the argument. I’ve been slaving away at range theories of probability and point-set topology and re-reading Carnap trying to come up with something, but it’s very difficult. Still, I’m less worried about logical probability measures over infinities than I was when I started.
Though I’m personally vexed by this, I’m not sure how big a problem it is for the argument from fine-tuning. I can see one reasonably sticking to the intuitive judgements without being in possession of the mechanics. I’ve discussed this problem with Richard Swinburne and he’s certainly not interested in it. This is what he said in one email: “This notion of evidential probability, like, for example, the notion of cause, is so basic that any philosopher’s attempt to give a precise definition of it in other terms is unlikely to capture its nature adequately.”
He thinks the judgements themselves are intuitively correct. One might say that the intuitive judgements are in fact the standards according to which the theories should be judged (or at least relatively fixed points in a reflective equilibrium). A philosopher wants more, of course, but we might have to live without it.
I’ll be presenting my own criticisms of the fine-tuning argument at the upcoming SLU Religious Epistemology Conference which I’m very much looking forward to.