The Normalizability Problem problem
October 25, 2010 — 10:38

Author: Ted Poston  Category: Existence of God, General  Comments: 5

Aficionados of the fine-tuning argument will be familiar with the normalizability problem presented by the McGrews and Vestrup in their (2001) Mind article. The normalizability problem is that one cannot make sense of probabilities over an infinite space of possibilities in which each possibility is equi-probable. Suppose, for illustration, that there is a lottery on the natural numbers. Each natural number might win, but no natural number has any greater chance of winning than any other. If we assign each natural number some very small finite chance of winning, then the probabilities over the whole space sum to more than 1 (indeed, they diverge), which is to say that the space of possibilities isn’t normalizable. Because talk of probabilities makes sense only if the total outcome space sums to 1, we can’t make sense of probabilities in this case.

One move here is to deny countable additivity, the claim that the probability of a countably infinite collection of disjoint possibilities is the sum of their individual probabilities. Another move is to introduce infinitesimals to recover a positive probability without denying countable additivity. Yet another move is to hold that the space of possibilities is uneven in terms of probabilities; the basic idea is that the probability distribution over the infinite range is curved rather than flat. I don’t want to talk about any of these moves. Instead I want to focus on a curious result that arises from the normalizability problem. Hence the title of the post: the Normalizability Problem problem (or, the NP problem).

To begin, let’s step back a bit and ask why the fine-tuning argument is a fairly recent newcomer in the catalog of arguments for theism. The basic idea is that prior to the scientific developments of the early 20th century, the universe as a whole was conceived to be too vague to support any arguments from its nature. One couldn’t sensibly talk about specific properties of the universe as a whole, and thus there was no sense to be made of the universe as a whole being fine-tuned. But all that changed with the discovery that the universe is expanding and that the initial state of the universe must have had very specific properties and was governed by specific laws with various mathematical constants. Thus, it became appropriate to ask why the initial conditions of the universe and the constants of the laws had the specific values they in fact had. These developments gave rise to the fine-tuning argument, and also to considerable effort among physicists to find a more fundamental theory that would remove some of the improbability of just this ensemble of conditions and constants occurring. In short, it looks like there was a significant change in our epistemic position vis-à-vis the nature of the universe as a whole. Prior to the scientific developments of the 20th century, the nature of the universe as a whole was too vague to give rise to probability intuitions; after those developments, it no longer was.
Now for the NP problem: suppose some 18th-century mathematician, advanced for his age, presented the following a priori argument that the nature of the universe could never be used to evoke probability considerations. Either the universe as a whole is too vague to be the proper object of thought or it’s not. If it’s too vague, then the nature of the universe can’t be used in probability arguments. But if the universe isn’t too vague, then we must have some grip on the universe having specific conditions and/or laws with various fundamental constants. Yet in this latter case we still can’t appeal to the nature of the universe to evoke probability considerations, because the space of possibilities for the conditions and constants is infinite (and seems to be equi-probable). So no matter how you look at it, the universe as a whole can’t be used to evoke probability considerations. That’s the NP problem. The normalizability problem, it seems, is simply too strong, because it provides the basis for an a priori argument that the nature of the universe can’t be used to evoke probability considerations.
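To put the lottery illustration above in symbols (just a minimal sketch of the point already made, nothing more): a uniform, countably additive probability assignment over the natural numbers is impossible, since

$$P(\{n\}) = \varepsilon > 0 \ \text{for all } n \ \Rightarrow\ P(\mathbb{N}) = \sum_{n=1}^{\infty} \varepsilon = \infty, \qquad P(\{n\}) = 0 \ \text{for all } n \ \Rightarrow\ P(\mathbb{N}) = \sum_{n=1}^{\infty} 0 = 0,$$

and neither total is 1. Each of the moves mentioned above (dropping countable additivity, infinitesimals, a non-uniform distribution) amounts to giving up one of the assumptions in this little derivation.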

Comments:
  • I don’t know that the a priori argument works. For it could have turned out that the constants in the laws of nature, instead of taking values in an unbounded domain (like the real numbers or the positive reals), take values in a bounded domain that has a natural probability measure. For instance, it could have turned out that the fundamental constants in the laws of nature are all angles, and hence have a natural uniform distribution on the interval from 0 to 2 pi.
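    A minimal sketch of why boundedness helps here, just to make the arithmetic explicit: on a bounded domain such as $[0, 2\pi)$ the constant density does normalize, since $\int_0^{2\pi} \frac{1}{2\pi}\, d\theta = 1$, whereas no constant density on the whole real line integrates to 1.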

    October 26, 2010 — 10:35
  • Ted Poston

    Interesting idea. Is there any precedent for values of this kind in the fundamental laws of nature? One concern is that if the values of the constants in the *fundamental* laws fall within a bounded domain, then they are not really fundamental. A sign of a law’s fundamentality seems to be unlimited variation in its constants. But, in any case, the normalizability problem would still show that *if* the constants of the laws take values in an unbounded domain, then the nature of the universe can’t be used to evoke probability intuitions. That’s still *too* strong.

    October 26, 2010 — 13:16
  • So the NPP essentially says that we don’t (cannot?) even have enough knowledge of possible worlds to state the NP, let alone the fine-tuning argument? Very interesting point.
    “For instance, it could have turned out that the fundamental constants in the laws of nature are all angles, and hence have a natural uniform distribution on the interval from 0 to 2 pi.”
    Any fundamental constant can be re-expressed as an angle. For example, you could take C’ = arctan(C) and replace the descriptor C with tan(C’) in your equations. Artificial to human eyes, but perfectly valid.
    I’m not sure why this would make the uniform distribution any more obvious as a choice of measure. In my example, C’ could never equal pi/2 (this would imply that C was infinite). So the uniform distribution is clearly not appropriate in this case.
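    A quick numerical way to see this (a rough sketch; the sampling setup is only illustrative): if the reparametrized constant C’ is taken to be uniform on (-pi/2, pi/2), then C = tan(C’) follows the heavy-tailed standard Cauchy distribution, so a measure that looks uniform in one parametrization looks nothing like uniform in the other.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Sketch: treat the reparametrized "angle" C' as uniform on (-pi/2, pi/2)...
    c_prime = rng.uniform(-np.pi / 2, np.pi / 2, size=1_000_000)

    # ...then the original constant C = tan(C') is standard-Cauchy distributed.
    c = np.tan(c_prime)

    # Uniformity in C' does not carry over to C: the induced distribution on C
    # is heavy-tailed, with extreme values occurring routinely.
    print("median of C:           ", np.median(c))             # close to 0
    print("fraction with |C| > 10:", np.mean(np.abs(c) > 10))  # about 0.063
    ```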

    October 27, 2010 — 2:18
  • Ted:
    Angle-valued variables do occur a fair amount in physics. For instance, phase differences between two oscillators of the same frequency.
    It could even turn out that the fundamental constants are some finite number of parameters, specifying some algebraic structure that only makes sense for a finite number of parameters.
    Lifewish:
    We can reparametrize in all sorts of ways. But some parametrizations are more natural than others. If we cannot say that some parametrizations are more natural than others, then by the same token we cannot solve the curve-fitting problem, and we have undercut science.
    What parametrization is more natural may depend on all sorts of things–what sorts of symmetries are relevant in regard to this parameter, what the relevant equations of motion are, etc.

    October 27, 2010 — 15:26
  • https://www.google.com/accounts/o8/id?id=AItOawmMsLtDsoOIyw_jwD7FQegptCii1IrHmg8

    Alexander: It makes no difference at all if the parameter space is bounded or unbounded. Any specific value (of, say, an angle) is a set of measure zero in the parameter space. So you still can’t assign any finite (non-zero) probability to any specific value.
    The only way around this would be if the parameter space were discrete: if, for example, only integer values in some finite range were possible. But there are very few cases in physics where parameters are naturally restricted to integer values.
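    To spell out the measure-zero point (a standard fact, stated only for completeness): if the parameter has any continuous distribution with density f, then for every particular value c,

    $$P(X = c) = \int_c^c f(x)\, dx = 0,$$

    so positive probability attaches only to sets of positive measure (intervals, for instance), never to an exact value, whether the domain is bounded or not.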

    October 28, 2010 — 11:30