# The Normalizability Problem problem

Now for the NP problem: suppose some 18th-century mathematician, advanced for his age, presented the following a priori argument that the nature of the universe could never be used to evoke probability considerations. Either the universe as a whole is too vague to be a proper object of thought or it is not. If it's too vague, then the nature of the universe can't be used in probability arguments. But if the universe isn't too vague, then we must have some grip on the universe's having specific conditions and/or laws with various fundamental constants. In this latter case, however, we still can't appeal to the nature of the universe to evoke probability considerations, because the space of possibilities for the conditions and constants is infinite (and the possibilities seem to be equiprobable). So no matter how you look at it, the universe as a whole can't be used to evoke probability considerations. That's the NP problem: the normalizability worry looks simply too strong, because it provides the basis for an a priori argument that the nature of the universe can never be used to evoke probability considerations.

I don't know that the a priori argument works. For it could have turned out that the constants in the laws of nature, instead of taking values in an unbounded domain like the real numbers or the positive reals, take values in a bounded domain that has a natural probability measure. For instance, it could have turned out that the fundamental constants in the laws of nature are all angles, and hence have a natural uniform distribution on the interval from 0 to 2 pi.

So the NPP essentially says that we don't (cannot?) even have enough knowledge of possible worlds to state the NP, let alone the fine-tuning argument? Very interesting point.

> For instance, it could have turned out that the fundamental constants in the laws of nature are all angles, and hence have a natural uniform distribution on the interval from 0 to 2 pi.

Any fundamental constant can be re-expressed as an angle. For example, you could take C' = arctan(C) and replace the descriptor C with tan(C') in your equations. Artificial to human eyes, but perfectly valid.

I'm not sure why this would make the uniform distribution any more obvious as a choice of measure. In my example, C' could never equal pi/2 (this would imply that C was infinite). So the uniform distribution is clearly not appropriate in this case.
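Lifewish's reparametrization point can be made concrete in a short numerical sketch (my illustration, not part of the original exchange; it assumes NumPy): a uniform prior on the angle C' = arctan(C) does not translate into anything uniform-looking on C itself. It induces the heavy-tailed standard Cauchy distribution on C, so "uniform" is not a parametrization-independent notion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform prior on the angle C' over (-pi/2, pi/2); this is the
# angle-valued reparametrization C' = arctan(C).
c_prime = rng.uniform(-np.pi / 2, np.pi / 2, size=100_000)

# Transform back: the induced distribution on C = tan(C') is the
# standard Cauchy distribution, which has no mean and heavy tails.
c = np.tan(c_prime)

# Under the uniform-angle prior, P(|C| > 1) = P(|C'| > pi/4) = 1/2,
# so half the prior mass sits outside the "moderate" range [-1, 1].
frac_large = float(np.mean(np.abs(c) > 1.0))
print(round(frac_large, 2))
```

A uniform distribution on the angle thus concentrates no mass anywhere in particular on the original scale, which is exactly why choosing the "natural" variable matters.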

Ted:

Angle-valued variables do occur a fair amount in physics. For instance, phase differences between two oscillators of the same frequency.

It could even turn out that the fundamental constants are a finite number of parameters specifying some algebraic structure that only makes sense for finitely many parameter values.

Lifewish:

We can reparametrize in all sorts of ways. But some parametrizations are more natural than others. If we cannot say that some parametrizations are more natural than others, then by the same token we cannot solve the curve-fitting problem, and we have undercut science.

What parametrization is more natural may depend on all sorts of things--what sorts of symmetries are relevant in regard to this parameter, what the relevant equations of motion are, etc.
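The analogy with curve fitting can be illustrated with a toy computation (my sketch, assuming NumPy; the data and polynomial degrees are made up for illustration): several parametrizations are compatible with the same observations, and the highest-degree polynomial fits the sample points exactly, yet we treat the low-degree description as the natural one. Denying that some parametrizations are more natural would leave no grounds for that preference.

```python
import numpy as np

# Nearly linear data with a little measurement noise.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 5)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.01, size=x.size)

# A degree-4 polynomial threads all five points essentially exactly,
# but the simpler degree-1 description is the one we trust.
max_resid = {}
for deg in (1, 4):
    coeffs = np.polyfit(x, y, deg)
    max_resid[deg] = float(np.max(np.abs(y - np.polyval(coeffs, x))))
print(max_resid)
```

Both parametrizations "save the phenomena"; only a judgment of naturalness lets us prefer the line.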

Alexander: It makes no difference at all if the parameter space is bounded or unbounded. Any specific value (of, say, an angle) is a set of measure zero in the parameter space. So you still can't assign any nonzero probability to any specific value.

The only way around this would be if the parameter space were discrete: if, for example, only integer values in some finite range were possible. But there are very few cases in physics where parameters are naturally restricted to integer values.
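Alexander's contrast between continuous and discrete parameter spaces can be checked numerically (my sketch, assuming NumPy): under a continuous uniform distribution, the probability mass in a window around any single value shrinks in proportion to the window's width, heading to zero; a discrete space of n values instead hands each value a genuine probability of 1/n.

```python
import numpy as np
from fractions import Fraction

# Continuous case: uniform on [0, 2*pi). The mass in a window of
# half-width eps around pi is about eps/pi, vanishing as eps -> 0,
# so the exact value pi itself has probability zero.
rng = np.random.default_rng(1)
samples = rng.uniform(0, 2 * np.pi, size=100_000)
fracs = [float(np.mean(np.abs(samples - np.pi) < eps))
         for eps in (0.1, 0.01, 0.001)]
print(fracs)  # roughly eps/pi for each eps

# Discrete case: if only the integers 0..9 were possible, each value
# would get a genuine nonzero probability.
p_discrete = Fraction(1, 10)
print(p_discrete)
```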

Blog: Prosblogion