# Pascal's wager for universalists

Let Egalitarian Universalism (EU) be the doctrine that God exists and gives everyone infinite happiness, and that the quantity of this happiness is the same for everyone. The traditional formulation of Pascal's Wager obviously does not work in the case of the God of EU. What is surprising, however, is that one can make Pascal's Wager work even given the God of EU if one thinks that Bayesian decision theory, and hence one-boxing, is the right way to go in the case of Newcomb's Paradox with a not quite perfect predictor (i.e., Nozick's original formulation).

Here is how the trick works. Suppose that the only two epistemically available options are EU and atheism, and I need to decide whether or not to believe in God. Given Bayesian decision theory, I should choose whether to believe based on the conditional expected utilities. I need to calculate:

1. U1 = rP(EU|believe) + aP(atheism|believe)
2. U2 = rP(EU|~believe) + bP(atheism|~believe)

where r is the infinite positive reward that EU guarantees everybody, and a and b are the finite goods or bads of this life available if atheism is true. If U1 is greater than U2, then I should believe.

We'll need to use our favorite form of non-standard analysis for handling infinities. Observe that

3. P(believe|EU) > P(believe|~EU),
since a God would be moderately likely to want people to believe in him, and hence it is somewhat more likely that there would be theistic belief if God existed than if atheism were true (and I assumed that atheism and EU are the only options). But then by Bayes' Theorem it follows from (3) that:
4. P(EU|believe) > P(EU|~believe).
Let c = P(EU|believe) - P(EU|~believe). By (4), c is a positive number. Then:
5. U1 - U2 = rc + something finite.
Since r is infinite and positive, it follows that U1 - U2 > 0, and hence U1 > U2, so I should believe in EU.
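The arithmetic of the argument can be checked with a small numeric sketch. All the particular probabilities below are illustrative assumptions (the post fixes none of them), and a large finite R stands in for the infinite reward r; the point is only that the difference of the utilities is Rc plus something finite, so believing wins once R dominates.

```python
# Numeric sketch of the wager. Every number here is an illustrative
# assumption, not taken from the post; R is a large finite stand-in
# for the infinite reward r.
def posterior(p_e_given_h, p_e_given_not_h, prior):
    """Bayes' theorem: P(H | E)."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Premise (3): belief is somewhat likelier if the God of EU exists.
p_believe_given_eu = 0.7       # assumed
p_believe_given_atheism = 0.5  # assumed
prior_eu = 0.5                 # assumed prior

# Step to (4): P(EU|believe) > P(EU|~believe), by Bayes' theorem.
p_eu_given_believe = posterior(p_believe_given_eu, p_believe_given_atheism, prior_eu)
p_eu_given_not = posterior(1 - p_believe_given_eu, 1 - p_believe_given_atheism, prior_eu)
c = p_eu_given_believe - p_eu_given_not  # positive, as in (4)

# Utilities (1) and (2), with finite a, b for this-worldly goods.
R, a, b = 10**9, 10.0, 20.0
U1 = R * p_eu_given_believe + a * (1 - p_eu_given_believe)
U2 = R * p_eu_given_not + b * (1 - p_eu_given_not)
# U1 - U2 = R*c + something finite, so U1 > U2 once R is large enough.
```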

The argument works on non-egalitarian universalism, too, as long as we don't think God gives an infinitely greater reward to those who don't believe in him.

(However, universalism is false and one-boxing is mistaken.)

I left a similar comment over on your blog, Alex, but I'd be curious to know where this goes wrong.

1. U1 = rP(EU|believe) + aP(atheism|believe)
2. U2 = rP(EU|~believe) + bP(atheism|~believe)

Since we're assuming non-standard infinities, we know that subtracting one infinite quantity from another might equal some finite quantity. So, suppose rP(EU|believe) = oo + n and let rP(EU|~believe) = oo. Suppose further that aP(atheism|believe) = m and bP(atheism|~believe) = s. So long as (s - m) > n, it looks like you ought not to dispose yourself to believe.

In non-standard analysis, you can factor just as with ordinary numbers. So, rP(EU|believe)-rP(EU|~believe)=r(P(EU|believe)-P(EU|~believe)). If r is a positive infinity, and P(EU|believe)-P(EU|~believe) is a positive number, the product is positive.

Right, of course, I see that this is part of your reasoning. But surely rP(EU|believe)-rP(EU|~believe) might be finite on non-standard analyses. I take it we agree about that. But then it must be true that r(P(EU|believe)-P(EU|~believe)) might be finite. If the latter cannot be finite, then rP(EU|believe)-rP(EU|~believe) can't be finite. But that can't be right. Certainly on non-standard analyses I can subtract an infinite quantity from an infinite quantity and get a finite quantity. Really, this is the whole point of going non-standard.

Right, unless of course F = 0. In any event, that doesn't differ much from the standard account, except (presumably) that (non-standardly), [rP(EU|believe)-rP(EU|~believe)]

There are some interesting problems in the neighborhood of this one. As I mentioned, when the predictor is perfect (or the correlation is perfect) I'm pretty sure one-boxing is right. On the other hand, one-boxing is not so difficult to defend when the correlation is nearly perfect. Change the case slightly so that P in U1 is nearly perfect, say, .999, and P' in U2 is about .10. Replace EU with EU*, according to which your choice alone determines whether everyone enjoys eternal happiness or does not. Now it looks like you morally ought to dispose yourself to believe. It might be true that in some cases, rational decision is likely not to pay off for the rational agent. But it is hard to believe that a moral agent can act in ways that he knows will very likely cost others dearly when, at absolutely no moral cost, he can act in ways that will very likely benefit others greatly. But then do we conclude that moral decision is not rational?

It seems to me that if you go for one-boxing if the correlation is nearly perfect, and yet you are sure there is no backwards causation or anything like that, then you need to go for full-blown Bayesian decision theory.

No, I think two-boxing is right in many cases, but not obviously in all. In the moral case, which I am addressing here, it is hard to make sense of the causal view that rationality does not in general pay. Causal theorists just admit that they're going to lose big in Newcomb problems, but they don't see that as problematic for a theory of rationality. Rational behavior sometimes pulls apart from behavior that pays off, say they, contrary to Gauthier-types. I say, it might not be a problem for a theory of rationality, but it is a problem when you're facing Newcomb problems with moral implications. So, it looks like causal theorists are committed to saying that acting morally isn't rational. Maybe that's a consequence you can live with.

As far as I can see, the points here are largely red herrings, and pretty easy for the evidential theorist to handle. There is middle knowledge, there are no unpredictable one or two boxers, etc. So there's nothing to worry about here.

But, then, what about Moral Newcomb Problems (MNP)? A causal theorist in an MNP faces the decision to one-box or two-box where the payoff to everyone for one-boxing is a .999 chance of avoiding a very serious harm and the payoff to everyone for two-boxing is a .999 chance of receiving a very serious harm.

Now in an MNP a causal theorist must (absurdly) choose to two-box, giving everyone a .999 chance at being very seriously harmed, when he could have given everyone a .999 chance of avoiding that harm. The recommendation to two-box here is close to a reductio. A causal theorist must otherwise conclude that moral action is not rational. Very bad news either way.

...it isn't, I think, the case that had the causal theorist one-boxed, the better result would have taken place.

The fact is this.

One-box: we all have a 99.9% chance of avoiding a serious harm.

Two-Box: we all have a 99.9% chance of suffering a serious harm (&, say, you for certain get a minor benefit).

If you choose to two-box, you've done something seriously wrong. But causal decision theory tells you to two-box. Indeed, it will tell you that it's also the best moral choice in this situation, and not merely the best rational choice.

Alex,

Play the game 1000 times in W. If you choose to two-box each time in W, it will be true that we all suffer 999 times (make it a billion or trillion times if you want to ensure the right frequencies of suffering). If you choose to one-box in W, it will be true that we avoid suffering 999 times in W. Clearly you have acted in ways that increase not just the probability of suffering, but that increase the total amount of suffering in W. And you could have acted in ways that decreased the total suffering. This is a real problem for moral versions of Newcomb's problem. It is obvious that we should not act in ways that increase the chances of overall suffering.
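For what it's worth, the repeated-game frequencies appealed to here can be sketched with a quick Monte Carlo run. The mechanism below (harm occurs exactly when a two-box was predicted, with predictor accuracy .999) is one reading of the setup, not something the thread specifies:

```python
import random

# Monte Carlo sketch of the repeated game on the stochastic reading:
# the predictor matches your choice 99.9% of the time, and everyone
# is harmed exactly when a two-box was predicted. (This mechanism is
# an assumption of the sketch.)
random.seed(0)
TRIALS, ACCURACY = 100_000, 0.999

def harms(choice):
    """Count trials in which the severe harm occurs, given a fixed choice."""
    count = 0
    for _ in range(TRIALS):
        if random.random() < ACCURACY:
            predicted = choice
        else:
            predicted = "1b" if choice == "2b" else "2b"
        if predicted == "2b":  # harm occurs iff two-boxing was predicted
            count += 1
    return count

harms_if_always_2b = harms("2b")  # roughly 99.9% of trials
harms_if_always_1b = harms("1b")  # roughly 0.1% of trials
```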

I'm actually not sure why you're arguing to the contrary. I don't know a causal decision theorist who denies that, in Newcomb problems, causal decision theory is very costly (not just probably costly). Your two-boxing is unwelcome news because it tells you that our chances of suffering have just gone up severely. So don't two-box.

I don't see that you could have acted in ways that decreased total suffering. Had you one-boxed instead each time, the predictor would have been wrong each time, and you would have got a result that was even worse than you got by two-boxing. Do you agree about this counterfactual being true but dispute the consequences I draw from it, or do you dispute my claim that the counterfactual is true? (Except in the backwards causation, etc., cases, where of course you should one-box, and the causal theorist says so, too.)

The outcomes are not counterfactually dependent on what you do, but they are stochastically dependent. So, you are deliberating with the following information, letting H! be a severe harm to everyone:

P(H!| 2b) = .999

P(H!|1b) = .111

So, two outcomes follow from choosing to 2b. First, you place everyone at a severe risk of serious harm. That alone is a serious wrong, and that increase in the chance of harm is counterfactually dependent on what you do. Second, the total suffering in 999/1000 worlds in which you 2b is extremely bad; in contrast, it is bad in 111/1000 of worlds in which you 1b. So, the actual occurrence of severe harm depends stochastically on what you do.
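On the stochastic reading, the per-1000-worlds arithmetic behind those two points is just this (the probabilities are the ones stated above):

```python
# Expected frequency of the severe harm H! per 1000 worlds,
# using the conditional probabilities stated above.
p_harm_2b = 0.999  # P(H! | 2b)
p_harm_1b = 0.111  # P(H! | 1b)
worlds = 1000

harmed_2b = p_harm_2b * worlds      # 999 worlds with severe harm
harmed_1b = p_harm_1b * worlds      # 111 worlds with severe harm
difference = harmed_2b - harmed_1b  # 888 more bad worlds if you 2b
```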

It would be interesting if this sort of stochastic dependence were irrelevant to moral choice. In that case God could act in ways that wildly increase the chances of suffering in the world (when he might not have done so) and never have done anything wrong.

But that's the question at issue: should we be guided by epistemic stochastic dependence or by counterfactual dependence? So the moral case raises the stakes, but doesn't solve the problem.

The point illustrated is that stochastic dependence does matter in moral contexts. I'm not assuming it matters; I'm showing that it does. If you decide as a causal theorist would in moral Newcomb problems then, for the reasons given, you'll act immorally.

Blog: Prosblogion