Let Egalitarian Universalism (EU) be the doctrine that God exists and gives everyone infinite happiness, and that the quantity of this happiness is the same for everyone. The traditional formulation of Pascal’s Wager obviously does not work in the case of the God of EU. What is surprising, however, is that one can make Pascal’s Wager work even given the God of EU if one thinks that Bayesian decision theory, and hence one-boxing, is the right way to go in the case of Newcomb’s Paradox with a not quite perfect predictor (i.e., Nozick’s original formulation).
Here is how the trick works. Suppose that the only two epistemically available options are EU and atheism, and I need to decide whether or not to believe in God. Given Bayesian decision theory, I should choose whether to believe based on the conditional expected utilities. I need to calculate:
1. U_{1}=rP(EU|believe) + aP(atheism|believe)
2. U_{2}=rP(EU|~believe) + bP(atheism|~believe)
where r is the infinite positive reward that EU guarantees everybody, and a and b are the finite goods or bads of this life available if atheism is true. If U_{1} is greater than U_{2}, then I should believe.
We’ll need to use our favorite form of non-standard analysis for handling infinities. Observe that
3. P(believe|EU)>P(believe|~EU),
since a God would be moderately likely to want people to believe in him, and hence it is somewhat more likely that there would be theistic belief if God existed than if atheism were true (and I assumed that atheism and EU are the only options). But then by Bayes’ Theorem it follows from (3) that:
4. P(EU|believe)>P(EU|~believe).
Let c=P(EU|believe)-P(EU|~believe). By (4), c is a positive number. Then:
- U_{1}-U_{2}=rc + something finite.
Since r is infinite and positive, it follows that U_{1}-U_{2}>0, and hence U_{1}>U_{2}, so I should believe in EU.
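The factoring step can be checked in a toy model: below, non-standard numbers are represented as Laurent polynomials in an infinite unit w (a dictionary from powers of w to coefficients). This is only an illustrative stand-in for a genuine non-standard field, and the particular values of c and the finite part are assumptions for the sketch.

```python
# Toy model of non-standard numbers: {power_of_w: coefficient}, where w is
# a fixed positive infinity. The sign of a number is the sign of its
# leading (highest-power) coefficient.

def mul(x, y):
    out = {}
    for p1, v1 in x.items():
        for p2, v2 in y.items():
            out[p1 + p2] = out.get(p1 + p2, 0) + v1 * v2
    return {p: v for p, v in out.items() if v != 0}

def add(x, y):
    out = dict(x)
    for p, v in y.items():
        out[p] = out.get(p, 0) + v
    return {p: v for p, v in out.items() if v != 0}

def is_positive(x):
    return bool(x) and x[max(x)] > 0

r = {1: 1}        # r = w: the infinite positive reward
c = {0: 0.05}     # c = P(EU|believe) - P(EU|~believe), standard and positive (assumed value)
finite_part = {0: -10**9}  # the finite a/b terms, however large and negative

diff = add(mul(r, c), finite_part)  # U_1 - U_2 = rc + something finite
# diff has a positive leading term at power 1, so U_1 > U_2.
```

However large the finite part, the rc term at power 1 dominates, which is just the factoring point made above.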
The argument works for non-egalitarian universalism, too, as long as we don’t think God gives an infinitely greater reward to those who don’t believe in him.
(However, universalism is false and one-boxing is mistaken.)
In non-standard analysis, you can factor just as with ordinary numbers. So, rP(EU|believe)-rP(EU|~believe)=r(P(EU|believe)-P(EU|~believe)). If r is a positive infinity, and P(EU|believe)-P(EU|~believe) is a positive number, the product is positive.
There is, though, one minor gap in my argument. In (3), I need the left hand side to be bigger than the right hand side by more than just an infinitesimal. (I was assuming probabilities are standard numbers, in which case this follows from (3). But if probabilities are allowed to be non-standard numbers, then this needs to be stated as an explicit assumption.)
“In non-standard analysis, you can factor just as with ordinary numbers. So, rP(EU|believe)-rP(EU|~believe)=r(P(EU|believe)-P(EU|~believe)). If r is a positive infinity, and P(EU|believe)-P(EU|~believe) is a positive number, the product is positive.”
Right, of course, I see that this is part of your reasoning. But surely rP(EU|believe)-rP(EU|~believe) might be finite on non-standard analyses. I take it we agree about that. But then it must be true that r(P(EU|believe)-P(EU|~believe)) might be finite. If the latter cannot be finite, then rP(EU|believe)-rP(EU|~believe) can’t be finite. But that can’t be right. Certainly on non-standard analyses I can subtract an infinite quantity from an infinite quantity and get a finite quantity. Really, this is the whole point of going non-standard.
rP(EU|believe)-rP(EU|~believe) can only be finite if the difference between P(EU|believe) and P(EU|~believe) is infinitesimal. Proof: Suppose rP(EU|believe)-rP(EU|~believe)=F, where F is finite. Then, dividing both sides by r, we get: P(EU|believe)-P(EU|~believe)=F/r. But if F is finite and r is infinite, then F/r is infinitesimal. QED
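That division step can be mimicked in the same toy spirit, treating monomial non-standard numbers as (coefficient, power-of-w) pairs; F = 7 is an arbitrary assumed finite value.

```python
from fractions import Fraction

# Monomial non-standard numbers as (coefficient, power_of_w): power 1 is
# infinite, power 0 finite, power -1 infinitesimal.

def divide(x, y):
    (cx, px), (cy, py) = x, y
    return (Fraction(cx, cy), px - py)

F = (7, 0)               # a finite quantity (assumed value)
r = (1, 1)               # an infinite positive reward, r = w
quotient = divide(F, r)  # (7, -1), i.e. 7/w: an infinitesimal, as the proof says
```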
Right, unless of course F = 0. In any event, that doesn’t differ much from the standard account, except (presumably) that (non-standardly), [rP(EU|believe)-rP(EU|~believe)]
There are some interesting problems in the neighborhood of this one. As I mentioned, when the predictor is perfect (or the correlation is perfect) I’m pretty sure one-boxing is right. On the other hand, one-boxing is not so difficult to defend when the correlation is nearly perfect. Change the case slightly so that P in U1 is nearly perfect, say, .999, and P’ in U2 is about .10. Replace EU with EU* according to which your choice alone determines whether everyone enjoys eternal happiness or does not. Now it looks like you morally ought to dispose yourself to believe. It might be true that in some cases, rational decision is likely not to pay off for the rational agent. But it is hard to believe that a moral agent can act in ways that he knows will very likely cost others dearly when, at absolutely no moral cost, he can act in ways that will very likely benefit others greatly. But then do we conclude that moral decision is not rational?
It seems to me that if you go for one-boxing if the correlation is nearly perfect, and yet you are sure there is no backwards causation or anything like that, then you need to go for full-blown Bayesian decision theory, and then you get what I said, even without a nearly perfect predictor.
“It seems to me that if you go for one-boxing if the correlation is nearly perfect, and yet you are sure there is no backwards causation or anything like that, then you need to go for full-blown Bayesian decision theory…”
No, I think two-boxing is right in many cases, but not obviously in all. In the moral case, which I am addressing here, it is hard to make sense of the causal view that rationality does not in general pay. Causal theorists just admit that they’re going to lose big in Newcomb problems, but they don’t see that as problematic for a theory of rationality. Rational behavior sometimes pulls apart from behavior that pays off, say they, contrary to Gauthier-types. I say, it might not be a problem for a theory of rationality, but it is a problem when you’re facing Newcomb problems with moral implications. So, it looks like causal theorists are committed to saying that acting morally isn’t rational. Maybe that’s a consequence you can live with.
“Rational behavior sometimes pulls apart from behavior that pays off, say they, contrary to Gauthier-types.”
Well, that much is trivially and uncontroversially true, say, in cases where someone has rigged the situation unbeknownst to the agent. Moreover, this formulation begs the question in that the causal theorist has a different account of “pays off” than the evidential decision theorist.
Now, maybe you mean this: in the long run, causal theorists do more poorly in Newcomb-type situations. Could be. But one can rig situations that work against anybody.
Suppose that I have the following invariable habit. Whenever anybody claims to have put me in a Newcomb situation, I flip an indeterministic fair coin in my brain to make my decision. Then (unless middle knowledge is possible) nobody can put me in a Newcomb situation.
So, if it’s possible to flip an indeterministic fair coin in one’s brain, there is no such thing as a universal close-to-perfect predictor of behavior. Hence, close-to-perfect Newcomb-type predictors can only exist for certain kinds of people, namely those whose behavior is predictable.
So, some (possible) people are Newcomb-unpredictable. Some (possible) people are Newcomb-predictable. The invariable one-boxer and the invariable two-boxer are Newcomb-predictable. So here is the sense in which the one-boxer does better. If there is a Newcomber who grabs Newcomb-predictable people at random and puts them in a Newcomb situation, the one-boxer will do better dealing with him than the two-boxer.
So, if I believe there are Newcombers, I may have reason to become a one-boxer. But consider two other kinds of critters in the population: one-boxer-rewarders (1brs) and two-boxer-rewarders (2brs). Whenever a 1br comes across a one-boxer, she gives the one-boxer a million dollars, and when she comes across a two-boxer, she gives the two-boxer a whack with a stick. The 2br does the same but with “one-boxer” and “two-boxer” switched around. Now, assuming I have to be Newcomb-predictable (why assume that?), whether I should (in the prudential sense) be a one-boxer or a two-boxer depends on my beliefs as to the relative frequencies of Newcombers, 1brs and 2brs in the population. And that is an empirical question. And whether the causal theorist does worse in the long run depends on the answer to this question. If there are a lot more 2brs in the population than Newcombers and 1brs, then it’s better to be a causal theorist.
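The point that the prudentially better disposition depends on the population frequencies can be made concrete with a toy expected-payoff calculation. The payoff magnitudes below (the $1,000,000 rewards, a -$100 whack, $1,000 for the predicted two-boxer) are illustrative assumptions, not taken from the discussion.

```python
# Expected payoff of a fixed disposition, given the frequencies of
# Newcombers, one-boxer-rewarders (1br), and two-boxer-rewarders (2br)
# one expects to meet. All payoff values are assumed for illustration.

def expected_payoff(disposition, p_newcomber, p_1br, p_2br):
    # Newcomber (near-perfect predictor): one-boxers get the $1,000,000
    # opaque box; two-boxers get only the $1,000 transparent box.
    from_newcomber = 1_000_000 if disposition == "one-box" else 1_000
    # 1br: $1,000,000 to one-boxers, a whack with a stick to two-boxers.
    from_1br = 1_000_000 if disposition == "one-box" else -100
    # 2br: the mirror image.
    from_2br = -100 if disposition == "one-box" else 1_000_000
    return p_newcomber * from_newcomber + p_1br * from_1br + p_2br * from_2br

# Mostly 2brs: two-boxing is prudentially better.
mostly_2br = (0.01, 0.01, 0.98)
# Mostly Newcombers: one-boxing is prudentially better.
mostly_newcomber = (0.98, 0.01, 0.01)
```

Which disposition wins is then exactly the empirical question about relative frequencies raised above.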
Now, you might say: The difference is that the Newcomber rewards behavior instances but the 1brs and 2brs reward behavior types. Rationality should be able to handle rewards of behavior instances better. So it’s only the Newcombers, not the 1brs and 2brs, that count for evaluating which behavior type is better. But that’s mistaken, because the Newcomber’s actions depend not on the behavior instance but the behavior type. The Newcomber has to first check whether the victim’s behavior type is Newcomb-predictable.
As far as I can see, the points here are largely red herrings, and pretty easy for the evidential theorist to handle. There is middle knowledge, there are no unpredictable one or two boxers, etc. So there’s nothing to worry about here.
But, then, what about Moral Newcomb Problems (MNP)? A causal theorist in an MNP faces the decision to one-box or two-box where the payoff to everyone for one-boxing is a .999 chance of avoiding a very serious harm and the payoff to everyone for two-boxing is a .999 chance of receiving a very serious harm.
Now in an MNP a causal theorist must (absurdly) choose to two-box, giving everyone a .999 chance at being very seriously harmed, when he could have given everyone a .999 chance of avoiding that harm. The recommendation to two-box here is close to a reductio. A causal theorist must otherwise conclude that moral action is not rational. Very bad news either way.
The use of the word “giving” in “giving everyone a .999 chance” implies a causality that is not present in the story. In fact, I think we don’t even have counterfactual dependence. It isn’t, I think, the case that had the causal theorist one-boxed, the better result would have taken place. That would be too much backtracking. So the two-boxer can two-box and say: “Had I one-boxed, something even worse would have happened. Of course, had I *been* a one-boxer, things would have been better in this situation. But things would have been worse in other situations, such as the situation of being in a world where a tyrant will destroy the human race if he can tell that you’d one-box. There is no set of dispositions that dominate in all circumstances.”
…it isn’t, I think, the case that had the causal theorist one-boxed, the better result would have taken place.
The fact is this.
One-box: we all have a 99.9% chance of avoiding a serious harm.
Two-Box: we all have a 99.9% chance of suffering a serious harm (&, say, you for certain get a minor benefit).
If you choose to two-box, you’ve done something seriously wrong. But causal decision theory tells you to two-box. Indeed, it will tell you that it’s also the best moral choice in this situation, and not merely the best rational choice.
But suppose I choose two-box. Then the following is still true: Had I chosen one-box, the serious harm would still have been 99.9% likely (but I wouldn’t have got the minor benefit), so I didn’t do anything wrong, since had I acted otherwise, things would have been worse for everyone. Why? Because the predictor is not counterfactually sensitive to my choice. Or so I assume.
Now if the predictor is counterfactually sensitive to my choice, then that’s a whole different game, and there one-box is the right answer. But such counterfactual sensitivity requires middle knowledge, prophecy or backwards causation.
Alex,
Play the game 1000 times in W. If you choose to two-box each time in W, it will be true that we all suffer 999 times (make it a billion or trillion times if you want to ensure the right frequencies of suffering). If you choose one-box in W, it will be true that we avoid suffering 999 times in W. Clearly you have acted in ways that increase not just the probability of suffering, but that increase the total amount of suffering in W. And you could have acted in ways that decreased the total suffering. This is a real problem for moral versions of Newcomb’s problem. It is obvious that we should not act in ways that increase the chances of overall suffering.
I’m actually not sure why you’re arguing to the contrary. I don’t know a causal decision theorist who denies that, in Newcomb problems, causal decision theory is very costly (not just probably costly). Your two-boxing is unwelcome news because it tells you that our chances of suffering have just gone up severely. So don’t two-box.
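The repeated-game claim above can be sketched as a simulation, under the assumption (following the case as described) that a predictor is right 99.9% of the time and the severe harm lands exactly when two-boxing was predicted:

```python
import random

# Repeated moral Newcomb: the harm H! occurs whenever the predictor
# foresaw two-boxing; the predictor is right with probability `accuracy`.
# The mechanism and numbers follow the case described above.

def total_harms(choice, trials, accuracy=0.999, seed=0):
    rng = random.Random(seed)
    other = "one-box" if choice == "two-box" else "two-box"
    harms = 0
    for _ in range(trials):
        predicted = choice if rng.random() < accuracy else other
        if predicted == "two-box":  # harm tracks the prediction
            harms += 1
    return harms

harms_if_always_2b = total_harms("two-box", 100_000)
harms_if_always_1b = total_harms("one-box", 100_000)
# Invariable two-boxing yields harm in roughly 99.9% of plays;
# invariable one-boxing in roughly 0.1%.
```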
I don’t see that you could have acted in ways that decreased total suffering. Had you one-boxed instead each time, the predictor would have been wrong each time, and you would have got a result that was even worse than you got by two-boxing. Do you agree about this counterfactual being true but dispute the consequences I draw from it, or do you dispute my claim that the counterfactual is true? (Except in the backwards causation, etc., cases, where of course you should one-box, and the causal theorist says so, too.)
The outcomes are not counterfactually dependent on what you do, but they are stochastically dependent. So, you are deliberating with the following information, letting H! be a severe harm to everyone:
P(H!| 2b) = .999
P(H!|1b) = .111
So, two outcomes from choosing to 2b. First, you place everyone at a severe risk of serious harm. That alone is a serious wrong, and that increase in the chance of harm is counterfactually dependent on what you do. Second, the total suffering in 999/1000 worlds in which you 2b is extremely bad; in contrast, it is bad in 111/1000 of worlds in which you 1b. So, the actual occurrence of severe harm depends stochastically on what you do.
It would be interesting if this sort of stochastic dependence were irrelevant to moral choice. In that case God could act in ways that wildly increase the chances of suffering in the world (when he might not have done so) and never have done anything wrong.
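The 999-versus-111 counting can be written out as simple expected-frequency arithmetic over the 1000-world framing used above (nothing beyond the stated probabilities is assumed):

```python
# Expected number of worlds, out of 1000, in which the severe harm H!
# occurs, given the stochastic dependence P(H!|2b)=.999 and P(H!|1b)=.111.

p_harm = {"2b": 0.999, "1b": 0.111}

def expected_bad_worlds(choice, worlds=1000):
    return round(p_harm[choice] * worlds)

# 999 bad worlds if you two-box, 111 if you one-box.
```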
But that’s the question at issue: should we be guided by epistemic stochastic dependence or counterfactual dependence? So the moral case raises the stakes, but doesn’t solve the problem.
“But that’s the question at issue: should we be guided by epistemic stochastic dependence or counterfactual dependence? So the moral case raises the stakes, but doesn’t solve the problem.”
The point illustrated is that stochastic dependence does matter in moral contexts. I’m not assuming it matters, I’m showing that it does. If you decide as a causal theorist would in moral newcombs then, for the reasons given, you’ll act immorally.
I missed something you said. “That alone is a serious wrong, and that increase in the chance of harm is counterfactually dependent on what you do.”
I don’t see why the increase in the chance of harm is counterfactually dependent on what you do. I guess I’ll need the full case spelled out more clearly: what’s in the boxes, etc.
I left a similar comment over on your blog, Alex, but I’d be curious to know where this goes wrong.
1. U1=rP(EU|believe) + aP(atheism|believe)
2. U2=rP(EU|~believe) + bP(atheism|~believe)
Since we’re assuming non-standard infinities, we know that subtracting one infinite quantity from another might equal some finite quantity. So, suppose rP(EU|believe) = oo + n and let rP(EU|~believe) = oo. Suppose further that aP(atheism|believe) = m and bP(atheism|~believe) = s. So long as (s – m) > n, it looks like you ought not to dispose yourself to believe.
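For what it’s worth, this arithmetic does come out as described in a toy Laurent-polynomial model of non-standard numbers ({power of w: coefficient}, with w playing the role of oo); the particular values of n, m, and s are arbitrary assumptions satisfying (s - m) > n.

```python
# Toy non-standard numbers as {power_of_w: coefficient}; a number is
# positive iff its leading (highest-power) coefficient is positive.

def add(x, y):
    out = dict(x)
    for p, v in y.items():
        out[p] = out.get(p, 0) + v
    return {p: v for p, v in out.items() if v != 0}

def sub(x, y):
    return add(x, {p: -v for p, v in y.items()})

def is_positive(x):
    return bool(x) and x[max(x)] > 0

n, m, s = 5, 2, 10              # assumed values with (s - m) = 8 > n = 5
U1 = add({1: 1, 0: n}, {0: m})  # rP(EU|believe) = oo + n, aP(atheism|believe) = m
U2 = add({1: 1}, {0: s})        # rP(EU|~believe) = oo,    bP(atheism|~believe) = s
# U2 - U1 = {0: s - m - n}: finite and positive, so here U2 > U1.
```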