Pascalian Explosives
September 22, 2010 — 8:31

Author: Michael Almeida  Category: Uncategorized  Comments: 28

In at least one version of the Wager, Pascal argues that a (prudentially) rational person will reason as follows.
V(Believing in God) = p(oo+) + (1-p)(F-) = oo+
V(Not Believing) = p(oo-) + (1-p)(F+) = oo-
So, it seems obvious that the value of believing in God exceeds the value of not believing. Whatever the evidential status of the proposition that God exists (assuming it does not express an impossibility, I suppose) it is prudent to believe.
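The structure can be mimicked numerically. Here is a minimal sketch (my own illustration, not Pascal's notation), using `float('inf')` for the infinite payoffs and invented finite stand-ins for F+ and F−:

```python
# A toy model of the Wager's expected values (illustrative only).
# oo+ / oo- are modeled as +/- infinity; the finite payoffs F- and F+
# are arbitrary invented numbers.
INF = float('inf')

def ev_believe(p, finite_cost=-10.0):
    # p(oo+) + (1 - p)(F-)
    return p * INF + (1 - p) * finite_cost

def ev_disbelieve(p, finite_gain=10.0):
    # p(oo-) + (1 - p)(F+)
    return p * (-INF) + (1 - p) * finite_gain

# However small p is, as long as it is non-zero the infinite term dominates:
for p in (0.5, 0.01, 1e-9):
    print(p, ev_believe(p), ev_disbelieve(p))  # always inf and -inf
```

The parody the post develops exploits precisely this insensitivity to p: any non-zero epistemic probability yields the same verdict.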
I don’t want to rehearse the familiar objections to Pascal’s Wager but I do want to worry about some parodic reasoning. So, let’s agree that believing in God has an infinite positive payoff and not believing in God has an infinite negative payoff. Should I believe that God exists? I think the answer is no, and I think you think the answer is no.
What generates the problem I’m concerned with are imaginable worlds which have a non-zero (epistemic) probability of obtaining and which have a very high disvalue associated with them or a very high value associated with them. Because there are such imaginable worlds, it is not difficult to develop a Pascalian-like situation in which one outcome is the actualization of this imagined world. Notice that the worlds might be impossible, as is one world in Pascal’s Wager: viz., the world in which God exists is impossible or the world in which God does not exist is impossible. But each gets assigned some epistemic probability of being actual. To see the problem, consider this case. There is some non-zero probability that some student has wired my car with an explosive. There is some non-zero probability that the explosive will be detonated when I open the car door. If there is such an explosive, then several lives will be lost, including mine. Should I check under the car? Let p be the probability that there is an explosive and (1 – p) the probability that there isn’t an explosive under the car.
V(Check) = p(F1+) + (1-p)(F2-) = F+
V(Don’t Check) = p(F3-) + (1-p)(F4+) = F-
It seems clear that the value of checking is positive. There is some small cost in having to look under the car, but there’s a great benefit in checking and finding the explosive before opening the door. It seems clear that the value of not-checking is negative. If I don’t check and there is an explosive, then the worst outcome occurs, including the loss of lives. If I don’t check and there is no explosive, then there is a gain in convenience. Yet it is clearly nonsense to conclude that I should be checking for explosives every time I go to my car. Once you see the problem, you’ll notice all sorts of partitions that will include some extremely bad outcome with a non-zero probability of occurring. But none of these outcomes make it rational to act in hyper-cautious ways.
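For comparison, the same matrix can be computed with everything finite. Every payoff number below is invented; the sketch just shows that with finite stakes the sign of V(Check) is sensitive to how small p is, which is where the disagreement in the comments below will turn:

```python
# Finite-payoff sketch of the Check / Don't Check matrix
# (all payoff numbers are invented for illustration).
def ev(p, payoff_if_bomb, payoff_if_no_bomb):
    """Expected value given probability p that a bomb is present."""
    return p * payoff_if_bomb + (1 - p) * payoff_if_no_bomb

# With a non-trivial p, checking dominates not checking:
print(ev(1e-4, 1_000_000, -1))    # positive: worth checking
print(ev(1e-4, -100_000_000, 1))  # negative: not checking is bad
# With a genuinely tiny p, the small fixed cost of checking wins out:
print(ev(1e-8, 1_000_000, -1))    # negative: checking no longer pays
```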
My conclusion: if you don’t already believe that God exists or think there’s a decent chance that God exists, then Pascal’s Wager gives you a good reason to believe God exists if and only if it gives you a good reason to check under your car for explosives.

• But all other things being equal, doesn’t it give good reason, at least in the sense that it shows you aren’t being irrational if you are hyper-cautious in this way (again, all other things being equal)? It’s just that not all other things are equal.
In other words, I think the answer to your irrationality concern is that you are changing the matrix mid-reasoning. After all, if we ask why we think it irrational to be hyper-cautious in this way, by the very structure of the decision matrix we can’t formulate an answer to the question simply in terms of what was available to the matrix to begin with; all of that shows that there is at least some positive value to checking. We have to appeal to other things to show that it is an irrational action. When we take these into account, though, we are essentially saying that we should actually have a different decision matrix.
And the point is relevant to the Wager, because Pascal’s Wager is really and truly all-other-things-being-equal; it’s explicitly set up that way. To make an analogous irrationality charge, you have to appeal to information that Pascal’s imaginary interlocutor doesn’t allow that we can know. And, indeed, one of Pascal’s aims, if I read him correctly, is to encourage his interlocutor, if nothing else, to recognize the value of looking for this sort of outside information. You may gamble on a poker hand even though you don’t know everything you need in order to be sure; but if the stakes are high the rational thing to do is to look for any tell you can find, i.e., any additional information that can shed light on whether your gamble is the rational thing not merely given your initial information but all information available to a diligent inquirer. And if that’s right, Pascal already has the Pascalian explosives problem in hand.

September 22, 2010 — 19:20
• Dustin Crummett

I’m not sure I follow. Suppose that if I don’t get blown up I will live fifty more years–that’s 1,576,800,000 seconds. Say checking will take 20 seconds–that’s 1/78,840,000 of my life. So, if I find a bomb, I receive a reward 78,840,000 times greater than the cost. But of course, the chance that my car will have a bomb under it any given time that I check will be much, much less than 1/78,840,000. And isn’t that why–if I am behaving rationally (and value all moments of my life equally)–I don’t check? But of course the same reasoning can’t be applied if you are dealing with infinite goods and infinite losses.
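Dustin's figures check out; the same arithmetic also gives the break-even probability he is implicitly appealing to:

```python
# Verifying the arithmetic in the comment above.
seconds_in_50_years = 50 * 365 * 24 * 60 * 60
print(seconds_in_50_years)        # 1576800000

check_cost_seconds = 20
reward_to_cost_ratio = seconds_in_50_years // check_cost_seconds
print(reward_to_cost_ratio)       # 78840000

# If every second of life is valued equally, checking breaks even
# only when the bomb probability reaches about 1/78,840,000:
break_even_p = check_cost_seconds / seconds_in_50_years
print(break_even_p)
```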

September 22, 2010 — 20:18
• Mike Almeida

but if the stakes are high the rational thing to do is to look for any tell you can find, i.e., any additional information that can shed light on whether your gamble is the rational thing not merely given your initial information but all information available to a diligent inquirer. And if that’s right, Pascal already has the Pascalian explosives problem in hand.
Brandon,
I’m not sure I follow you. In the explosives case, the stakes are really high, just as in the wager. What do I know in the explosives case that makes it rational not to check? I have a small probability for an extremely bad outcome, if I don’t check, and the expected value of not checking is seriously negative. The alternative is to check, whose expected value is positive. That’s enough to make it irrational not to check, unless I’m missing something.

September 23, 2010 — 9:01
• Mike Almeida

Suppose that if I don’t get blown up I will live fifty more years–that’s 1,576,800,000 seconds. Say checking will take 20 seconds–that’s 1/78,840,000 of my life. So, if I find a bomb, I receive a reward 78,840,000 times greater than the cost. But of course, the chance that my car will have a bomb under it any given time that I check will be much, much less than 1/78,840,000.
I don’t think the value of your life is the sum of the number of seconds you have left. Taking your life with one year left might be much worse than taking your life with two years left. Suppose your life is on balance bad. If I take your life with one year left, your life overall is valued negatively. If I take your life with two years left, it might be much less disvaluable. So it’s not the number of seconds you have left that matters.

September 23, 2010 — 9:17
• Edward T. Babinski

Brandon speaks of “all other things being equal” (are they?) and “hypercaution” (OCD? As opposed to what, hyperoblivion? Evolutionarily speaking, nature seems to have worked it out such that most of us fall somewhere in between).
Dustin speaks of the meaning of life in terms of “seconds” left to live (time does not necessarily equate with meaning), and of “infinite goods” and “infinite evils” (as if either of us knows what THEY are, or as if we could all agree between us as to what exactly was “good” or “evil,” which have a wide range of meanings indeed, i.e., some say it’s “evil” to question whatever religious creed THEY happen to believe in, or that rock music is “evil”–it seems less confusing to speak in terms of what causes suffering and what causes joy, which are more universal concepts/categories, but only relatively speaking of course).
So many questions.
As for the idea of wagering on God, whatever does THAT mean? That God loves gamblers? God loves hypocrites who believe on him based on fear of the unknown? Can anyone truly believe in things that they simply don’t believe in? Which God? Which religion?
And if by “believe in God” one includes the necessity of a churchgoing life, is that sort of life so much more individually blessed than any other? And what of all the money and time spent in churchgoing that might have enriched one’s life in other ways had that person never gone to church?

September 23, 2010 — 12:49
• Mike Almeida

As for the idea of wagering on God, whatever does THAT mean? That God loves gamblers?
Actually the question is whether it might be rational to cultivate the belief that God exists, even if you think the evidence for God’s existence is not so strong. There are at least two kinds of reasons one might advance in favor of believing a proposition p: reasons from practical rationality and reasons from theoretical rationality (insofar as these can be separated these days).

September 23, 2010 — 12:56
• Dustin Crummett

I don’t think the value of your life is the sum of the number of seconds you have left. Taking your life with one year left might be much worse than taking your life with two years left. Suppose your life is on balance bad. If I take your life with one year left, your life overall is valued negatively. If I take your life with two years left, it might be much less disvaluable. So it’s not the number of seconds you have left that matters.
If my life is bad for me, though, getting blown up wouldn’t be bad for me.
I don’t actually think counting seconds is a very good way to determine how valuable your life is going to be, but it seems like the best way to quantify this sort of thing for the purposes of the argument. In fact, I am young and foolish and accordingly value close things more than far away things and time in youth more than time in old age, so in terms of how I actually make decisions I probably value the twenty seconds now disproportionately highly compared to twenty seconds I might live when I am, you know, sixty or whatever. But then I have even less reason to check.

September 23, 2010 — 13:24
• Dustin Crummett

(time does not necessarily equate with meaning)
I didn’t think I said it did…
and of “infinite goods” and “infinite evils” (as if either of us know what THEY are, or as if we could all be able to agree between us as to what exactly was “good” or “evil,”
Well, Pascal thinks the alternatives are eternal bliss or eternal torture. It seems like we can probably agree being blissful forever is better than being tortured forever?
Can anyone truly believe in things that they simply don’t believe in?
Pascal addresses this much. I’ll let you figure out what he said.
And if by “believe in God” one includes the necessity of a churchgoing life, is that sort of life so much more individually blessed than any other? And what of all the money and time spent in churchgoing that might have enriched ones life in other ways had that person never gone to church?
Pascal actually seems to grant that a churchgoing earthly life will be, in itself, less valuable to you than a secular earthly life. If we think otherwise, we have a pragmatic reason to be religious without introducing talk of the afterlife.

September 23, 2010 — 13:34
• Mike Almeida

I don’t actually think counting seconds is a very good way to determine how valuable your life is going to be, but it seems like the best way to quantify this sort of thing for the purposes of the argument.
No, it’s just misleading. You seem to want to argue that it is not worth checking under your car for an explosive despite the fact that (i) there might be one and (ii) it would be extremely bad if it were there. Your argument is that the probability is low and the cost is too high. But that can’t be right. People go for physicals when it takes a few hours to get through it and the probability of anything being life-threatening is close to zero. But even that doesn’t matter, for we can always make the outcomes worse. Consider a world in which there is a thermonuclear bomb that is detonated by turning the key in your car door. In a decent sized city, it will kill a million or so. It’s a horrible outcome with a non-zero probability of occurring. Now suppose it is true in this world that all you need to do to defuse the bomb is recite the first four letters of the alphabet backwards. Not difficult or time-consuming, so no genuine cost. It is still not rational to do it.

September 23, 2010 — 13:37
• Dustin Crummett

People go for physicals when it takes a few hours to get through it and the probability of anything being life-threatening is close to zero.
It’s close to zero, but it’s still much greater than the probability of my car having a bomb under it any given time that I check. Probably millions or billions of times greater.
But even that doesn’t matter, for we can always make the outcomes worse.
But that makes the probabilities even lower. Surely the probability of there being a nuclear bomb attached to my car lock that can be defused by reciting the first four letters of the alphabet on any given occasion when I get into my car is too low to even meaningfully calculate. I mean, one in fifty quintillion or something.

September 23, 2010 — 13:47
• Mike Almeida

But that makes the probabilities even lower
It is not obvious how low it makes the probabilities. We’re talking about epistemic probability in the Wager. And low probabilities don’t matter if the outcome is sufficiently bad.
Surely the probability of there being a nuclear bomb attached to my car lock that can be defused by reciting the first four letters of the alphabet on any given occasion when I get into my car is too low to even meaningfully calculate. I mean, one in fifty quintillion or something
I have no idea what you mean by ‘calculate’. These are epistemic probabilities, so there is no calculation you might make that would interestingly affect my probability function. Sorry to be tedious in repeating the formula, but consider a world in which the outcome is about as bad as it can get. It does not matter much how low the probability is that such a world obtains. If that does not work for you, disjoin that world with as many disjoint and horrible worlds as necessary to get the probability high enough for you. Certainly there are infinitely many of them, so no worries in finding enough. Keep in mind that they need only be epistemically possible. It could well turn out that many of them are impossible (as in fact happens in Pascal’s Wager).

September 23, 2010 — 14:00
• Dustin Crummett

It is not obvious how low it makes the probabilities. We’re talking about epistemic probability in the Wager. And low probabilities don’t matter if the outcome is sufficiently bad.
I think I agree with all that.
Certainly there are infinitely many of them, so no worries in finding enough.
Admittedly, I dropped calculus after about two weeks to take guitar, but if there are infinitely many of them, each has an epistemic probability of zero, doesn’t it? I don’t quite understand how you assign–is ‘assign’ a better word?–probabilities in a case like that, but I do know I think the probability of the actual world being a world from that set is vanishingly small.
What justification do I have for thinking this? Well, it seems like I have, at the very least, some sort of inductive grounds–so far, things have turned out better than if people actually had been doing absurd things all the time on the off chance that the absurdity would prevent some absurd catastrophe. And it seems like the question of how I know that will continue to hold is just the problem of induction.

September 23, 2010 — 14:35
• Mike Almeida

Admittedly, I dropped calculus after about two weeks to take guitar, but if there are infinitely many of them, each has an epistemic probability of zero, doesn’t it?
That’s hard to know. Suppose each has an infinitesimal probability of obtaining; then the probability that one of them obtains is not especially low (it depends on the infinitesimals, which needn’t sum to some very low or very high probability). But suppose that some have zero probability, some have an extremely low probability, and some have infinitesimal probability. The disjunction of them again might not be very low.
but I do know I think the probability of the actual world being a world from that set is vanishingly small.
Small, but not vanishingly small. But that’s enough to generate the problem.

September 23, 2010 — 14:46
• Dustin Crummett

Small, but not vanishingly small. But that’s enough to generate the problem.
Well, I don’t know, I will think about it. The idea that I shouldn’t assign a vanishingly small probability to one of the absurd catastrophe worlds being actual strikes me as so weird, though, that in truth I probably wouldn’t accept any philosophical argument for it–I would just figure I had some reason I couldn’t articulate.
That said, I do think Pascal’s wager isn’t a very good argument. I just think it’s bad for other reasons.

September 23, 2010 — 15:12
• Actually, it’s not a bad idea to look under a car, to see if any geckos are hiding just behind or in front of the wheels, so as to avoid running them over as one drives away. 🙂
Geckos apart (I really like geckos!), if your epistemic probability of there being a bomb under the car that would kill n people is p, and if the average disutility of a death is D, the disutility of checking is C, and the probability of finding a bomb if one checks is q, and all other things are equal, then I see nothing absurd about the idea that you should check if pqnD>C. In fact, it seems obvious that in such cases you should check. (I would be inclined to say that the fact that you don’t take it to be rational to check is some reason to think that your credences and utility assignments are not such that pqnD>C.)
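The pqnD > C rule is easy to state as code. All the numbers fed in below are hypothetical, chosen only to show the inequality flipping as the credence p changes:

```python
# The comment's decision rule: check iff p*q*n*D > C.
# All numeric inputs below are hypothetical.
def should_check(p, q, n, D, C):
    """p: prob. a bomb is there; q: prob. of finding it if you check;
    n: lives at stake; D: disutility per death; C: disutility of checking."""
    return p * q * n * D > C

# An everyday credence fails the test...
print(should_check(p=1e-9, q=0.5, n=5, D=1_000_000, C=1.0))  # False
# ...while a larger credence (or a worse outcome) passes it.
print(should_check(p=1e-4, q=0.5, n=5, D=1_000_000, C=1.0))  # True
```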
However, one thing to consider is that if we’re looking at worlds that are rather different from ours, then we need to take into account the possibility that other things might be rather different in those worlds. Thus, corresponding to worlds where there are nuclear bombs planted under professors’ cars, there are worlds where an irate student would detonate a nuclear bomb if you arrived a second later (if you arrive earlier, you’ll be able to convince him not to). I don’t know how to weigh the two probabilities against each other–my credences for these distant possibilities just aren’t determinate enough. So the rational thing to do is to dismiss them both.

September 23, 2010 — 15:15
• Or take the world where if you had an extra second to think about things, you’d come up with an argument that would convince everybody to “just get along”, and world peace will ensue. By spending that second looking under the car, you’ve robbed the world of eternal peace. Is this likely? No. But neither is the nuclear bomb.
This is basically the other-gods objection in the Pascal case. But in the Pascal case it can be handled, because the theistic proposal has a simplicity and explanatory power that sets it apart.

September 23, 2010 — 15:17
• Mike,
What do I know in the explosives case that makes it rational not to check? I have a small probability for an extremely bad outcome, if I don’t check, and the expected value of not checking is seriously negative. The alternative is to check, whose expected value is positive. That’s enough to make it irrational not to check, unless I’m missing something.
Exactly right — given the initial information and alternatives and nothing other than that initial information and those alternatives. But Pascal makes the (correct) assumption that in real life our decision-relevant information is not static (and thus that a reasonable decision will take this into account) and he makes the (also correct) assumption that in the sort of case he is considering practical reasons supplement theoretical reasons rather than replace them or trump them. The Wager only starts the way it does because Pascal is imagining an interlocutor who holds that there are no relevant theoretical reasons tending one way or another. Pascal himself, of course, does not agree with the interlocutor and, indeed, one of the things he explicitly uses the Wager arguments for is to argue that the interlocutor should continue looking for relevant theoretical reasons.
In order to make the explosives case and the Wager parallel we have to assume that there really is nothing relevant to the latter decision but the bare assessments of probability and utility. Pascal himself does not make that assumption; but if we do make that assumption, then the parallel favors the Wager: in the explosives cases, it is not, in fact, irrational on that assumption to be hyper-cautious. If any of us were actually in a position remotely like that found in the set-up, it would be exactly the reasonable thing to do. It’s just that none of us are: none of us are in the position of having no information beyond what shows up in the decision matrices in the post. But Pascal really knows people, or at least really thinks he knows people, who think that we are in something like that position in the theistic case. So I don’t see that the parodic reasoning actually parodies anything.
Ed,
I’m not sure what you mean; ‘hypercautious’ was taken from the post, not a term I came up with, and the meaning of the term is not difficult to figure out in that original context. Likewise I explicitly denied that all other things were equal.

September 24, 2010 — 1:32
• Mike Almeida

It’s just that none of us are: none of us are in the position of having no information beyond what shows up in the decision matrices in the post.
Ok, but what is that information you’re alluding to? I agree that Pascal brackets theoretical reasons for God’s existence. But in the case I describe, I have not bracketed any information. Or, I’m not aware of having done so.

September 24, 2010 — 14:47
• Mike Almeida

if your epistemic probability of there being a bomb under the car that would kill n people is p, and if the average disutility of a death is D, the disutility of checking is C, and the probability of finding a bomb if one checks is q, and all other things are equal, then I see nothing absurd about the idea that you should check if pqnD>C.
I think this is mistaken. The problem I’m describing is akin to the problem described in the St. Petersburg paradox. In the paradox, as you of course know, your expected utility for playing is .5($1) + .25($2) + .125($4) + .0625($8) + … + (1/2^n)($2^(n−1)) + … = $oo+. The expected utility is greater than any finite cost C, so I should be willing to pay any finite amount to play. But of course I wouldn’t pay so much as $20 to play. Neither would you. Now suppose all you had to do to win was cultivate the belief that God exists. And suppose the infinite payoff was communion with God. For the same reasons, I again would find it irrational to cultivate the belief, though the payoff is infinite. It’s not worth the $20 and it is not worth cultivating the belief. Similarly, I would not check under my car for an explosive, no matter how many very bad worlds W I can imagine which have some non-zero probability of being actualized should I not look under the car.
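The divergence in the St. Petersburg series is easy to exhibit: each term (1/2^n)($2^(n−1)) contributes exactly fifty cents, so the partial sums grow without bound. A quick sketch:

```python
# Partial expected value of the St. Petersburg game after n rounds:
# sum of (1/2**k) * 2**(k-1) dollars for k = 1..n; each term is $0.50.
def st_petersburg_ev(rounds):
    return sum((0.5 ** k) * 2 ** (k - 1) for k in range(1, rounds + 1))

for n in (10, 100, 1000):
    print(n, st_petersburg_ev(n))  # 5.0, 50.0, 500.0 -- no finite limit
```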

September 25, 2010 — 13:54
• Hi, Mike,
There are lots of things that we know in practice that aren’t represented. For instance, we all know that ‘check’ and ‘don’t check’ actually are several different categories; e.g., I know very well that if I were to check and it weren’t thoroughly obvious that the car were rigged to blow, I wouldn’t have any clue how to tell, and so know quite well that I could check and still be blown up. We also have information on the meta-reasoning level: I can see that parity would require reasoning in the same way about an immense number of things (opening the door to your house; opening the door to your office, opening new packages, &c.) and I can see that this, with high probability, would be seriously deleterious.
We also know that we are not omniscient, and that, therefore, any assessment of probabilities and utilities we actually make is necessarily tentative and subject to revision in light of new information, and the range of this is extensive. For instance, it could be that one of my students is in the Mafia and has put up on his Facebook page that he’s going to blow me up; that would revise the probability upwards. On the other hand, new information about my students may revise down my probabilities until for practical purposes I can’t really distinguish it from zero. (Since perfect precision is in practice impossible, there are positive probabilities I can’t actually distinguish from zero probabilities, because they are within the margin of error caused by imprecision.) I’m sure there are other things that could be added.

September 26, 2010 — 9:46
• Mike Almeida

I know very well that if I were to check and it weren’t thoroughly obvious that the car were rigged to blow, I wouldn’t have any clue how to tell, and so know quite well that I could check and still be blown up. We also have information on the meta-reasoning level: I can see that parity would require reasoning in the same way about an immense number of things (opening the door to your house; opening the door to your office, opening new packages, &c.) and I can see that this, with high probability, would be seriously deleterious.
But these are things for which there are analogous worries in the Wager. I know that ‘believe’ and ‘don’t believe’ are not the only options. I have no idea how much credence I have to place in God to ensure I get the best payoff. I know that there are many other situations (other than the Wager) in which parallel reasoning would lead me to odd decisions, etc. There is information I might receive that would have me revise my probabilities up or down in the Wager, too.
What I need to know is whether there is information you are in possession of in the explosives case for which there is no parallel information in the Wager and because of which you know you needn’t check under the car.

September 26, 2010 — 15:13
• Whether or not analogous worries exist for the Wager depends heavily on how restricted or unrestricted one takes its aim to be; I think you take the part summarized by the decision matrix to be relatively unrestricted in aim, whereas I take it to be simply a response to a very specific type of position. In context, it’s a response to someone who says that Christians, and Catholics in particular, are irrational not to suspend judgment given that there is no real evidence one way or another; this restricts the number of choices on the table already, and thus the multiple choice analogue does not arise.
Likewise, I take part of the point simply to be an encouragement to inquire further. It’s meant to shake the complacency of those who think that because they do not already see the evidence for X they can therefore simply dismiss X; I don’t see its point as being to tell you definitively what you should do. You’d only get a parallel to this if the explosives case were (e.g.) only intended to get you wondering if there are any safeguards and regulations in place to protect people from such explosions, and, if not, whether there was a reasonable and available way to remedy it. But once you start tailoring the explosives case in this way, the argument gives a more and more modest conclusion and becomes more and more obviously reasonable. When people use Wager-style arguments for the claim that we should do something about global warming, people may dispute about whether there are in fact any reasonable and available options; except when being curmudgeonly they don’t really dispute the idea that the wager would show that we should at least take the matter seriously enough to look into it further.
But even taken unrestrictedly I doubt that there is any real analogue on the meta-reasoning point. It’s simply not enough to recognize that “there are many other situations (other than the Wager) in which parallel reasoning would lead me to odd decisions”; this will be true of virtually all reasoning for decisions. If I make a choice of ice cream by saying eeny-meeny-miny-mo, it’s a misapplication of parity to say that there are many other situations in which parallel reasoning would lead me to odd decisions; e.g., picking what grade to give a paper. But the reason for this is obvious enough: decisions are sensitive to the kind of circumstances involved, where those circumstances affect the nature of the decision. It is not merely the reasoning that needs to be parallel: the structure of the decisions has to be parallel as well. Likewise, the decisions would need to be not merely odd; they need to be deleterious or clearly detrimental to life, and the merely quixotic would not necessarily be detrimental in this way. Because the reasoning in the explosives case carries over to anything that could be rigged to blow on being opened or started or closed (since it is the mere fact of a non-zero probability of rigged explosion that plays a role in the reasoning) it clearly would be detrimental to life: this morning I would have had to check my alarm clock before turning it off, check my bathroom door before opening it, check my computer before turning it on, check my refrigerator before opening the door, check my microwave before opening the door or pressing ‘start’, and I wouldn’t have even gotten out of the house yet. There is a non-zero probability that someone came quietly into the house at night and rigged any of these with explosives. But where is the parallel to this in the Wager case? I don’t see it.

September 26, 2010 — 16:44
• Mike Almeida

I guess I’m lost wrt how the differences make a difference. That is, I thought we were talking about differences in the explosives case that would undermine the case. Differences that would make it rational not to check under the car.
What I’m trying to underscore about the Wager is that, to many people, Pascal has just dreamed up a possible outcome that’s really terrible and one that’s really good, and he’s urged that they have non-zero epistemic probability. And he tells us that these should figure in our decision making. Many find that odd; I do too. Now clearly, what Pascal describes might be a complete impossibility. There may well be no world at all in which there is anything like the payoff matrix he describes. I find that observation cogent. I can dream up scenarios too, scenarios that have a non-zero epistemic probability. And I can dream up an extreme payoff matrix. It’s not hard. My point is that, once we start doing this, and once we make these outcomes relevant to decision making, it’s purely a technical matter (puttering around with the details) to get the bizarre recommendation that I ought to check under my car for explosives. There is no principled problem here to getting this bizarre result; you just need the right description of the options and the right description of the partition of possible outcomes. But we should not let these dreamy outcomes affect our decision making. There should be restrictions on outcomes that can play a role in determining what it is rational to do.

September 26, 2010 — 18:59
• Mike:
I wonder if the St Petersburg paradox plays out differently if you’re risk averse.
In any case, I think this is more like an upside-down St Petersburg paradox. If you don’t pay $20, you will have a 1/2 chance of a nanosecond of pain, a 1/4 chance of two nanoseconds of pain, a 1/8 chance of four nanoseconds of pain, etc. I don’t know what to say about this.
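The inverted game diverges in exactly the same way: each term contributes half a nanosecond of expected pain, so the expectation is unbounded even though any long stretch of pain is wildly improbable. A sketch (my framing, not the commenter's):

```python
# Expected pain (nanoseconds) in the upside-down game: with probability
# 1/2**k you suffer 2**(k-1) ns, so each term adds 0.5 ns and the
# expectation diverges.
def expected_pain(terms):
    return sum((0.5 ** k) * 2 ** (k - 1) for k in range(1, terms + 1))

def prob_pain_exceeds(ns):
    """Chance of suffering strictly more than `ns` nanoseconds (ns: int)."""
    k = ns.bit_length() + 1        # first round k with 2**(k-1) > ns
    return 0.5 ** (k - 1)          # tail probability: rounds k, k+1, ...

print(expected_pain(1000))         # 500.0: grows without bound
print(prob_pain_exceeds(1000))     # about one in a thousand
```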

September 27, 2010 — 8:42
• Mike Almeida

There is that way to formulate the SPP, but you can also formulate it as increasing your chances at eternal communion with God. So take the payoff .5(\$1) + .25(\$2) + .125(\$4) + .0625(\$8) + … + (1/2^n)(\$2^(n-1)) + … = \$oo+, where you let \$1 be equal to the value of some infinitesimal chance 1/oo of eternal communion oo+. Let \$2 = some larger infinitesimal chance 2/oo of eternal communion with God oo+. And so on upward. Though the expected value of the game is eternal communion with God–and you should be willing to pay any amount to play–you wouldn't pay much, or anything at all, to play. I think I'd pay nothing.
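A quick sketch of why the series diverges (the truncation points below are illustrative, not from the thread): with the nth term having probability 1/2^n and payoff \$2^(n-1), each term is worth exactly fifty cents, so the truncated expectation grows without bound as more branches are included.

```python
# St. Petersburg-style series: the nth term has probability
# 1/2**n and a payoff of 2**(n-1) dollars, so each term
# contributes exactly $0.50 to the expectation.

def truncated_expectation(n_terms: int) -> float:
    """Expected payoff of the game cut off after n_terms branches."""
    return sum((1 / 2**n) * 2**(n - 1) for n in range(1, n_terms + 1))

for n in (10, 100, 1000):
    # The truncated expectation grows linearly: n/2 dollars for n branches.
    print(f"{n} terms -> ${truncated_expectation(n):.2f}")
```

Since every branch adds the same finite increment, no finite truncation ever settles the value, which is what makes the "pay any amount to play" recommendation look absurd.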

September 27, 2010 — 13:58
• Hi, Mike,
I guess that makes sense. I suppose I have difficulty seeing how anything said about the explosives case can be carried over to the Wager without major changes; part of the point Pascal makes later in his arguments is that the sort of life the Wager commits you to is actually pretty normal: everything changes, but it's still recognizably a livable life. If we're really taking the explosives case seriously, I think that ends up being a very important asymmetry. The obvious reason we shouldn't bother with dreamy alternatives is that they're not really livable, and I don't see what other reason we could have for rejecting them, short of having direct evidence (theoretical reasons to believe, to use the phrase I used before) that they're definitely false.
Because we might be talking past each other here, I should say (though it doesn't change anything about your argument) that, strictly speaking, Pascal never says that anything really bad happens to you if you don't believe; all of the explicit Wager formulations we have in the Pensées assume nothing bad happens to the non-believer, beyond possibly missing out on good things. The extremely-bad-things version is really due to Arnauld and Nicole in the Port-Royal Logic. That version might be directly from Pascal (it certainly was indirectly), but it might not. Because the Port-Royal Logic was published before the Pensées, that version of the Wager became the most widely recognized one. And although the result is actually a mutated Wager, it's pretty common to treat it as 'the Wager', and it's an interesting argument in its own right, so I've been assuming that we're concerned with the mutated version, i.e., Pascal's argument as it would be if it had the mutation, rather than the Port-Royal version alone or Pascal's real Wager. It occurs to me that this might have led to some confusion in the thread above; you might have intended to consider only the Port-Royal version on its own rather than the form it would take if inserted into Pascal's actual Wager. That would make most of my objections above beside the point. (But I still am not sure how we can build an adequate bridge between the explosives case and the Port-Royal version on its own, for the same reason given above.)

September 27, 2010 — 20:47
• Jason S.

It is also a live possibility that God exists but there exist great negative consequences for cultivating belief in him. (Perhaps God regards belief in him as irrational, and he dislikes irrational followers.) It's also possible that God exists but there exist great negative consequences for cultivating belief in the wrong concept of him (a very real idea in some religious views). To incorporate this into the analogy, the act of looking itself also has to have some possibility of triggering the explosives.

October 3, 2010 — 11:05
• Mike Almeida

In order to incorporate this in the analogy, the act of looking itself also has to have some possibility of triggering the explosives.
I don't think this would importantly alter the problem. It cannot be true that I should not look under the car because looking might set off some explosives. That cannot be the reason not to look.

October 4, 2010 — 7:49