Being better off than the God of Open Theism
April 21, 2007 — 21:31

Author: Alexander Pruss  Category: Open Theism  Comments: 19

Suppose Open Theism (OT) holds. There is a possible world where a demon is better off doxastically than God in respect of future free actions of creatures. On some views of inductive knowledge, there is a possible world where a demon is better off epistemically than God in respect of future free actions of creatures. Hence OT is false. This may be all old hat. But, hey, it's fun to reinvent the wheel–the thrill of discovery, of seeing it roll, etc. Now I need to argue for my above claims.


Argument. Suppose OT is possible. Then:

  1. There is an infinite set W1 of worlds w which contain the God of OT, and many free creatures that perform many free choices, and exactly one very big book, in Chinese, residing on Pluto, where that book purports to describe the complete history of all the free actions of creatures over all the time during which free actions are done.
  2. There is an infinite subset W2 of W1 consisting of worlds such that every sentence of that book ends up being true.
  3. There is an infinite subset W3 of W2 consisting of worlds w such that a logically consistent and very smart demon has got hold of the book before most of the free actions of creatures in w were performed, where that demon was instantly convinced that every sentence of the book is true, and where that demon formed no beliefs about the free actions of creatures besides the claims in the book.

Let w be a world from W3, let D be such a demon, and let B be the book. Then D has correct beliefs about all future free creaturely actions, and these beliefs fully specify all future free creaturely actions. Now the God of OT in w only has probabilistic beliefs about these actions. These are all true, but the true beliefs of D about these actions are much more specific. Then, in w, D is doxastically better off than God vis-a-vis future free creaturely actions. He has true beliefs about many specific things about which God has only probabilistic beliefs.

It does not follow that D is epistemically better off than God is. Indeed, it seems as if D does not know any of these true propositions about the future. But it is theologically bad enough that D is doxastically better off than God, especially if it is in an area where the tradition holds that God's knowledge is paradigmatically impressive (think of how impressed we are at correct prophecy). And, as Socrates notes, all we need to successfully guide our actions is true belief, not knowledge. The God of OT will be deeply surprised at D's uncanny correctness of beliefs about future free action, and will have been outdone doxastically by D in this respect. The possibility of outdoing God doxastically in such a total and impressive way is deeply repugnant theologically. Hence OT is false.

But maybe D's unreasonableness in accepting B makes him doxastically poorly off. I am not sure. The telos of the doxastic faculty is grasp of the truth. One is worse as an epistemic agent in being unreasonable, but if the unreasonableness actually contributes to the telos of the doxastic faculty, this unreasonableness does not decrease doxastic welfare. There are circumstances where by being imprudent one will end up better off, and there are circumstances where by being unreasonable one will end up better off doxastically.

Now suppose that D gets the book early on in the world's career, and instead of believing it is right from the outset, he carefully compares the book's predictions to actual events, and finds they match. He forms an inductively justified belief that all of the book's predictions of actual events are true. The evidence is incredibly impressive here. Now, D is no longer unreasonable in accepting the predictions. In fact, he would be unreasonable not to. So we can no longer excoriate D for his intellectual vices.

Moreover, arguably, this is knowledge. It is a plausible claim about induction that if all Fs hitherto observed have property P, and if all future Fs will in fact end up having P, and if the class of Fs is not gerrymandered, and if there is no undefeated defeater available, then one knows that all future Fs will end up having P. Here let the Fs be free creaturely actions, and let P be the property of having been correctly predicted by the book B.

One might object that the demon has a defeater for his belief in the reliability of B. For the arguments for OT, if sound, will establish that the future is inherently contingent and essentially unpredictable, so that if B has been right so far, it has been right only by a fluke, and there is no reason to suppose the flukish rightness will continue.

However, the demon may have never thought of these arguments for OT. Moreover, the demon has a defeater for these arguments. The arguments for OT contain controversial premises. These premises may be plausible (I don't think so myself) but they are not indisputable. If these controversial premises are true, then the success of B so far is incredibly improbable if, say, a billion described well-balanced choices have been verified (as we may suppose; a choice is "well-balanced" if the agent has reasons and desires well-balanced on both sides; I am simplifying some inessential technical issues here). This success gives good reason to reject at least one of the controversial premises. If the plausible claim about induction holds, then D knows that B is always going to end up being right. And so D is epistemically better off than the God of OT.
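To put a number on that improbability: under the simplifying assumption that each well-balanced choice is an independent 50/50 event, the chance of B's getting a billion of them right can be sketched as follows (the figures are illustrative, not part of the argument):

```python
import math

# Probability of correctly predicting n independent 50/50 choices is (1/2)**n.
# That underflows floating point for n = 10**9, so work with its base-10 logarithm.
n = 1_000_000_000
log10_prob = -n * math.log10(2)

print(log10_prob)  # roughly -3.01e8: a probability with ~301 million leading zeros
```

On the controversial premises, then, B's track record is about as improbable as evidence gets; that is the sense in which it gives good reason to reject at least one of those premises.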

That D should be epistemically better off than God in connection with future free action is theologically repugnant. (Query: Why can't the God of OT use the same inductive argument as D does? Because the God of OT unshakably knows the premises of the arguments for OT to be true.)

Comments:
  • Philip

    Hey Alexander,
    I think a lot of open theists would deny that D has true beliefs. They could say that propositions which state that future contingent events will occur either do not have truth value or are all false. Either way, D’s beliefs about the future would be false.

    April 22, 2007 — 3:52
  • Philip:
    Yes, this is a good point. For simplicity, the above formulation assumed that there are facts about future events.
    However, at a couple of points in the post, instead of saying that D’s beliefs are true, I said they will end up true. I did this to hint at this issue.
    To really get around the issue, the whole construction needs to be run in hindsight. Suppose t1 < t2 < t3. It is now t3. At t1, D believed that Jones would mow the lawn. At t2, Jones mowed the lawn. Stipulation: If things are as I have described, then it is true to say that D’s belief at t1 ended up being true. The argument shows that it is possible that D’s earlier beliefs about the then-future (but now past) free creaturely actions ended up true. This is a claim that can only be made in hindsight, but this claim should by itself be damaging to OT, since it implies that we can correctly say in hindsight that D’s doxastic states were more precise than God’s and ended up correct–which is, intuitively, all that really matters for them.
    More precisely, we need to relativize being doxastically better off to pairs of times. x is doxastically better off than y with respect to a proposition p relative to (t1,t3) provided that it is true at t3 that either p is true and x believed at t1 that p is true while y did not believe at t1 that p is true, or p is false and x believed at t1 that p is false while y did not believe at t1 that p is false.

    April 22, 2007 — 6:57
  • I’m not seeing how the demon is doxastically better off than God in this case.
    Consider a smaller case. X and Y want to bet on the outcome of a horse race. X chooses horse H1 to win and Y chooses horse H2 to win. By sheer chance, H1 wins. Would you say that X was doxastically better off than Y? Was X somehow epistemically more impressive? Hardly. X was no better positioned epistemically than Y was–the only difference is that X’s guess was luckier. What is impressive is the luck, not the epistemology. The same thing holds with the demon. The demon is just phenomenally lucky. Even after he possesses the book, the probability that the book accurately describes the entire future approximates zero. What is impressive about the demon is his incredible luck–that’s about it. What’s worse, the demon is thoroughly irrational in believing that the contents of the book describe the future. No epistemically ideal agent would acquire such a belief.

    April 22, 2007 — 9:33
  • Mike:
    I am distinguishing between doxastic and epistemic welfare. Our doxastic telos is true belief. Our epistemic telos is knowledge. Doxastic welfare, then, has to do with how many of one’s beliefs are in fact correct. (I personally care a lot more about truth than about knowledge. If God offered to implant in me lots and lots of true beliefs, I’d be happy to go for that even if they didn’t count as knowledge.)
    Yes, D is just very, very lucky. Sure. He’s won the doxastic lottery in regard to future free actions, and he got a lot of doxastic welfare in the lottery. D is better off doxastically than the God of OT, since the God of OT needs to get to his beliefs by the sweat of his brow.
    However, take the second half of the argument, the inductive half. D has an extremely impressive inductive argument for thinking the book is right. He certainly need not be irrational in accepting it. (If someone kept on giving you correct tips on horses, you would be quite rational to assume that future tips from him would also be correct.) So D is not irrational. (And maybe, just maybe, he actually knows by induction.)

    April 22, 2007 — 22:38
  • No, it is irrational to believe what is in the book as D does. Compare there being three books to choose from, B1-B3. D’s justification for his belief in B1 (rather than B2 or B3 or just guessing, each of which is as reasonable as the rest) comes after his irrational decision to believe it.
    Whether true belief is as good as knowledge is a wide-open question, as I’m sure you know (ask Jon, certainly). But I guess I don’t see why D has achieved even the doxastic goal of true belief. D believes what is in the book. What’s in the book is not true. As pure chance would have it, it turns out true.

    April 23, 2007 — 7:26
  • Alex,
    It’s a very interesting question whether D is ever justified in believing the book. By hypothesis, the propositions in the book are all true (or will be true) by chance alone. Suppose instead that every time D wanted to know whether some proposition P was true, he flipped a fair coin. “Heads” for true and “Tails” for false. Suppose there’s a world in which the flips by sheer chance predict correctly every time. The coin is by hypothesis perfectly fair. So no matter how many times it is correct, D should not put any more credence than .5 that it will be correct on the next flip. That is, he is not justified in believing it will be right on the next flip, and even less justified in believing it will be right on the next 10 flips. Now suppose that it was by flipping a fair coin that, for each proposition P, either P or ~P was entered into the book. No matter how many times the book has been right, D is not justified in believing its next prediction.

    April 23, 2007 — 7:48
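The independence point in this comment can be checked with a small simulation (a sketch; the streak length of five and the trial count are arbitrary choices of mine, not part of the example):

```python
import random

random.seed(0)
hits_after_streak = 0   # times the flip after a 5-streak comes up heads
streaks = 0             # number of 5-streaks observed

for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(6)]
    if all(flips[:5]):              # the coin "predicted correctly" five times running
        streaks += 1
        hits_after_streak += flips[5]

# For a fair coin the conditional frequency stays near 0.5, however long the streak.
print(hits_after_streak / streaks)
```

However many past predictions have come out right, the simulated next-flip accuracy hovers at .5, exactly as the hypothesis of the case demands.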
  • Mike:
    Suppose that I indeed find that I have a die which is an excellent predictor of the stock market. To find out by how much a stock will change, I write the stock symbol on a piece of paper and toss my die on it. If it says 2, the stock will stay roughly constant between now and a year from now. If it says 3-6 the stock will go up: a little if it’s 3, more if it’s 4, more if it’s 5, and a lot if it’s 6. If it says 1, the stock will go down.
    I observe this happening over and over. I think I am justified in concluding that for reasons I cannot understand, somehow the die toss is an effective predictor of the stock market. I will not know how the die works. Suppose I do this a thousand times. The chances of getting it right by chance are very, very tiny. I am going to rationally conclude that something is fishy.
    There are many possibilities:

    • There is a causal connection I am unaware of that makes future stock market events be caused by my die tosses (maybe some influential financial players are snooping on my die and investing accordingly).
    • Stock market events are in fact deterministic, and there is some common cause of stock market events and of die toss results.
    • Open Theism is false, and God is making the die come out right for my benefit.

    It seems clear to me that it would be rational for me to accept that some such explanation, or maybe one beyond our ken, holds.
    But suppose the correlation in fact comes from pure chance. Then my belief that the next die toss will be predictive of the stock market will be a justified true belief.
    Will it count as knowledge? Myself, I suspect that it will not. However, it will be a rational belief and it will be true.

    April 23, 2007 — 10:40
  • I will not know how the die works. Suppose I do this a thousand times.
    No, this is exactly what we cannot do. We cannot assume there is some reason why the die is successful. By hypothesis, there is no reason why it is successful–indeed, there cannot be a reason why it is successful. The most important hypothesis in the example is that it is by chance alone that the die manages to predict correctly. Otherwise, if we remove that assumption, we are smuggling in the assumption that there is something to know about the future and some way to know it. But there is nothing to know about the future and, therefore, there is no way to know it. It is crucial to the situation that there is no mysterious fact that explains the success of the die. It is pure, sheer chance that it is successful. This is why it cannot be used to justify your beliefs.

    April 23, 2007 — 14:08
  • But suppose the correlation in fact comes from pure chance. Then my belief that the next die toss will be predictive of the stock market will be a justified true belief.
    Reconsider the coin example. It is by hypothesis a fair coin. On every occasion, it gives you a .5 chance of being correct, if you believe what it predicts–no matter how many times it has been correct. That is the hypothesis of the case. The only way you could be justified in believing it is correct on its current prediction is to assume that its accuracy now exceeds .5. But that clearly violates the hypothesis.

    April 23, 2007 — 14:14
  • Mike:
    I fear you’re going from what cannot be true to what cannot be thought to be true. If OT is true, then contingent future events cannot be reliably predicted. But even if OT is true, someone can rationally think that contingent future events can be reliably predicted.
    In the coin example, I might have a justified false belief that the coin flips are not random, or that there is some odd causal connection, or that OT is false, or that the choices are not really free, etc. Or I could simply suspend judgment over how it in fact works, and just believe that it works. Given that I have such strong inductive data that it works, this seems justified.
    Note also that the controversial premises in arguments for an open future can be reasonably thought to have a non-zero epistemic probability of being false. In fact, reasonable people can think that this probability is at least 0.001. I think we should agree on this. Well, so let’s suppose that D assigns probability 0.999 to the open future hypothesis based on the evidence. But then he observes a thousand choices going exactly as predicted. Let’s suppose these are equiprobable, independent binary choices, for simplicity. Then the probability of getting things right ahead of time given an open future is 2^-1000. This is so tiny that D has extremely good reason to downgrade his probability assignment to the open future hypothesis.

    April 23, 2007 — 20:54
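The downgrade described in the last paragraph can be sketched with Bayes' theorem (the 0.999 prior and the thousand independent binary choices come from the comment; lumping the alternative into a single "the future is predictable and B tracks it" hypothesis is my simplification):

```python
prior_open = 0.999        # D's initial credence in the open-future hypothesis
prior_closed = 0.001      # lumped alternative: the future is predictable and B tracks it

n = 1000                  # equiprobable, independent binary choices, all predicted correctly
lik_open = 2.0 ** -n      # on an open future, B's record is sheer chance
lik_closed = 1.0          # on the alternative, B's record is just what we'd expect

posterior_open = (prior_open * lik_open) / (prior_open * lik_open
                                            + prior_closed * lik_closed)
print(posterior_open)     # effectively zero (on the order of 10**-298)
```

Even a prior of 0.999 cannot survive a likelihood ratio of 2^1000 against it.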
  • If OT is true, then contingent future events cannot be reliably predicted. But even if OT is true, someone can rationally think that contingent future events can be reliably predicted.
    Oh, this is fine under the assumption that D does not know that there are no facts about the future. But if D knows that OT is true (and I was assuming he did) then the coin flipping cannot justify his beliefs about the future. If he does not know, then his epistemic probability might be positively affected by the coin flips.

    April 24, 2007 — 16:54
  • Mike:
    Knowing p is compatible with assigning a probability less than 1 to p. So even if D knew there are no facts about the future, the evidence might well make it reasonable for him to change his mind. (I guess I’m saying that the Socratic unshakability criterion for knowledge is incorrect.)

    April 24, 2007 — 22:54
  • Alex,
    We’re at cross purposes, I think. I wasn’t assuming that knowledge required certainty. My argument was this: if D knows that C is a fair coin and that C is being used to predict the future, then the fact that those coin-flips have been correct consecutively (say, thousands of times) is pure chance and does not provide you with any justification for believing that, on the next flip, it will predict correctly. It will not provide justification unless (contrary to a central hypothesis of the case) you assume that the coin is not fair, or the game has been fixed, etc. So fixing all of the assumptions of the case (including that the coin is fair, the game is not fixed, the coin has predicted correctly by sheer chance, etc.), you now consider the next flip of the coin. Its chances of being correct obviously do not exceed .5. So you are not justified in believing its prediction. And so it is not rational to believe it.

    April 26, 2007 — 9:08
  • Alexander Pruss

    Mike:
    Suppose you knew that a coin is fair (in the relevant context), but you tossed it a thousand times and each time got heads. I think this would provide you with very strong evidence that the coin is not fair, and would undercut your knowledge that the coin is fair, unless that knowledge was based on indubitable premises.
    It is quite possible for knowledge to be undercut by new evidence, and hence it is quite possible for one to be justified in rejecting what one knew. Of course, what one knows is true, but sometimes we are justified in rejecting what is true.
    Suppose that initially I know the coin is fair and assign credence 0.999999 to the claim that the coin is fair. Suppose I assign credence 0.000000001 to the claim that the coin is biased in such a way as to guarantee heads to come up. Plugging all this into Bayes’ theorem leads to the conclusion that, upon having received the evidence of the run of a thousand heads, the probability that the coin is fair will be very close to zero.
    Maybe we are talking at cross-purposes. Do you assume that knowledge implies a credence of 1? That seems too restrictive. I know that I am writing this post now, but my credence is less than 1. It is, just barely, epistemically possible (maybe of the order of 0.000001 — I could imagine being mistaken about a thing like this approximately once in a lifetime) that this is but a dream. I know that if I put cream in coffee and stir, it will spread out. But I also know that the probability that it will spread out is less than 1 according to quantum mechanics.

    April 26, 2007 — 9:28
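Plugging this comment's own numbers into Bayes' theorem (restricting attention to the two named hypotheses; the residual credence left over for other hypotheses is ignored in this sketch):

```python
prior_fair = 0.999999     # initial credence that the coin is fair
prior_rigged = 1e-9       # initial credence that it is biased so as to guarantee heads

n = 1000                  # observed unbroken run of heads
lik_fair = 2.0 ** -n      # chance of that run on a fair coin
lik_rigged = 1.0          # chance of that run on the rigged coin

posterior_fair = (prior_fair * lik_fair) / (prior_fair * lik_fair
                                            + prior_rigged * lik_rigged)
print(posterior_fair)     # vanishingly small: the run all but refutes fairness
```

Despite the near-certain prior, the posterior that the coin is fair ends up very close to zero, just as the comment says.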
  • Suppose you knew that a coin is fair (in the relevant context), but you tossed it a thousand times and each time got heads. I think this would provide you with very strong evidence that the coin is not fair, and would undercut your knowledge that the coin is fair, unless that knowledge was based on indubitable premises.
    I don’t think so. The demon knows that the future is in fact open. Naturally he places credence 1 on the proposition: the future is open. Along with many others, I don’t reserve credence 1 for necessary truths or tautologies alone. I place credence 1 in anything that I fully believe is true. For instance, my credence is 1 that the fair coin comes up heads or tails, my credence is 1 that the sun will rise tomorrow, and my credence is 1 that I will never deadlift 700 lbs or run a 4 minute mile. And I am prepared to bet on these accordingly. But none of these is a necessary truth. The demon might well place credence 1 that the future is open. But then no sequence of lucky predictions will lower that credence.

    April 27, 2007 — 16:15
  • Mike,
    All I need for my argument is that the demon can reasonably place credence less than 1 on the openness of the future. I think that placing credence less than 1 on the openness of the future is compatible with knowing that the future is open. You disagree. But I do not think we need to settle that disagreement for my point to go through, namely that the demon can reasonably come to the conclusion that the future is not open, and that the book or coin or whatever predicts the future.
    I agree that it can be reasonable to assign credence 1 to things that are not necessary truths or tautologies.
    However, I find Lewis’s principal principle very plausible in many contexts.
    Quantum mechanics implies there is a non-zero probability that you or I will run a 4 minute mile. Particles might just pop into a configuration that produces supercharged muscles. With a bit of thought, I can even give you a lower bound on that probability. (Here’s one. Let f_1(n) = 2^n. Let f_(k+1)(n) be f_k(f_k(…(n)…)) where there are f_k(n) occurrences of f_k. Let a_n = f_n(n). Then a_n is an incredibly quickly rising sequence. I am pretty sure that the probability that I run a 4 minute mile is at least 1/f_100(100).)
    Similar quantum mechanical principles imply that it is irrational to assign credence 1 to just about any claim about the physical world, barring some way of knowing that transcends the empirical (e.g., faith).
    But I do not think it follows from this that we have no way of knowing these claims about the physical world. We know them, but we assign them probabilities like 1 – (1/f_100(100)).

    April 27, 2007 — 16:34
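The tower of functions defined in this comment can be written out directly; it is only evaluable for the tiniest arguments, which is the point (this is a sketch of the definition exactly as stated, nothing more):

```python
def f(k, n):
    """f_1(n) = 2**n; f_{k+1}(n) applies f_k to n, f_k(n) times in a row."""
    if k == 1:
        return 2 ** n
    result = n
    for _ in range(f(k - 1, n)):  # f_{k-1}(n) iterations of f_{k-1}, per the definition
        result = f(k - 1, result)
    return result

print(f(1, 10))  # 1024
print(f(2, 1))   # f_1 applied f_1(1) = 2 times to 1: 1 -> 2 -> 4, so 4
# Already f(2, 2) has nearly twenty thousand digits; f_100(100) is beyond all writing.
```

The explosion is immediate: one more level of nesting takes the values from the thousands to the astronomically unrepresentable, which is what makes 1/f_100(100) such a safely tiny lower bound.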
  • Similar quantum mechanical principles imply that it is irrational to assign credence 1 to just about any claim about the physical world. . .
    Alex, I agree about the principal principle. I have no idea how you’d know that that is a lower bound on the chances that you run a 4 min. mile! In any case, quantum principles do not have the implications you note. It is perfectly consistent with quantum principles that the world contains many deterministic enclaves. For all we know, your not running a 4 min. mile is included in such an enclave. But leaving that aside, there is nothing in quantum principles that makes it at all chancy that the future is open. The openness of the future does not supervene on any set of natural properties. It is instead a metaphysical fact independent of physical facts about the world. So quantum features of the physical world are not relevant to D’s credence about openness. He should assign it credence 1.

    April 27, 2007 — 18:12
  • Dear Mike,
    If QM is correct, then particles can simply indeterministically tunnel to wherever their wave functions have non-zero values, and the wave functions generally will have non-zero values almost everywhere. So it is possible that as I run, my particles just keep on tunnelling ahead, thereby increasing my effective travel speed. Alternately, it is possible with non-zero probability for particles from my clothes to tunnel into my legs and form muscles of the sort that a cheetah has.
    You’re right that the quantum stuff is not directly relevant to the credence about openness. I was simply using it to illustrate the thesis that one can know something and assign it a probability less than 1.
    Now, if the premises for the argument for OT are not completely self-evident, and they’re not, and if they are not certain pieces of sensory data, and if they are not guaranteed by supernatural faith (which D probably lacks anyway), then it may very well be that D is within his epistemic rights to assign a credence strictly less than one to their conjunction. (Hey, I assign zero to them. 🙂 )

    April 27, 2007 — 19:30
  • Correction: I assign zero to the conjunction of the premises, not to each particular one.

    April 27, 2007 — 19:30