On Being Justified That You Will Be Justified
December 7, 2007 — 7:48

Author: Andrew Cullison  Category: Religious Belief  Comments: 12

Here is an example of some reasoning that should be of interest to philosophers of religion. We don’t have a complete science of the brain yet, but look at what recent neuroscience has shown. Don’t the recent successes of neuroscience give us good reason to believe that there will one day be adequate evidence for the proposition that the brain is completely scientifically explainable? Shouldn’t we then now think that the brain is completely scientifically explainable?
I think many would find something like the above reasoning plausible. I have to admit I find it plausible, and there are a few other kinds of arguments out there in philosophy that make a similar kind of move.
But here is where I start to get worried. For these strategies to work, something like the following principle must be true.
(1) If S is justified in believing that at some future time S will be justified in believing P, then S is justified in believing P now.
However, this won’t do. I am justified in believing that, at some future time, I will be justified in believing that my dog is dead. After all, I’m pretty sure my dog isn’t immortal. That doesn’t mean I’m justified in believing that my dog is dead now. We might think there is a quick fix that can get around this.


(1*) If S is justified in believing (at T) both that
i. At some future time S will be justified in believing P
AND
ii. if P is true at some future time, then P is true at T,
then S is justified in believing P at T.
This gets around the dog worry, and it would allow the inference in the first paragraph to go through. But suppose I have severe Alzheimer’s. My wife tells me that once I reach the point where I can’t remember her anymore, she will insist to my face that she has never met me before. I am justified in believing right now that I will be justified in believing that I never met Sarah. I’m also justified in believing that if I never met Sarah at some future time, then I never met her now. So this revised principle doesn’t work either.
You might think the problem is that in both counterexamples the person in question is justified in believing at T that the relevant proposition is false at T.
(1**)
IF S is justified in believing (at T), both that
i. At some future time S will be justified in believing P
AND
ii. if P is true at some future time, then P is true at T,
AND
iii. S is not justified in believing that P is false at T,
THEN S is justified in believing P at T.
But we can modify the Alzheimer’s case: suppose I learn that someone named Sue is going to do the same thing that my wife does. I am given very good reasons to believe that I should right now be skeptical about whether or not I have ever met Sue. So, I’m not justified in believing (at T) that I have ever met Sue. In this case, I would satisfy the antecedent of (1**), but I don’t think I’m justified in believing (at T) that I never met Sue.
Thoughts?

Comments:
  • Hi Andy,
    The reflection principle is similar to the principles you’re considering, except that the reflection principle is formulated probabilistically. People have given similar counterexamples to the reflection principle; the basic form being that you shouldn’t defer to your future self because your future self could be in an epistemically worse position than yourself.
    By the way, I don’t think neuroscience has come even close to showing that m=b. Neuroscience can’t distinguish between lawful dependence and identity.

    December 7, 2007 — 9:12
  • Hi Ted,
    I agree that neuroscience hasn’t come close. I just list this as an example of something I’ve heard people say, and I’m more interested in whether the argument could go through – if we grant the claim about the success of neuroscience.
    I’m interested in these discussions of the reflection principle. Where does that come up?

    December 7, 2007 — 9:20
  • …something similar in van Fraassen’s “Belief and the Will” (1984, the Journal of Philosophy) and this (2005)

    December 7, 2007 — 9:45
  • “Don’t the recent successes of neuro-science give us good reason to believe that there will one day be adequate evidence for the proposition that the brain is completely scientifically explainable.”
    The future of science is unpredictable. The argument form: “Many As are Bs, therefore all As are Bs” is a bad one, either inductively or deductively. In particular, the argument: “We have explained many As in terms of C, therefore all As can be explained in terms of C” is a bad one.
    Here is a relevant post of mine.

    December 7, 2007 — 11:36
  • But learning about this forthcoming evidence does justify me now in believing that certain states of affairs will obtain in the future. For instance, I am justified now (at t) in believing that,
    1. (∃n)(at (t + n) I don’t know my wife & (my wife & I are alive)).
    2. (∃n)(at (t + n) I don’t know Sue & (Sue & I are alive)).
    Had I not received news about what I will be justified in believing in the future, I would not now be justified in believing (1) and (2).

    December 7, 2007 — 11:55
  • Enig,
    Thanks for those references. I’ve already checked them out and they’re very helpful.
    Alexander,
    I think you’re right that the first step in these arguments is probably wrong, unless proponents of these arguments can find some way to restrict the “Many As are Bs, so All As are Bs” inference, though I’m not sure how that would go. So I agree that a good way to take down these arguments would be to challenge that step.
    Generally: I’m still interested in how to formulate the other epistemic principle that seems to be assumed in these kinds of arguments. As Mike notes, there is something to the idea that justification that we will be justified in believing P justifies us now in believing P. But I’m sort of at a loss as to how to spell out the principle.

    December 7, 2007 — 12:57
  • Andrew,
    I found “Belief and the Will” very insightful, but also, very dense and difficult. A nice and clear exposition of it is in Plantinga’s Warrant: the Current Debate, chapter 6, and his critique is in chapter 7. What’s nice about the discussion is that you can see clearly how Reflection connects with Bayesian coherence and conditionalization.

    December 7, 2007 — 13:33
  • Andy,
    Besides the van Fraassen references, you can check out Adam Elga’s “Reflection and Disagreement” (and the references at the end of his article). There’s a pretty substantive literature on the reflection principle, which will probably come up in searches on “van Fraassen reflection principle.” I think that some version of the reflection principle is true; it reflects a cross-temporal consistency requirement on rationality. I’ve been working on the disagreement literature a bit, and I defend a version of the reflection principle that treats one’s future (or past) self as an epistemic superior.

    December 7, 2007 — 13:56
  • Andrew Moon,
    That Chapter 7 was enormously helpful. Thanks. I just skimmed through the relevant sections.
    Ted,
    I like your idea. So, I suspect we would reformulate the principle I’ve been discussing as something like…
    IF S is justified in believing (at T), both that
    i. At some future time S will be justified in believing P
    AND
    ii. if P is true at some future time, then P is true at T,
    AND
    iii. S is justified in believing that in the future S will not be (insert conditions that disqualify someone as an epistemic peer)
    THEN S is justified in believing P at T.
    That has a ring of plausibility to it.

    December 7, 2007 — 14:11
  • Suppose I believe it is likely that a certain piece of evidence E will be found, and I also believe that P(H|E) is high. Then it seems I should now take it that the probability of hypothesis H is at least as big as P(H&E)=P(H|E)P(E). If both P(H|E) and P(E) are high, then their product will be high, and so P(H) will be high.
    This isn’t full Reflection, I suspect, but in the case at hand it’s all one needs. Let’s be more concrete. I take it that the idea is that we have evidence that we will discover that every aspect of mental functioning is correlated to an aspect of neural functioning on which it supervenes. Let E be the correlation claim. Let us suppose P(physicalism about human minds|E) is high. If P(E) is high, then P(physicalism about human minds) is high. So that works.
    The previous paragraph was predicated on “discover” being a success term. If it’s not a success term, then we have problems. We need to consider whether we can trust that future science. For instance, we have to consider whether the expected “discovery” wouldn’t be the product of discarding phenomena incompatible with it, etc. Suppose we have evidence to think that science will continue to be a reliable guide to truth. Let D be the claim that we will “discover” E to be true, in the weaker sense of “discover” that is compatible with error. So, then P(physicalism about human minds) is at least as great as P(H&D&E)=P(H|D&E)P(E|D)P(D). By the above assumptions, all three factors on the right are high, so the left hand side is moderately high (but lower!).
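    The arithmetic in the two bounds above can be sketched quickly (a minimal illustration; the 0.9 values are made-up placeholders, not figures from the comment):

```python
# Lower bound on P(H) via P(H) >= P(H & E) = P(H | E) * P(E).
# All probability values below are assumed for illustration only.
p_H_given_E = 0.9   # assumed: P(physicalism | correlation claim E)
p_E = 0.9           # assumed: P(E)
bound_simple = p_H_given_E * p_E   # roughly 0.81

# Weaker "discover" reading: P(H) >= P(H | D & E) * P(E | D) * P(D),
# where D is the claim that science will "discover" E
# (in the error-compatible sense of "discover").
p_H_given_DE = 0.9  # assumed
p_E_given_D = 0.9   # assumed
p_D = 0.9           # assumed
bound_weak = p_H_given_DE * p_E_given_D * p_D   # roughly 0.729

# The weaker reading multiplies in one more factor below 1,
# so it always yields a strictly lower bound.
print(bound_simple, bound_weak)
```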
    But all this is fishy because the evidence for what science will discover is weak. We know that anomalous phenomena are found frequently. Most of them can be fitted into preexisting paradigms, but some can’t. We know that science keeps on finding very surprising stuff.
    And besides that, there is the skeptical meta-induction. Almost all scientific theories are disproved within three hundred years. Hence, almost all scientific theories are false. 🙂

    December 7, 2007 — 14:43
  • First comment! Hope you don’t mind.
    “…then I never met her now.”
    But isn’t this based on her saying something that is false? Her saying it doesn’t actually make it so. How then can you be justified in believing it, simply because it was said? She wasn’t justified in saying it in the first place.
    “Shouldn’t we then now think that the brain is completely scientifically explainable?”
    Sure, why not. I’m not sure, though, of two things:
    1) Does a complete explanation of the brain also mean a complete explanation of the mind?
    2) How would such an explanation preclude in any way the reality of something “supernatural” (which no doubt is what atheists will claim; it really is the implication of the question)?
    I think regarding #1, we can’t ever know that we’ve explained function entirely. This would really just be induction, wouldn’t it? Who knows what lies above and beyond our capacity to apprehend?
    And regarding #2, how would an explainable “interface” negate that which can be interfaced with? I mean, this kind of logic leads to absurd conclusions: you can explain how a keyboard works… and that proves that the computer doesn’t exist and is only a byproduct of the keyboard’s function…
    Am I coming out of left field? I don’t want to assume you were going somewhere you weren’t. I was just following your initial question to its implications.
    I’m really sorry if this was too tangential!

    December 7, 2007 — 15:36
  • Andrew Cullison,
    Great! I’m glad to be of help. The explanation and critique are pp. 148-161, but you probably know this.

    December 8, 2007 — 10:54