Virtual Colloquium: Matthew A. Benton, John Hawthorne, and Yoaav Isaacs, “Evil and Evidence”
November 18, 2016 — 6:00

Author: Kenny Pearce  Category: Problem of Evil  Comments: 26

It’s Friday again, and time for the Prosblogion Virtual Colloquium! A brief administrative note: there will be no colloquium next week (November 25) due to the American Thanksgiving holiday. We will return on December 2.

For today’s colloquium, Matthew Benton presents “Evil and Evidence,” a paper he co-authored with John Hawthorne (USC) and Yoaav Isaacs (UNC). Dr. Benton received his PhD from Rutgers in 2012 and subsequently held positions at Oxford and Notre Dame. Currently, he is assistant professor of philosophy at Seattle Pacific University. His papers on epistemology and other topics have appeared in such journals as Analysis, Philosophical Studies, Synthese, and Philosophy and Phenomenological Research. Additionally, he is co-editor (with John Hawthorne and Dani Rabinowitz) of Knowledge, Belief, and God: New Perspectives in Religious Epistemology, soon to be published by Oxford University Press.


Evil and Evidence

Introductory Comments by Matthew Benton

The problem of evil presents the most prominent argument against the existence of God. Recent probabilistic or evidential versions of the argument, due especially to William Rowe (esp. “The Problem of Evil and Some Varieties of Atheism,” 1979; cf. also 1984 and 1996), suggest that the existence of evil (or its distribution and magnitude) is evidence against the existence of God. As such, these arguments claim that, at least in the abstract, evil makes the existence of God less likely; and perhaps, even given all of the other available evidence, it is strong enough evidence to make belief in God problematic.

Skeptical theists contend that these are not good arguments, and many go so far as to deny that evil is evidence against the existence of God. To cite just a few prominent examples: Peter van Inwagen (“The Problem of Evil, the Problem of Air, and the Problem of Silence,” 1996, 169-71) says that “While the patterns of suffering we find in the actual world constitute a difficulty for theism…, they do not—owing to the availability of the defense I have outlined—attain the status of evidence”. Daniel Howard-Snyder and Michael Bergmann (“Evil Does Not Make Atheism More Reasonable than Theism,” 2004, 14) argue for the conclusion that “grounds for belief in God aside, evil does not make belief in atheism more reasonable for us than belief in theism”; and Richard Otte argues that “theists should not believe [that] evil, or our ignorance of a good reason for God to permit evil, is evidence against religious belief or the existence of God, at all” (“Comparative Confirmation and the Problem of Evil,” 2012, 127), and that “at best, the theist should refrain from judgement about whether evil is evidence against the existence of God” (2012, 131).

Skeptical theists have various reasons for arguing as they do, involving such notions as ‘CORNEA’ (the ‘Condition Of ReasoNable Epistemic Access’; Wykstra “The Humean Obstacle to Epistemic Arguments from Suffering,” 1984), epistemic appearances, ‘gratuitous’ evils, ‘levering’ evidence, the representativeness of goods, and radical skepticism about the probabilities of evil on the hypothesis of theism, or of no good we know of justifying the kinds of evil in the world. In this essay, we consider each of these notions and aim to dispel some confusions about them, and along the way attempt to clarify the roles of such notions within a probabilistic epistemology. In addition, we examine the role that distinct accounts of evidence play in the discussion, and we develop new responses to the problem of evil from both the phenomenal conception of evidence and the knowledge-first view of evidence.


The full paper is available here. Comments welcome below.

Comments:
  • Tim Perrine

    Hi Matt (et al.)

    Thanks for posting the paper; I’ve been looking for an excuse to re-read it, and this provides the perfect excuse! Since I’m familiar with the CORNEA stuff, I’ll just restrict my comments to your discussion of that principle.

    First, I’m not sure I understand how the counterexample in 6.1 is supposed to work. I take it the intuitively unacceptable result is Tom’s coming to the conclusion that “I guess I don’t have reason to think that Susan is trashing the office because I embezzled our company’s money.” But I don’t know if you’ve told me enough about the case for it to be clear that this is intuitively unacceptable.

    For instance, suppose part of Tom’s past experience is overhearing people say that Susan is known to make similar arithmetical errors, or that, when she is very angry in her personal life, that spills over to her professional life where she makes wild accusations, or that she is paranoid about business partners embezzling money. If these are part of Tom’s past experience (and we’re keeping them “fixed”—though I’m not sure I quite know what that means), then it seems perfectly intuitive for Tom to reasonably come to that conclusion. After all, Susan’s behavior is the kind of thing Tom should expect if Susan is making simple errors when reading the books (and not finding the subtle tweaks) or had a horrible fight with her partner that morning, and for all Tom knows, those are the things that happened.

    In short, could you perhaps explain a little more how exactly you see this counterexample working?

    Second, I’m perplexed by the example in 6.2. Applying the version of CORNEA Steve and I defend to your counterexample (pp. 15-6), we get:

    It is reasonable for the doctor to believe that (the doctor’s visual experience of not seeing a virus & the dice coming up three) is levering evidence for (the needle does not have a virus) only if it is reasonable for the doctor to believe that the probability of (the doctor’s visual experience of not seeing a virus & the dice coming up three) is less than .5 given (the needle has a virus).

    Now it would be reasonable for the doctor to believe that: given some modest assumptions, by my calculation, that conditional probability would be below .16. But this is no counterexample to CORNEA. For CORNEA merely states a necessary condition, not a sufficient condition, for its being reasonable to believe that something is levering evidence.
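
    Spelling out the calculation I have in mind (the .95 figure for the doctor’s visual experience is just an illustrative assumption): if the die toss is independent of the needle’s condition, then P(no-virus experience & three | virus) = P(no-virus experience | virus) × P(three) ≈ .95 × 1/6 ≈ .158, comfortably below .5.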

    You all are aware of this. You don’t claim that it is a counterexample. Rather, your criticism is that CORNEA’s constraint is too easy to meet due to evidence being fine-grained (p. 16). But this is what I find perplexing. For this example has nothing to do with fine-grained evidence. For instance, worries about fine-grained evidence would presumably concern issues about how fine-grained to make the doctor’s experience as evidence. (Do we need this particular experience (or propositions about it)? If not the particular one, but experiences with the same features, what are those features? Etc.) Rather, in this example, we have (purported) evidence (a proposition about a doctor’s experience) that intuitively isn’t evidence at all for the hypothesis under discussion. In your example, you then conjoin to that proposition a proposition that is low-probability and independent of both the (purported) evidence and the hypothesis under discussion.

    Now I agree that CORNEA is not meant to handle cases where we conjoin to (purported) evidence a proposition that is low-probability and independent of both the evidence and hypothesis under question. But this strikes me as unproblematic. After all, who would try to defend the claim that something is levering evidence for a hypothesis by doing that? (We don’t need an epistemological principle to tell us that anyone who breaks out a Parcheesi board when claiming something is evidence is making a mistake.) Further, the main application of this principle–to Rowe’s evidential argument for evil–does not seem hampered by the fact that we can craft weird situations where CORNEA is satisfied. So I’m perplexed as to why you think this example is problematic for the principle.

    Best,
    Tim

    November 18, 2016 — 10:41
  • Matt Benton

    Thanks, Tim (and anyone else below who comments). I’m teaching most of today (and in Pacific time zone), but will try to get to these once I’m done…

    November 18, 2016 — 11:51
  • Matt Benton

    Hi Tim:

    Okay, regarding your first question concerning the counterexample in section 6.1, CORNEA Holding Experiences Fixed… There we give a case to show that when holding experiences fixed, CORNEA is too strong: it blocks inferences it should not. (Tom in our Tom and Susan case.)

    In our telling of the case, we already provided all the details relevant to being able to see that CORNEA, in that case as stated, is too strong. In your post, you’ve changed the case to give Tom some further information. In your new case, one might have the judgment that Tom shouldn’t infer from his experiences to “probably, there is no non-embezzling reason behind Susan’s trashing the office”; and it might be that CORNEA somehow validates that judgment. But that there is another, different case like yours, which CORNEA handles well, does not in itself somehow undermine the case we offered, which shows CORNEA to be too strong.

    Your second question is about the next section, 6.2 CORNEA Letting Experiences Vary. There we give a case to show that when letting them vary, CORNEA is too weak: it is easily satisfied and so lets through inferences that should presumably be blocked. This is why we characterize the problem as being that, when we let experiences vary, CORNEA “can’t do the work it was supposed to do.” You’re right that we’re not thinking this shows it to be false, since it isn’t a principle giving a sufficient condition. But nevertheless, it shows, we think, that when applying CORNEA as intended, when letting experiences vary, it doesn’t do the job it’s advertised as doing. [Note that we don’t discuss levering evidence applied to CORNEA; we save our concerns with levering notions for section 9. Note also that our use of the case doesn’t depend on our side claim about how the point may generalize, given that our evidence is typically fine-grained enough to pass muster with CORNEA.]

    CORNEA is a principle about what makes for entitlement to claim “it appears that p”. But we think that there’s no need to get into the epistemology of such appearances in the first place. Bayesian epistemology gives an exceedingly nice way to update on evidence without a detour into the ideology of epistemic appearances. That’s why at the end of 6.2, we agree with your judgment of the needle case; we just think there’s a much easier route to it.
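
    To illustrate (a minimal sketch of the Bayesian route; the prior and the .95 likelihood are my illustrative assumptions, not figures from the paper):

        # Needle case: update on E = (doctor sees no virus & die lands three).
        prior_clean = 0.9                 # assumed prior that the needle is clean
        p_E_given_clean = 0.95 * (1 / 6)  # no-virus experience, plus an independent die toss
        p_E_given_virus = 0.95 * (1 / 6)  # viruses being invisible, the experience is equally likely

        p_E = p_E_given_clean * prior_clean + p_E_given_virus * (1 - prior_clean)
        posterior_clean = p_E_given_clean * prior_clean / p_E

        print(posterior_clean)  # ~0.9 -- equal likelihoods leave the prior untouched,
                                # with no detour through 'epistemic appearances'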

    November 18, 2016 — 17:50
    • Tim Perrine

      Hi Matt,

      If I may follow up briefly, then:

      Regarding 6.1, I guess I just don’t understand what role “keeping experiences fixed” is meant to play in this section. As I see it, Tom should be asking himself, “Is the probability that Susan trashes the office and yells those things low, given that she has not discovered my embezzling?” As you intend the case as a normal one, where Susan’s past isn’t like what I described, it seems perfectly reasonable for him to say, ‘Yes.’ I’m struggling to see how “keeping experiences fixed” or facts about “experientially matching worlds” are meant to make me think he should answer ‘No.’ Maybe I just don’t understand this reading of CORNEA.

      Regarding 6.2, I am contesting your claim that it “can’t do the work as supposed to” or “doesn’t do the job it is advertised as doing.” The work CORNEA was originally meant to do, or supposed to do, is to block certain particular inferences—inferences like the one the doctor makes in the original case or the inference Rowe made. You give a case where CORNEA fails to block an inference, where (as I see it) that is due to some formal features. I think those formal features are rarely present. But, more important, since those formal features are not present in either the original doctor case or the inference Rowe makes, I don’t see how it shows that it can’t do the work as supposed to.

      To use an analogy, suppose I want to block the inference ‘Abe’s belief that p is false. Therefore, Abe knows that p.’ So I craft a general principle: S knows that p only if p is true. You then give me the following case: ‘Beatrice’s belief that q is true but the result of an unreliable cognitive faculty. Therefore, Beatrice knows that q.’ You then point out that my principle doesn’t block this inference, but we do want to block that inference. That is right. But it doesn’t show that my principle is false or couldn’t block the inference it was meant to initially block. At best, it just shows that we may need more than one principle to block the inferences we regard as problematic. What features of your example do you think make it different from this one involving Abe and Beatrice?

      Best,
      Tim

      November 19, 2016 — 9:47
      • Angra Mainyu

        Hi Tim,

        Regarding CORNEA while keeping experiences fixed, I agree with Matt’s position and with his reply to your more specific scenario.
        However, if you prefer a more specific scenario that works as a counterexample, we can add the following conditions.
        Instead of supposing that “part of Tom’s past experience is overhearing people say that Susan is known to make similar arithmetical errors or that, when she is very angry in her personal life, that spills over to her professional life where she makes wild accusations, or she is paranoid about business partners embezzling money,” let us suppose the following (in addition to the description in the paper, except for the “not much about them is of note” part): Part of Tom’s past experience is overhearing people say that Susan is known to be excellent at math, so good that no one recalls her making an error. Additionally, she is said to be very calm in her personal life. In her professional life, in his experience (and that of other people Tom is aware of), Susan is always level-headed; in fact, she’s a voice of reason when unwarranted rumors against a person fly; she carefully, methodically and calmly debunks wild accusations, and she is known for not jumping to conclusions even when she’s in a tough situation. On the other hand, she’s known to get really angry when someone deliberately does something unjust that negatively affects her.

        Now, even under *those* assumptions, both probabilistic and counterfactual CORNEA (holding experiences fixed) would block Tom’s inference that Susan probably has no non-embezzling reason for trashing the office. You can add further details if you like, but CORNEA (in all of those variants) would keep blocking the inference, at least provided that there is nothing in the description of Tom’s experience that *entails* that Susan had a non-embezzling reason.

        At any rate, examples aside, the paper shows that CORNEA holding experiences fixed is a bad principle both by way of Tom and Susan’s example and independently of it (in the paragraph beginning with “But it gets worse…”).

        November 20, 2016 — 7:58
  • I need to argue on this one against probabilities being able to lead us to the truth, or hint at God. We will either believe or not believe.

    There is the idea that Plato wrote into his dialogue character Socrates, of the Desert of Forgetfulness and the Spring of Unmindfulness. It’s 101, I know. Before taking the journey to its new incarnation, the soul chose that incarnation. It becomes a side note here, not my point, that philosophers burn with the questions about previous life, yes, even after taking the red pill of the Matrix in order to live this life.

    Let me further simplify, with evidence. In 2003, I flatlined from a heart attack. The people in the ER brought me back. I cannot remember, only with a sense, the time between when I died and when I heard a doctor say to me, while I still had my eyes closed, looking at the red of the inside of my lids, “Don’t be afraid, Mr. Bowden.”

    I am happy to be back, as are many who have died; many who have attempted suicide, yet not to the point of death or even near death, are happy to be back as well. Please don’t kill us.

    It is a choice of life we would make, evils and all, even if we were in Paradise or Eden. It is something God, if you believe in God, has created such that we would experience, not what he would want or need to experience, but what we would want or need, whether for spiritual growth or simply for the experience of it.

    One reason I would have chosen to come back to my life in 2003 was the thought of my daughter. After I came to, and all the people in the room were scurrying around turning knobs and checking graphs of my condition, I thought to tell them not to worry so much, that if it were time to die, then this would be it. It’s a common experience, I understand now, a sort of way we are prepared for impending death, to calmly see it as inevitable. By observing the hospital Stat-call staff, I realized that there was something in life that drove these people to care about me, such that they were all in concert, focused with high intensity, to keep me alive. Why?

    How did they get to care so much? And I thought of my daughter. I had not left her a note saying that I was going to the hospital, and now she would be home, not knowing what had happened to me. I needed to get a message to her.

    Here is my moral. If you want to live, then you have no argument against God creating this space for us, this experiential video game, or suspense movie, or horror movie, as it were. We each choose this, and philosophers contemplate it.

    Let me explain further, because I sense learned arguments with logic symbols resulting. Before, but especially since, I have experienced terrible family emotions with causes that could be called evil, and am physically disabled, with the experience of some either ignorant or evil doctors. (Many very good and brilliant ones as well, I should add.)

    I still wish to live. Nothing in my life is an argument against God.

    Let’s look at the other side, however, which is equally incorrect with assumptions of probability. Let’s return to before we ever “chose” life on Earth for our souls, in this time we live. There we were, in some sort of Paradise, say, talking to a Soul Travel Agent, who was telling us that there is this place that many souls find to be an “unforgettable” experience, it even has evil in it, and is called “Earth in a Milky Way”. We’d have to go through the Forgetting Desert and Drink from the Unmindful Spring to get there, but it would be worth it.

    There we would be, in Paradise ~~ wow ~~ and many of us would be amazed that we would have such a choice, that there could ever exist a non-Paradise. The most religious one in our travel group says, “This is proof that there is a God” (as most in Paradise do not believe in such things). The Travel Agent then says, “On Earth, most think that this is evidence that there is no God, and believe a Paradise is evidence that there is. You ready to go?”

    November 18, 2016 — 22:19
  • Angra Mainyu

    Hi Matt (et al.)

    Very interesting paper, thanks for posting that.

    I tend to agree that skeptical theism is not right, though I’m a lot more sympathetic to the argument from evil than you are – unsurprisingly, given that I’m not a theist and deem it successful.
    But that aside, I have a couple of questions/worries/objections about some of your points:

    1. You say the probability of tautologies is 1, and that it won’t budge.
    I don’t think that’s true, for the following reasons:

    Let n1 be the number 10^(10·(10^9)!!!!!!!!!!!!!!!!)+41, and let P(n1) be the proposition that n1+61 is prime.
    I don’t know whether P(n1) is true, but isn’t it the case that either P(n1) is a tautology, or ¬P(n1) is?
    Maybe you’re using “tautology” in a way in which that is not so. I’ll address that possibility below, but assuming one of those is a tautology, I would say that I don’t assign 1 to either P(n1) or to ¬P(n1), and I don’t think I ought to, given my knowledge and cognitive limitations.
    Moreover, if a team of mathematicians announced that n1+61 is not a prime, I would significantly lower the probability of P(n1) (and I think I should), but without assigning it zero, even if I would assign a probability very close to zero (what if they got it wrong?).
    If neither P(n1) nor ¬P(n1) is a tautology, I would go with the following statement:

    P(n1)’: There is a proof of P(n1) from the Peano axioms.

    The argument remains the same if either P(n1)’ or ¬P(n1)’ is a tautology.
    If neither of them is a tautology, let F1 be a very, very long first-order formula, and then let Q1 be some statement obtained from F1 by substituting ordinary-language true sentences for each of the variables (let’s say that we have written it down, with a computer). Q1 might be a tautology or it might not be, but before we assess that, we ought not to give it probability 1, I think. If it turns out that we can’t in practice assess whether it’s a tautology because we have insufficient cognitive/computing power for that, then I think we ought not to give it a 1.
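
    To make the worry vivid, here is a brute-force sketch (mine, not anything from your paper) of what assessing tautology-hood involves even in the simplest, propositional case:

        from itertools import product

        def is_tautology(formula, variables):
            # Check the formula on every assignment of truth-values.
            # The number of assignments grows as 2**len(variables).
            return all(formula(dict(zip(variables, vals)))
                       for vals in product([True, False], repeat=len(variables)))

        # Trivial for one variable...
        print(is_tautology(lambda v: v["p"] or not v["p"], ["p"]))  # True

    For a formula in, say, 100 variables there are up to 2^100 assignments to survey, so a cognitively limited agent may simply be unable to settle whether Q1 is a tautology before having to assign it a probability.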

    What about cases in which we know that something is a tautology?
    Even then, knowing something does not require assigning it probability 1. Moreover, given our cognitive limitations, perhaps in nearly all cases at least, 1 is too high. Perhaps we ought to take into consideration the very improbable case that we got it wrong, and it’s not actually a tautology. Moreover, it seems to me that it’s generally (if not always) possible for there to be evidence against a tautology, in the form of experts on the matter who tell us that we got it wrong, or linguistic evidence of how people use the words supporting the hypothesis that we got the meaning of a word wrong (even if we did not), etc.

    On a related note, you say that “any evidence that entails that there is no God (including the proposition that there is no God) is levering evidence against the existence of God no matter what”, but wouldn’t that require that we assign a high probability to the hypothesis that the evidence entails that there is no God (even if true and tautological)?

    A potential way around that is to say that tautologies have probability 1 even if we don’t know that. But then, I worry that whatever principle underlies such a hypothesis (which I think is false, in the relevant sense of probability, but that aside) might be extended to propositions about what an omnimax person (i.e., omnipotent, omniscient, morally perfect) would or would not allow.

    2. Regarding Paradise World, I agree (leaving aside some potential difficulties assessing the scenario) that it would be evidence for the existence of God (so this is not a challenge to your main arguments), but I do not see why it would be overwhelming, at least under a definition of “God” that includes omniscience, omnipotence and moral perfection – which seems to me to be the concept you have in mind, given your point on page 32 (or do you have a different concept of God in mind in the first part of the paper? Please let me know if that’s the case).
    In my assessment, a more probable hypothesis in a world like that is that there is an omniscient, morally perfect creator who is very powerful but not omnipotent. An even more probable hypothesis is that there is a morally perfect creator who is very powerful and knowledgeable, but neither omnipotent nor omniscient. Granted, I’m an atheist under that conception of God, so it’s unsurprising that we make considerably different assessments, but I’d like to ask whether there is a specific feature of the Paradise world on the basis of which you reckon the evidence for God to be overwhelming, or whether it’s an intuitive assessment you make without having any such specific features in mind.

    3. On page 8, you say “In such a case the existence of evils for which one knows of no good reason would not provide any evidence to think that the actual world is in category [] rather than in category [].”
    Did you mean “is in category [4], rather than in category [2]”? (You’re right either way, but it sounds odd to me).

    4. You mention possible worlds with and without God, but there are philosophers (both theists and not theists) who hold that God is either necessary or impossible. Do you disagree, or have a different kind of possibility in mind? (if the latter, I’d like to ask what kind).

    5. On page 16, you mention “most of the closest worlds”. Have you ruled out that there might be infinitely many of the closest worlds, and the cardinality of those in which the needle isn’t clean and the die lands on 3 is the same as the cardinality of those where the needle isn’t clean and the die doesn’t land on 3? (I don’t know whether the actual world contains infinitely many particles, but if it does, it seems to me there are infinitely many of the closest worlds).

    November 18, 2016 — 23:49
    • Angra Mainyu

      Regarding point 5., I’d like to make a correction: Instead of “I don’t know whether the actual world contains infinitely many particles, but if it does, it seems to me there are infinitely many of the closest worlds”, I should have said more tentatively “I don’t know whether the actual world contains infinitely many particles, but if it does, it seems to me there may well be infinitely many of the closest worlds”.
      Even if the actual world contains infinitely many particles, it might still be that there are only finitely many of the closest worlds with the relevant property. But it’s not clear to me that there are finitely many, if the actual world contains infinitely many particles.

      November 19, 2016 — 0:20
    • Yoaav Isaacs

      Hi Angra,

      Let me try to address some of the technical issues you’ve raised. I definitely want to stand by our claim that tautologies get probability 1 no matter what, but I think there’s a bit more flexibility left open than you might expect.

      I think a little bit of historical perspective is helpful. Probabilities were rigorously formalized by Kolmogorov in a set-theoretic framework. A probability space is a triple ⟨Ω, F, P⟩, where Ω is the set of all possible outcomes, F is a field of subsets of Ω, and P is a function assigning numbers to the elements of F in the familiar way. One crucial aspect of this familiar way is that the probability of Ω is 1. If you don’t want to give Ω probability 1 –– if there’s some possibility not in Ω to which you’d like to give some credence –– that just shows that you didn’t make Ω as big as it should have been; Ω should contain every possibility you want to entertain. Now there is a potential issue here; sets can get pretty big, but there are still constraints on them. There’s a good case to be made that the possibilities worth taking seriously are greater than set-sized. But even in that case, the lesson would just be that a probabilistic epistemology cannot give all possibilities individual attention at once. There’s still nothing wrong with setting up an Ω that covers all the bases but which has a modest number of elements.

      So what goes into Ω? Happily, there are no constraints beyond those of set theory. For example, the elements of Ω certainly don’t have to be anything like metaphysically possible. And that’s a good thing, too, as I certainly want to have intermediate credence about all sorts of identity-claims (like about who Jack the Ripper was) which are either metaphysically necessary or metaphysically impossible. Similarly, the elements of Ω don’t have to be a priori possible. If I want to assign intermediate credence to 1 + 1 = 3, then all I need is an element of Ω in which 1 + 1 = 3. The only real constraints are those that come out of probabilistic coherence about my numerical assignments. If I have credence .3 that 1 + 1 = 3, then I just have to have credence .7 in the negation of 1 + 1 = 3. Any theorem of classical logic will correspond to Ω and thus get probability 1, but that’s the only constraint on assignments of probability 1.
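
      A toy rendering of the point, with an Ω and numbers of my own devising (the suspects are just for illustration):

          # A small Omega whose elements needn't be metaphysically (or a priori) possible.
          omega = {
              "1+1=2 & Ripper was Druitt":    0.25,
              "1+1=2 & Ripper was Kosminski": 0.45,
              "1+1=3 & Ripper was Druitt":    0.10,
              "1+1=3 & Ripper was Kosminski": 0.20,
          }
          assert abs(sum(omega.values()) - 1) < 1e-9  # P(Omega) = 1, as required

          def prob(sentence):
              # Probability of a sentence: total weight of the outcomes where it holds.
              return sum(p for outcome, p in omega.items() if sentence in outcome)

          print(prob("1+1=3"))                  # ~0.3 -- intermediate credence in an a priori falsehood
          print(prob("1+1=2") + prob("1+1=3"))  # ~1.0 -- probabilistic coherence is the only constraint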

      As it turns out, there are some deep connections between sets and sentences. Stone’s Theorem states that every Boolean algebra is isomorphic to a field of sets. And if we identify logically equivalent sentences of a propositional language we get a Boolean algebra (called a Lindenbaum algebra). Therefore (leaving aside a couple of technical details that aren’t relevant here) just as we can think about probabilities in a set-theoretic way we can think about probabilities in a sentential way. The sentential way of doing probabilities doesn’t impose new constraints; any flexibility we had before we still have. We can still assign intermediate probability to sentences about identity claims and we can still assign intermediate probability to sentences about mathematical claims. Only theorems of classical logic are really locked-in at probability 1. And that’s a constraint that I’m happy to go along with. It seems very hard to make probabilities hang together without that constraint. If fallible agents sometimes wind up having degrees of belief that don’t assign 1 to tautologies, then those fallible agents have sadly left the well-behaved domain of probabilistic epistemology.

      Changing topics, please read “most of the closest worlds” as meaning “a majority of the measure over the closest worlds”. Even if there were only finitely many closest worlds, we certainly wouldn’t endorse a claim that those worlds have to be equiprobable.

      Best,
      Yoaav

      November 24, 2016 — 1:30
      • Angra Mainyu

        Hi Yoaav,

        Thanks for your detailed reply.
        I will address your points in greater detail later, but I’d like to make a quick comment/ask a question about the interpretations of epistemic probability you mention in the paper, on page 18.
        It seems to me that the subjectivist interpretations you mention are in conflict with the assessment that theorems of classical logic are really locked-in at probability 1, since an agent’s degree of belief may not assign such a probability. Am I correct in concluding that you rule out subjectivist interpretations? (that’s not an objection; I rule them out too, and I actually thought you already ruled them out by doing Bayesian reasoning).

        Regarding the objectivist interpretations you mention, if we combine them with the assessment that theorems of classical logic are really locked-in at probability 1, it follows that any rational agent ought to assign probability 1 to the theorems. A consequence seems to me to be as follows: let’s say Alice fails to assign probability 1 to the statement that ZF&WO->AC, because she’s unsure whether that follows (let’s say this happens before any human being proved it). Then either she’s failing to do what she ought to, or she’s not a rational agent.

        Am I getting this right?

        Best,
        Angra

        November 24, 2016 — 12:21
        • Yoaav Isaacs

          Hi Angra,

          Regarding the status of subjective probabilities, I do think that tautologies are guaranteed to have probability 1 even in a subjectivist framework. This is not to say that I think that it’s impossible for a person to have a degree of belief other than 1 in a tautology. (I take no stand on whether it’s possible for a person to have a degree of belief other than 1 in a tautology. Whether or not such a thing is possible is liable to depend on very finicky issues about what it is for a person to have a degree of belief.) Instead, it’s simply that if a person ever had a degree of belief other than 1 in a tautology that person’s degrees of belief would no longer constitute a probability function. Our probabilistic analysis treats probabilistic agents, and thus takes conformity to the axioms of probability theory for granted.

          Regarding your concerns about complex mathematical uncertainties, I think a lot depends on how the issue is formalized. Mathematics isn’t the only area in which there’s such a dependency, so let me talk about a simpler, non-mathematical case. Take the sentence “If Tom is married then Tom is not unmarried.” Is this sentence a tautology or not? Well, that depends. Probabilities only properly apply to nicely formalized propositional languages, and English is definitely not well-behaved enough for that. The question of whether the English sentence is a tautology or not is therefore best understood as the question of whether the appropriately formalized translation of the English sentence is a tautology. But there’s a problem: there isn’t really such a thing as *the* appropriately formalized translation of the English sentence. There are different possible formal translations. For example, it’s possible to have a language in which “married” and “unmarried” are logically related, predicates “m” and “not-m”, say. The sentence would then be a tautology. (And it would be hard to avoid the conclusion that my shoes are unmarried, since one would certainly not want to say that my shoes are married.) But it’s also possible to have a language in which “married” and “unmarried” are not logically related (and this can be true even if it’s a priori that nothing can be both married and unmarried). If “married” and “unmarried” are logically unrelated predicates “m” and “u”, then the sentence is not a tautology. (And then it is entirely possible that my shoes are neither married nor unmarried. One would have a bit more flexibility regarding the extension of “unmarried”.)

          As far as your mathematical cases go, everything would just depend on how a statement is formalized. Any mathematical claim that is expressed as a theorem of classical logic would be guaranteed probability 1; any claim at all that is expressed as a theorem of classical logic would be guaranteed probability 1. A long and complicated sentence which is, in fact, a theorem of classical logic is guaranteed probability 1 just the same. It might be more natural to think about the issue set-theoretically than sententially; a long and complicated sentence which is a tautology just maps to Ω. Concomitantly, any mathematical claim that is not expressed as a theorem of classical logic is not guaranteed probability 1. And given the limits to the logicization of mathematics, there will be plenty of mathematical truth that does not amount to a classical tautology.
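
          In symbols (notation mine, just to fix ideas): on the first translation the sentence comes out as m(t) → ¬¬m(t), which is a theorem of classical logic; on the second it comes out as m(t) → ¬u(t), which is no theorem at all, even if it is a priori that nothing satisfies both m and u.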

          Best,
          Yoaav

          November 26, 2016 — 12:54
          • Angra Mainyu

            Hi Yoaav,

            Thanks for your reply.

            Regarding formalization, most of the math examples I constructed are such that the relevant statements are theorems of classical logic (under standard formalizations). So, while plenty of mathematical truths (infinitely many) would not be guaranteed probability 1 given your claim about tautologies, also plenty – infinitely many – would, including most of my examples – and those, I reckon, shouldn’t get 1 in the given contexts.

            Here’s a brief example that doesn’t use math, but only first-order formulas.

            Alice is playing the following game: a computer flips a coin (e.g., using atmospheric noise as a source, like the coin flipper at random.org).
            If it lands tails, it shows on screen a large formula which is a theorem of classical logic. If it lands heads, it shows a formula that is not a theorem of classical logic, but not the negation of a theorem, either.
            Alice is offered a bet of $1 to win $1000000 that the formula is not a theorem of classical logic, and is given enough time to read the formula on screen. But she does not have nearly enough time to ascertain whether it’s a theorem (she would need years, if she could figure it out at all).

            Should she take the bet?
            I reckon she ought to take the bet, because she ought to assign 1/2 to the event that the formula is a theorem of classical logic (maybe a bit more if she has time to test at least one assignment of truth-values and it comes out true, but surely not enough to make the bet a bad one.)

            However, if she ought to assign 1 if the formula is a theorem of classical logic, then she ought not to take the bet in such cases.
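
            The expected-value arithmetic behind my verdict, spelled out: with credence 1/2 that the formula is a theorem, EV(bet) = (1/2)($1000000) − (1/2)($1) ≈ $500000, so the bet is excellent. But if she were required to assign probability 1 whenever the formula is in fact a theorem, then in those cases EV(bet) = −$1, and she ought to decline.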

            With regard to the subjectivist interpretation, you say in the paper that “If the probabilities at stake are subjective, then there’s nothing substantial to be ignorant about”, and “An agent might be uncertain about his own levels of confidence, but in that case some quiet reflection might help the agent understand himself better.”
            But given the constraints that your probabilistic analysis involves (i.e., the agents have to be Bayesian agents and they have to assign 1 to tautologies), it seems to me there is a lot to be ignorant about: an agent may well have no clue as to whether something is a theorem of classical logic, and might not be able to figure it out in years – if at all; in fact, any human being is in such a position with regard to some theorems of classical logic, if the formula were presented to them; and in practice, people generally don’t realize immediately (if ever) that something follows (in classical logic) from something else.

            November 27, 2016 — 6:24
      • Angra Mainyu

        Hi Yoaav,

        Given that by “tautology” you mean only the theorems of classical logic, and leaving aside mathematical statements, I grant that my prime number example does not work as an objection, but I think the Q1 example does. A somewhat more detailed argument would be as follows:

        Let’s consider the Great Internet Mersenne Prime Search.

        Let S be the conjunction of the axioms in first-order formal number theory (any standard version of first-order formal number theory will do), and SM a first-order sentence that (under a usual interpretation) says that there is a Mersenne prime p such that M57885161 < p, with p below some suitably large fixed bound. Then either S->SM or S->¬SM is a theorem of classical logic, but I do not know which, and I do not assign either of them probability 1.

        Moreover, given my cognitive limitations, I reckon it would be epistemically irrational on my part to assign probability 1 to either statement. Yet, that would not warrant a conclusion on my part that neither of them is a theorem of classical logic.

        The following argument might provide further intuitive support:

        Let’s say that there is a TV contest (or something like that), in which they toss a coin, and they offer me the following bets (I wish!):

        a. If the coin lands heads, they offer me a bet of $0.1 vs. $1000000 against S->SM (i.e., if it’s false, I win $1000000; if it’s true, I lose $0.1).
        b. If the coin lands tails, they offer me a bet of $0.1 vs. $1000000 against S->¬SM.

        It seems clear to me that it would be rational on my part to take the bet they offer, regardless of whether the coin lands heads or tails. That supports the conclusion that it’s rational of me to assign probability greater than zero to ¬(S->SM), and also to ¬(S->¬SM). But that still does not warrant the conclusion that neither S->SM nor S->¬SM is a theorem of classical logic.

        Another argument is based on the flexibility you suggest for mathematical statements, like 1+1=3. Presumably, that flexibility includes assigning probabilities strictly between 0 and 1 to a statement asserting that a certain number is prime. For example, we may consider a list of probable primes that have not yet been established as primes or composite numbers. Let n2 be a number on one such list (we can pick an actual, specific number if we want to, e.g., (2^13372531+1)/3).

        It seems to me it’s rational to assign probability strictly between 0 and 1 to the assertion that n2 is a prime (do you agree? It seems consistent with your point about 1+1=3).
        Let S(n2) be a first-order sentence that (under a usual interpretation) says that n2 is a prime. Isn’t it also rational to assign probability strictly between 0 and 1 to S->S(n2), and to S->¬S(n2)?
        It seems to me that it is (if not, I’d ask why not?), though that does not give us any good reasons to think neither S->S(n2) nor S->¬S(n2) is a theorem of classical logic.

        If the use of first-order logic is a difficulty for this line of objections – though I do not see why that would be so – I would suggest similar objections based on second-order formal number theory instead.

        P.S.: thanks for the clarification on “most” of the closest worlds; I should have realized that.

        Best,
        Angra

        November 24, 2016 — 19:41
      • Angra Mainyu

        Hi Yoaav,

        I’ve been thinking about the flexibility argument you give, and after further reflection, in my assessment the claim you make about tautologies – and a closely related one; more below – greatly limit that flexibility.
        In particular, in the paper you say that the probability of theism on any set of propositions that entail theism is 1. Here, the idea seems to be that if A->B is a tautology, then the probability of B on A is 1 (at least, assuming neither has probability zero). But this entails that if A entails B, then the probability of A is no greater than the probability of B.
        Another way to establish this is from your claim about tautologies and Bayesian reasoning: let’s say that A->B is a theorem of classical logic. Then ¬(¬B&A) is also a theorem of classical logic, so P(A&¬B)=0.

        We get:

        P(A)=P(B&A)+P(¬B&A)=P(B&A)≤P(B&A)+P(B&¬A)=P(B).

        So, whenever A->B is a tautology, P(A)≤P(B).

        That suggests a very significant restriction on the probabilities one can properly assign – and, I think, a serious restriction on how cognitively limited agents can even use Bayes’ theorem, and probabilities.
        For example, it follows from the axioms of formal number theory (either first-order or second-order) that it’s not the case that 1+1=3 (i.e., ¬(1+1=3) follows).
        Hence, if your claims above (and the reasoning behind them) hold, one may not assign to 1+1=3 a probability greater than the probability that at least one of the axioms of formal number theory is false.
        It seems to me the probability that at least one of the axioms is false is no greater than 0.001 (I’d say it’s far smaller than that, but I’m going with a high upper bound to be conservative). That’s the probability we ought to assign, or any reasonable probability assignment, etc.; in other words, if you like, the conclusion is that we ought not to assign more than 0.001 to the hypothesis that at least one of the axioms is false (if you disagree, please let me know).
        Based on that, we conclude that the probability that 1+1=3 is no greater than 0.001.
        Granted, that’s fine when it comes to 1+1=3: it would be absurd to give 1+1=3 a probability greater than 0.001.
        Yet, when it comes to statements that are not at all easy to check, such as whether a number is a prime, this leads to (in my opinion decisive) difficulties, for the following reason: it seems that if a number n is prime, it (very, very probably!) follows from the axioms that it is prime, whereas if it is not, then it follows from the axioms that it’s not.
        Hence, it follows that for any given number n that we consider (where n is supposed to be actually identified), the probability that n is prime is either no greater than 0.001, or no less than 0.999. But that’s not only counterintuitive; it’s also a problem for probabilistic primality-checking algorithms that yield probabilities different from those, at least as long as the probabilistic assignments given by those algorithms are taken to be good grounds for epistemic probabilities (which they usually are, and I think with good reasons).
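
        For concreteness, the kind of probabilistic primality checking I have in mind is Miller–Rabin; here is a standard sketch (mine, not anything from the paper):

            import random

            def miller_rabin(n, k=20):
                # If n is composite, each round wrongly answers 'probably prime' with
                # probability at most 1/4, so k rounds leave an error bound of 4**(-k).
                if n < 2:
                    return False
                for p in (2, 3, 5, 7, 11, 13):
                    if n % p == 0:
                        return n == p
                d, r = n - 1, 0
                while d % 2 == 0:
                    d, r = d // 2, r + 1
                for _ in range(k):
                    a = random.randrange(2, n - 1)
                    x = pow(a, d, n)
                    if x in (1, n - 1):
                        continue
                    for _ in range(r - 1):
                        x = pow(x, 2, n)
                        if x == n - 1:
                            break
                    else:
                        return False  # a witness of compositeness: definitely not prime
                return True           # 'prime', with error probability at most 4**(-k)

            print(miller_rabin(2**61 - 1))  # True: 2^61 - 1 is a known Mersenne prime

        A test like this supports credences such as 1 − 4^(−20) in primality, rather than the 0.999-or-0.001 dichotomy derived above; that is the tension I am pointing to.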

        A similar difficulty arises when it comes to many other mathematical problems, such as computing the digits of pi.
        In order to compute them, we start with some (extremely probable!) statements, and from those, we (or rather, computers) do a lot of work and figure out what the n-th digit of pi is, for some fixed n. But then, by an argument such as the above, the probability that the n-th (with n fixed) digit of pi is j (for j between 0 and 9) would be either no greater than 0.001 or no less than 0.999.

        But that would have very weird (i.e., extremely improbable!, in my view) consequences as well. There is an example like this on LessWrong (in an article entitled “Probability is subjectively objective”).
        As in the example (modified due to progress in computing the decimal digits of pi, and the probabilities assessed above), it seems clear that it’s a very good bet to bet $1 to win $500 that the 200 trillionth decimal digit of pi is not 0.
        Yet, if the probability that the 200 trillionth decimal digit of pi is 0 were no less than 0.999, it would not be a good bet (at least, as long as one ought to assign the right probability). Hence, it’s not the case that the probability is no less than 0.999, so the probability is no greater than 0.001.
        We can make the same argument taking 1 instead of 0, then 2, then 3, then 4, etc., and the conclusion is that the probability of each of them is no greater than 0.001. So, the probability that the 200 trillionth decimal digit of pi is an integer between 0 and 9 is no greater than 0.001+0.001+…+0.001=0.001*10=0.01, which is absurd.

        It follows that there is some integer j between 0 and 9 such that it’s not a good bet to bet $1 to win $500 that the 200 trillionth decimal digit of pi is j. But that is strongly counterintuitive.

        Alternatively, one might conclude that the probability that we ought to assign to some hypothesis H is not the probability of H (given some information, etc.), but then, we’re no longer talking about *epistemic* probabilities when we talk about probabilities, it seems to me, but about some other sort of probability. But that seems incompatible with the arguments in the paper.

        Best,
        Angra

        November 26, 2016 — 11:41
  • Hi Tim:

    Re: 6.1, yes, what you say is basically what we say toward the end of 6.1, namely: “It’s *hugely* less likely that Susan would shout these particular words and trash the office conditional on her having a non-embezzling reason for her actions than conditional upon her having an embezzling reason for her actions”; and as such, Tom should judge that probably Susan has discovered his embezzling. Unfortunately, CORNEA blocks Tom’s inference to that judgment (applied holding experiences fixed, the same way many use CORNEA when assessing whether in a world like ours a God could have reasons for permitting evils).

    Re: your reply to our argument in 6.2, I guess I would say this (though maybe John or Yoaav would reply differently). Fans of CORNEA are putting forth a *general* principle, which because it is general applies beyond the cases of interest (Rowe cases, doctor/needle cases). If a general formulation allows for cases where it fails to do what is wanted in the cases of interest, then the principle is problematic. Your impulse is to come up with more principles or to refine the principle; ours, in this matter, is to abandon those principles in favor of a Bayesian analysis.

    “At best, it just shows that we may need more than one principle to block the inferences we regard as problematic. What features of your example do you think make it different from this [purportedly analogous] one involving Abe and Beatrice?” One difference is that we *agree* with the principle that knowledge entails truth, whereas we don’t agree with the general formulations of CORNEA. Another is that the inference in our 6.2 case is structurally similar (if not identical) to how the principle applies to the cases of interest: it’s just a specific instance falling under the general principle. But in your Abe and Beatrice case, the inference is not structurally similar; a case of true belief won’t be relevant to a principle ruling out false beliefs from being knowledge. So dialectically, posing this analogy seems, to me at least, inapt.

    November 19, 2016 — 14:21
    • Tim Perrine

      Hi Matt,

      I’ll make this my last comment as I don’t want to monopolize your time.

      I think your example is very similar to the Abe/Beatrice case. Here’s why. By definition, E is levering evidence for H only if P(H/E) ≥ P(H) + .5. By Bayes’ Theorem, P(H/E) ≥ P(H) + .5 only if P(E/H)/P(E) ≥ 2. Now P(E/H)/P(E) ≥ 2 only if certain conditions are met. One condition is that P(E/~H) ≤ .5. But there are other conditions as well. For instance, P(E/H)/P(E) ≥ 2 only if it is not the case that P(E/H) = P(E/~H). (I’ve omitted the calculations, but I’m sure you can figure them out.)

      Now in the case you’ve given, the probability of the doctor having the visual experience he does (D), given there is a virus on the needle (V), is very high, as is the probability given there is no virus on the needle (~V). To simplify, say that probability is .95 (i.e. P(D/V) = P(D/~V) = .95). Let the dice coming up three be ‘T’. You point out that P(T&D/~V) is below .5. Assuming that P(T) = .17, we can apply the chain rule to get that P(T&D/~V) is (roughly) .16, well below .5. However, by the same exact reasoning, P(T&D/V) is .16. Since P(T&D/~V) = P(T&D/V), P(T&D/~V)/P(T&D) could not be greater than 2. And so T&D couldn’t be levering evidence for ~V.
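
      Numerically, with my simplifying figures: P(D/V) = P(D/~V) = .95 and P(T) = .17 give P(T&D/V) = P(T&D/~V) ≈ .16, so the ratio P(T&D/~V)/P(T&D) is exactly 1, and T&D cannot lever anything toward ~V, however CORNEA’s necessary condition fares.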

      Returning to CORNEA, it says that it is reasonable for you to believe that E is levering evidence for H only if it is reasonable for you to believe that one necessary condition for E’s being levering evidence is met, namely, that the probability of E given ~H is .5 or below. The dice case is a case where one condition on being levering evidence is met (and it is reasonable to believe so) but another condition on being levering evidence is not met (and it is reasonable to believe it is not met).

      This is why your example seems to me very similar to the Abe/Beatrice case. CORNEA was (more or less) designed with one particular condition on something being levering evidence in mind—just as the initial principle concerning knowledge was designed with one particular condition on being knowledge in mind (truth). Your example is an example where that necessary condition is met, but some other necessary condition is not—just as the Beatrice example is a case where one initial condition is met (truth) but some other necessary condition is not. Thus, to complain that the initial principles “can’t do what they are supposed to do” strikes me as wrong. They were crafted with one particular necessary condition in mind. One might point out that there are examples of inferences they don’t block, where the necessary condition they are meant to track is met. But that only really shows that the necessary condition they are meant to track is not a sufficient condition, which everyone already agrees on (so far as I can tell).

      Of course, all this assumes the notion of levering evidence and ignores the usual “slippage” that occurs when we apply formal work to real life cases. One might raise serious worries on both fronts. But as I said I didn’t want to monopolize your time, I’ll just leave things there.

      Best,
      Tim

      November 20, 2016 — 9:30
      • Hi Tim,

        Sorry, I commented lower down the thread rather than replying properly to your comments above; whoops.

        I think at some level we’re talking past each other. Our criticisms of CORNEA weren’t aimed explicitly at a levering version of that principle. In reply, you keep reverting to a levering notion of CORNEA; but we take ourselves to have discredited levering notions in section 9.

        You asked me for features that make our case different from your Abe & Beatrice case; I gave you two features. You then point out similarities, and revert to levering CORNEA. But to me, those aren’t relevant because (i) I take the distinguishing features to be decisive, and (ii) again, we’ve criticized levering principles. What is more, we have independent motivation for multiple necessary conditions on whether one knows (and there is widespread agreement in epistemology on those matters). Yet we don’t have independent motivation for, or widespread agreement on, the use of ‘appearance’/CORNEA principles.

        You might be able to respond to our criticisms and so defend appeal to such principles. But we still insist that you can do what is wanted much more simply using just Bayesian probabilities.

        November 20, 2016 — 11:43
      • John Hawthorne

        Hi Tim,

        Let me just say a few things to orient you to how I (and, I suspect, my coauthors) are thinking.
        (A). The discussion of Cornea principles has often been couched in counterfactual lingo (as with sensitivity principles). So construed, the principles are likely to produce all sorts of distractions. If I am confident that I am conscious, it’s neither here nor there that were I not conscious, there would be no difference discernible by me at the counterfactual world. Suppose there are schmiruses, some visible, some invisible, and suppose I am confident that there is no schmirus on the needle (perhaps because schmiruses are quite rare). In figuring the relative likelihood of my experience given that there is a schmirus on the needle vs. given that there is no schmirus on the needle, I shouldn’t get distracted by thoughts like (i) “the most similar schmirus-on-the-needle-in-front-of-me worlds to the actual world are ones where there is one invisible schmirus, since that is a smaller change vis-à-vis actuality than adding large visible schmiruses” and (ii) “if there were schmiruses, it is overwhelmingly likely that some aspect of my experience would be different — after all, what are the chances things would be experientially exactly the same were some actually false P true?” Moreover, counterfactuals are notoriously context-sensitive according to “what one holds fixed” — witness different ways of evaluating “If Julius Caesar had been a modern-day general…” Figuring out what to think about schmiruses does not involve having to worry about what to hold fixed vis-à-vis certain counterfactuals. All these counterfactual issues are clutter. What matters to the evidential impact of certain experiential facts is the likelihood of the experience given schmiruses and the likelihood given no schmiruses (i.e., various conditional likelihoods). And the resulting likelihood is a matter of these in combination with the priors. (Of course we are both aware that one’s reasonable views about likelihoods may be mismatched with the likelihoods.)
        (B) I suspect you agree that the counterfactual reasoning is a distraction (I have some recollection of remarks to that effect). And no doubt there is a regimentation of the levering evidence principle, construed in terms of conditional probabilities, that makes it more or less tantamount to a theorem of the Bayesian probability calculus. But I think the language in which the principle is couched is not very helpful. The lingo of levering evidence seems to articulate one aspect of phenomena that can be understood in a much more general way in terms of Bayes factors. Why relabel phenomena that are already well understood within much more powerful and general models? Moreover, the detour through the language of ‘appearing’ is a distraction. I worry that philosophy of religion students risk picking up on the Cornea debate at the expense of learning probabilistic mechanics. Think back to the schmiruses case. You can learn everything you need to know here by getting the feel of the impact of various expected base rates in combination with various Bayes factors. The language of ‘epistemic appearing’ is just more clutter. If one wants to get students to think properly about schmiruses (or some worldly analogue), one needs to do such things as explain the base rate fallacy.
        In sum: in the paper we are trying to provide a framework that better helps people think clearly about the probabilistic impact of facts about evil. The Cornea literature is riddled with distractions that can be bypassed if one instead works within the framework that we recommend. When there is a principle that is false on some precisifications and reinvents the wheel on others, there is little point in proceeding by way of straightforward counterexample. Moreover it is no surprise that as smart philosophers defending Cornea add further precisifications and clarifications, they slowly move in the direction of “reinventing the wheel” construals. (Of course that’s a very tendentious way of putting it, but it at least orients you to my point of view! Sorry if the above is a bit quick and messy.)
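
        To give the flavor of the mechanics I have in mind, here is the schmirus case in numbers (all figures are mine, purely for illustration):

            p_schmirus = 0.01                  # assumed base rate: schmiruses are quite rare
            p_clean_look_given_schmirus = 0.5  # suppose half of all schmiruses are invisible
            p_clean_look_given_none = 0.98     # a schmirus-free needle almost always looks clean

            # Posterior that there is a schmirus, given a clean-looking needle (Bayes' theorem):
            p_clean_look = (p_clean_look_given_schmirus * p_schmirus
                            + p_clean_look_given_none * (1 - p_schmirus))
            posterior = p_clean_look_given_schmirus * p_schmirus / p_clean_look

            print(posterior)  # ~0.005: a Bayes factor of 0.5/0.98 roughly halves the 1% prior

        Neglecting that 1% base rate is the base rate fallacy; the counterfactual and ‘appearing’ idioms do nothing to guard against it.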

        November 20, 2016 — 17:00
  • Hi Matt,

    Really nice paper! I have a question about the first substantive point you try to establish.

    In section 3, you argue that evil is evidence (of whatever strength) against God: P(~G/E) > P(~G). You say: “if the absence of evil is evidence for God, then the presence of evil is evidence against the existence of God, and it is misleading for skeptical theists to claim otherwise.” I will here assume that you mean that it is misleading for skeptical theists to deny that evil is evidence against the existence of God, as opposed to the claim that it is misleading for them to deny the conditional you have stated. Skeptical theists are of course perfectly happy with the conditional. The problem is that this conditional is obviously not enough to establish what you want. Your argument in this section, in full, goes like this:

    (G: There is a God / E: There is evil)

    1. P(G/~E) > P(G) iff P(~G/E) > P(~G)
    2. P(G/~E) > P(G)
    C. P(~G/E) > P(~G)
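
    (Premise (1), I grant, is just a theorem of the probability calculus, at least assuming 0 < P(E) < 1. A quick sketch: since P(G) = P(G/E)·P(E) + P(G/~E)·P(~E), P(G) is a weighted average of P(G/E) and P(G/~E); so P(G/~E) > P(G) holds exactly when P(G/E) < P(G), and P(G/E) < P(G) is equivalent to 1 − P(~G/E) < 1 − P(~G), i.e. to P(~G/E) > P(~G).)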

    The crucial premise here is premise (2), not (1), and yet you don’t really defend it anywhere. I’m worried that this is more significant than you seem to think.

    You mention a host of skeptical theists willing to say that (3) P(E/G) = P(E/~G); presumably, you want this argument to work against them (I'm thinking here of Bergmann, Howard-Snyder, Otte, and Stone). But it seems clear to me that anyone who accepts (3) would also accept (4) P(~E/G) = P(~E/~G), and for exactly the same reasons they offered for (3). Yet (2) is false if (4) is true (I'll sketch the quick argument below, though I think you'll agree). So your argument for the claim that evil is evidence against God relies on a premise that those who disagree with you already reject. That's not great.
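
    Here is the quick argument, in sketch form (assuming P(~E) > 0): if (4) holds, then by total probability P(~E) = P(~E/G)·P(G) + P(~E/~G)·P(~G) = P(~E/G), since P(G) + P(~G) = 1. Bayes' theorem then gives P(G/~E) = P(~E/G)·P(G)/P(~E) = P(G), so (2) fails.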

    So perhaps you could share with us a bit more of your reasons for accepting (2). Against (3), I see that you say “Intuitively, the probability of there being evil given atheism is higher than the probability of there being evil given theism.” But clearly the skeptical theist disagrees. So is there anything you can offer against (4) other than a parallel intuition that also won’t be shared by the skeptical theist? (The parallel intuition here would be the claim that there is what you call “the problem of paradise.”) Otherwise, your argument for P(~G/E) > P(~G) in this section seems pretty weak. The argument could be sound for all I’ve said, of course, but it shouldn’t move anyone who is already willing to accept (3). (Worse: it may even beg the question, depending on how closely related we think (3) and (4) really are.)

    Once again, I really enjoyed reading this paper. Thanks for the already substantive Q&A!

    November 21, 2016 — 16:31
    • Matt Benton

      Hi Luis,

      Thanks for your comments and query.

      Our argument centers on an intuitive comparative probability judgment about the paradise case, namely that Pr(~Evil|God) > Pr(~Evil|~God), and we suspect that many skeptical theists will share that judgment. From that judgment, your (2) [i.e. Pr(G/~E) > Pr(G)] follows (as we note at the top of p. 4).

      It’s actually not clear to me how many skeptical theists are willing to hold, or argue for, what you label (3), namely, Pr(Evil/God) = Pr(Evil/~God). This is because some of them express radical uncertainty about the prior probabilities, as we discuss in section 7: they shrug their shoulders and insist that we don’t (or shouldn’t) know what to say about the probability of evil given theism (see pp. 17-18 of the linked version). And presumably the kind of radical uncertainty they’re gesturing at will also extend to identity claims like (3). Given this, it’s not clear to me that the kinds of reasons they give for resisting (C) are the kinds of reasons that support either (3) or (4).

      November 21, 2016 — 18:32
      • Hi Matt,

        Thanks for the reply. I am taking you as saying:

        (a) Support for (2) comes from it being “intuitive” that (4) is false.
        (b) Support for (2) also comes from (3) being unmotivated: (3) seems to depend on a claim about radical uncertainty that doesn’t really offer any support for it.

        Am I correct?

        Reasons such as (a) are precisely the kind that I was worried about. I don’t think they carry much dialectical water, other than highlighting fundamentally different starting points. (Nothing wrong with that, of course!) At any rate, I think (b) is more interesting. Here, I was thinking of the principle of indifference (PI) as the bridge from our limited epistemic position to (3) and (4). And I don’t quite see how your worries in section 7 raise any trouble for it. You say: “If one’s uncertainty about the prior probabilities for evil and theism led one to entertain all possible prior probability assignments about them or to entertain none at all, then it would be genuinely unclear what import evil had for theistic belief.” I think that’s true, and it might even be problematic for the skeptical theist in the ways you point out in that section. But it doesn’t follow from any of this that (3) or (4) is undermined: they assign no probability whatsoever to the existence of God; they depend on no particular probability at all being assigned. The argument from PI to (3) and (4) turns on the more modest claim that radical uncertainty makes a certain proposition as probable as its denial. I don’t see that you’ve offered any reason to resist that.
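
        Spelled out, on the most natural way of applying PI here: within each hypothesis the radically uncertain agent assigns P(E/G) = P(~E/G) = 0.5 and P(E/~G) = P(~E/~G) = 0.5. Both (3) and (4) fall out immediately, and no value of P(G) is ever invoked.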

        Of course, there are reasons to complain about PI. But that would make the quick argument you sketched in section 3 much more complicated than the one you present.

        All the best,

        November 21, 2016 — 19:10
        • Matt Benton

          Hi Luis,

          Whether (a) carries much dialectical water will depend on how many skeptical theists (and neutral parties) agree with our comparative probability judgment; if, as we suspect, many of them find themselves with the same starting point as us, it will carry a good deal of dialectical water. You are right, of course, that if skeptical theists disagree with that comparative judgment, then we are at an impasse.

          I was not claiming (b), but rather that the kind of radical uncertainty expressed by some skeptical theists (e.g., the kind that “entertains none at all”) seems to go well beyond your ‘more modest claim,’ and thus seems to sit uncomfortably with (3) and (4), and with the Principle of Indifference. Maybe I’m misreading them, but when e.g. PvI says that “we don’t know what to say about the probability of [the amounts, kinds, and distribution of suffering] on theism,” I take him to mean we cannot appeal to PI and call it 0.5.

          I’m sure Yoaav has a few better things to say on this; maybe he’ll chime in.

          November 21, 2016 — 19:44
          • Yoaav Isaacs

            Hi Luis,

            I’m not sure I have better things to say, or even that I have a few things to say, but here goes.

            I’m not convinced that one can offer proper arguments about probability assignments. You can highlight a certain aspect of a hypothesis in a way that might move someone to change their mind about it, but that doesn’t quite feel like having them in the iron grip of reason. We think that evil is evidence against the existence of God in a very mundane way. Evil seems pretty weird given the existence of God: I wouldn’t have expected horror and anguish in a world ruled by a loving deity. Evil seems much less weird in a world that just somehow came about without God: horror and anguish don’t seem particularly out of place. Someone else might well disagree. Someone else might well say that they think horror and anguish are equally likely given a loving deity and given something like naturalism. Someone else might even say that they think horror and anguish are more likely given a loving deity than given something like naturalism. Such a person strikes us as being really weird. It’s worth noting that the problem of evil was once felt to be so strong that philosophers indulged in the silliness of the logical problem of evil. While calling the problem “logical” is over-egging the pudding, it also seems wrong to think that there’s no egg there at all.

            Best,
            Yoaav

            November 24, 2016 — 1:47
  • A good piece of philosophy.

    Although not central to your paper, I found myself questioning the application of this line to my own mind: “Intuitively, the probability of there being evil given atheism is higher than the probability of there being evil given theism.” I confess the opposite seems so to me: that there is evil strikes me as more likely on theism than on atheism. (I don’t see how a background belief in atheism or theism would affect this judgment.)

    Perhaps my judgment above relates to my assessment of the role of the absence of Eden as evidence for atheism. The non-actuality of Eden doesn’t by itself entail that there is evil, for without Eden, there might be nothing at all. But if we are supposed to think that the probability of theism is higher on the Eden world than on a world with evil in it, then I find myself thinking, “that’s not plausible”. For the sake of rapport, I wish I could be more concessive. (Perhaps I can establish rapport in other ways…)

    On a more substantial note, suppose one thinks that evil, or particular instances of suffering, is evidence for atheism. Then can’t one infer that the absence of particular instances of evil — such as the absence of a suffering fawn in my living room — is evidence for theism? I bring this up to suggest that being more concessive than I am above doesn’t automatically result in one’s having a stronger case for atheism, since it could result in one’s having additional weights on the side of theism as well.

    November 21, 2016 — 21:13
    • Yoaav Isaacs

      Hi Josh,

      I absolutely share your aprioristic sensibilities. My ur-prior is also dominated by bare nothingness, and it continually amazes me that anything exists at all. There is therefore a very reasonable sense in which I think that evil makes for a tremendously strong argument for the existence of God, just not as strong an argument as the existence of dust, or shoes, or light, or trash, or anything else. It is, however, customary not to talk in terms of those thoroughly aprioristic probabilities. I am, for example, happy to say that I do not have tremendously strong evidence that chipmunks control the Federal Reserve. I say this even though I have rather striking evidence that the world does not consist merely of bare nothingness, and further evidence both for the existence of chipmunks and for the existence of the Federal Reserve. The idea is, I think, to relativize claims about what evidence one has to a certain background. In the case of the problem of evil I think that the background can be very modest indeed: perhaps consisting only in the negation of bare nothingness.

      Best,
      Yoaav

      November 24, 2016 — 1:56
      • Josh

        Thanks, Yoaav. That’s well put. (Of course, more could be said about how things may come out on various background conditions, but I’d rather not distract from your main point — which is well put.)

        November 28, 2016 — 19:33