Inscrutable evils
June 29, 2010 — 9:44

Author: Alexander Pruss  Category: Existence of God, Problem of Evil  Comments: 23

Let:
T = theism
N = naturalism
E = there is some evil
I = there is inscrutable evil

I have little direct intuition about P(I|T).  I have to actually calculate.  Start with:
P(I|T) = P(I|E&T)P(E|T) (exact, since I entails E).

Now, what is P(E|T)?  The kind of evil we have the best reason to expect a priori is bad free choices by significantly free persons (SFPs), understood in the libertarian sense.


P(God creates SFPs | T) = P(God creates SFPs | T & SFPs are creatable) P(SFPs are creatable | SFPs are possible & T) P(SFPs are possible | T).

Now, given that God isn’t an SFP (in the libertarian sense), and given that God, if he exists, exists necessarily, I suppose P(SFPs are creatable | SFPs are possible & T) = 1.  After all, given theism, a being other than God is possible iff it is creatable.  And P(God creates SFPs | T & SFPs are creatable) is moderately high.  Not too high, because there is a not too low probability that God would create nothing, since the only-God world is very valuable–it exhibits the values of simplicity, unity, beauty and lack of local flaws to an extraordinary degree.  But given that God creates something, that God creates SFPs is fairly likely.  Maybe the probability that God creates something given that SFPs are creatable is about 1/2 to 3/4, maybe closer to 1/2.  Let’s say it’s halfway: 0.625.  And the probability that God creates SFPs given that he creates something is very high, maybe 0.9.  So:

P(God creates SFPs | T) = 0.5625.

Now, P(E | God creates SFPs) is slightly more than P(an SFP sins | God creates SFPs).  Why only slightly?  Because it’s plausible that evils in a world where nobody sins would be unjust or that God would be the author of them in an objectionable way.  Maybe: P(E | God creates SFPs) = 1.2 P(an SFP sins | God creates SFPs).  So, approximately:

P(E | T) = P(someone sins | God creates SFPs) P(God creates SFPs | T) / 1.2 = 0.46875 P(someone sins | God creates SFPs).

Now, what is P(someone sins | God creates SFPs)?  Given that probably God would create several SFPs if he created at least one, it’s not too unlikely.  But God might create SFPs that are very unlikely to sin.  I think such SFPs would probably count as less free, and their free choices would likely be less valuable, but it’s a possibility not to be dismissed.  So, maybe P(someone sins | God creates SFPs) = 0.75.  So, approximately:

P(E | T) = 0.35

So:
P(I | T) = 0.35 P(I | E & T).

Now my intuition is that it is moderately likely that if E, all the evils are either sins, punishments for sin, or results of sin needed for freedom, and all of these are scrutable.  So P(I | E & T) is not too high: say, about 0.3.  So:

P(I | T) = 0.11.
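
To make the bookkeeping explicit, here is a minimal sketch of the arithmetic in Python.  The variable names are mine, the numbers are just the rough point estimates above, and P(SFPs are possible | T) is implicitly treated as 1, as the calculation above does.

# Back-of-the-envelope arithmetic for P(I|T), using the point estimates above.
p_creates_something = 0.625   # P(God creates something | T & SFPs are creatable)
p_sfps_if_creates = 0.9       # P(God creates SFPs | God creates something)
p_creates_sfps = p_creates_something * p_sfps_if_creates       # = 0.5625

p_sin_if_sfps = 0.75          # P(someone sins | God creates SFPs)
# As in the formula above: P(E|T) = P(someone sins | SFPs) P(creates SFPs | T) / 1.2
p_E_given_T = p_sin_if_sfps * p_creates_sfps / 1.2             # ~ 0.35

p_I_given_E_and_T = 0.3       # P(I | E & T)
p_I_given_T = p_I_given_E_and_T * p_E_given_T                  # ~ 0.11

print(round(p_creates_sfps, 4), round(p_E_given_T, 2), round(p_I_given_T, 2))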

Trent in an earlier post said P(I | T) is not too low, and I guess this is support for that.

Now, let N = naturalism.

Then:

P(I | N) = P(I | E & N) P(E | N).

Plausibly: P(I | E & N) is very high, I’d say 0.999 or higher.  (Unless I is taken to entail that there are beings capable of discerning reasons.)  So, approximately:

P(I | N) = P(E | N).

Now, P(E | N) is approximately equal to P(E | conscious beings & N) P(conscious beings | N).

Next, approximately:

P(E | conscious beings & N) = P(pain or SFPs | conscious beings & N & values) P(values | conscious beings & N),


where values is the claim that there really are values (which is entailed by E).  Now, P(values | conscious beings & N) is approximately P(values | N), we may suppose.  While P(values | T)=1, since God is by definition good, I think that P(values | N) is no more than 0.5 (think of the plausibility of non-realist accounts of value given N).  So:

P(E | conscious beings & N) = 0.5 P(pain or SFPs | conscious beings & N & values).

If there are values, it’s very likely that damage to the body is bad.  And there are clear benefits to being aware of damage to the body qua bad, etc.  So pain seems likely if damage to the body is possible.  However, the concept of damage to the body depends on the concept of proper function.  Now P(proper function | N & values & conscious beings) is bigger than P(proper function | N), but it’s still not that high.  (Maybe the only values are elegance of laws.)  The naturalistic accounts of proper function all fail, I think (and I have arguments for it).  On the other hand, proper function is very likely needed for consciousness–consciousness needs content, and indicator theories of content need proper function (this is a substantive claim, but I can defend it).  So:  


P(proper function | conscious beings & N) = 0.8.

So, we might approximate:

P(proper function | conscious beings & values & N) = 0.9.

And P(pain | proper function & conscious beings & values & N) is close to 1.

So, approximately:

P(E | conscious beings & N) = 0.45.

Now, what is P(conscious beings | N)?  Well, the best stories about consciousness and content (let’s suppose consciousness is contentful–unless pain is indicative of damage, it’s unlikely to be bad) require proper function.  So maybe:

P(conscious beings | N) = 1.25 P(conscious beings | proper function & N) P(proper function | N).

Now, P(conscious beings | proper function & N) is moderately high, but not too high.  From indicators to consciousness is still a leap: we might think we could imagine beings with content and no consciousness, and it’s not clear that proper function and indicators are enough for consciousness.  Moreover, we still need an evolutionary process leading to representational states complex enough for content, though a multiverse might take care of that (we also need organisms, but P(organisms | proper function & N) is close to 1–see argument later).  So, maybe, we can let P(conscious beings | proper function & N) = 0.7, or more likely less.  So:

P(conscious beings | N) = 0.875 P(proper function | N).

Now, plausibly, if there is proper function, there are organisms that exhibit it.  It’s hard to imagine, given N, any place for proper function other than in organisms or in artifacts made by organisms.

Now, there are two plausible stories about proper function: the Aristotelian story and the evolutionary story.  I shall suppose the two are mutually exclusive.  If the Aristotelian story is true, we’re very unlikely to get proper function in organisms given N.  We would need proper-function-transmitting laws, or something weird like that.


So:

P(proper function | Aristotelian account & organisms & N) = 0.05

What about the evolutionary account?  I think:

P(proper function | evolutionary account & organisms & N) = 0.95.

This is less than 1, because if there is a multiverse, there might be non-evolved organisms, and there might be problems with making sense of the probabilities that evolution requires given a multiverse.

And maybe there is some other account?  Accounts in terms of agency aren’t going to work given N, since agency presupposes content and hence proper function given N.  Let’s give a probability 0.1 to “some other account”.

Now, the evolutionary accounts of proper function have really, really serious objections.   So, at most:

P(evolutionary account | organisms & N) = 0.25.


So, overestimating (here P(Aristotelian account | organisms & N) = 1 - 0.25 - 0.10 = 0.65, and the last term generously takes P(proper function | some other account & organisms & N) to be 1):

P(proper function | organisms & N) = (0.05)(0.65) + (0.95)(0.25) + 0.10 = 0.37.

Now, P(organisms | N) is high if there is a multiverse and low otherwise (fine-tuning intuitions).  How likely is it that there is a multiverse?  Maybe fairly high, maybe not.  One reason to think it’s fairly high is that if N is true, then the Principle of Sufficient Reason (PSR) is false, and if PSR is false, we expect to see all sorts of weird stuff coming into existence ex nihilo.  On the other hand, maybe there has to be an all-encompassing spacetime, and maybe it’s of limited size?  But perhaps the naturalist can’t grant that if N is true, we expect to see all sorts of weird stuff coming into existence ex nihilo, as that would undercut science.  Moreover, there is a pretty good chance there’d be nothing if N.  So:


P(organisms | N) = 0.05 P(~multiverse | N) + 0.99 P(multiverse | N).  

Suppose:
P(multiverse | N) = 0.6.  

(Probably less as P(nothing | N) seems high.)  

So: 
P(organisms | N) = 0.614.

So at most:

P(proper function | N) = P(proper function | organisms & N) P(organisms | N) = (0.37)(0.614) < 0.28.

So:
P(conscious beings | N) = 0.245.

So:
P(E | N) = (0.45)(0.245) = 0.11.

And so:
P(I | N) = 0.11.
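
And, for symmetry, a minimal Python sketch of the naturalism-side arithmetic.  Again the variable names are mine; the 0.28 figure is carried over as the “at most” value used above for P(proper function | N), and P(I | E & N) is treated as 1.

# Back-of-the-envelope arithmetic for P(I|N), using the point estimates above.
# Proper function given organisms and N: a mixture over accounts of proper function.
p_pf_given_organisms = 0.05 * 0.65 + 0.95 * 0.25 + 0.10        # = 0.37 (an overestimate)

# Organisms given N: split on whether there is a multiverse.
p_multiverse = 0.6
p_organisms = 0.05 * (1 - p_multiverse) + 0.99 * p_multiverse  # = 0.614

p_pf = 0.28                   # the "at most" figure used above for P(proper function | N)

p_conscious = 0.875 * p_pf    # P(conscious beings | N) = 1.25 * 0.7 * P(proper function | N)

p_E_given_conscious = 0.45    # = 0.5 (values) * 0.9 (proper function) * ~1 (pain)
p_E_given_N = p_E_given_conscious * p_conscious                # ~ 0.11

p_I_given_N = p_E_given_N     # since P(I | E & N) is close to 1

print(round(p_pf_given_organisms, 2), round(p_organisms, 3),
      round(p_conscious, 3), round(p_I_given_N, 2))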

And that’s what we said about P(I | T).  So, inscrutable evil doesn’t provide a significant reason to believe theism over naturalism.

Honestly, I didn’t rig it.  That’s just how the numbers came out.  (But I could kind of see that my intuitions were leading that way.)  🙂


Actually, if one does the calculations to three decimal places, one gets a slight difference (about 0.005) between P(I | N) and P(I | T), in favor of P(I | N), but I can’t claim enough precision in my intuitions above to make that difference count as significant.

On reflection, I’d change at least two estimates.  I’d raise the probability of values given N, and I’d lower the probability of evolutionary accounts of proper function.
Comments:
  • christian

    Alex,
    Quick question: How are you thinking about these conditional probabilities?
    I think the probability that I exist is 1. I also think that the probability that I exist given that you exist is 1. Although I don’t think there is a God, I think the probability that I exist given that God exists is 1.
    From what you say above, I would expect you to think these assignments are wrong. Is that right and could you say why?

    June 29, 2010 — 11:22
  • This is the problem of old evidence, in the special case where the old evidence is certain. There had better be a solution to it! I don’t know the solution. My intuition is that we just consider prior probabilities after bracketing those things of which we are certain that we want to be able to use as evidence.
    I know next to nothing about how to get around old evidence. So here’s something half-baked. Consider this fact: A computer can engage in Bayesian “reasoning”. Indeed, that’s how a lot of spam filters work. But unless you actually feed the computer with the fact that it itself exists, it won’t assign probability 1 to its own existence. Indeed, typical spam filters don’t assign any probability to their own existence, and don’t even have a way of representing the proposition that they exist. So, just imagine us as such non-self-aware computers.
    There had better be ways of using our existence for evidence. Consider two scientific theories about our existence.
    Theory E: Evolution.
    Theory F: All the species in existence are formed by a single random permutation of 10^80 particles.
    Even without observing critters other than myself, I have good reason to prefer E to F simply because E makes it much more likely that a being like me would exist.

    June 29, 2010 — 14:43
  • Not to toot my own horn, but I have a paper on whether one’s own existence can be evidence.
    If you are interested, it is at http://udel.edu/~jpust/CartesianKnowledge&ConfirmationWeb.pdf

    June 29, 2010 — 15:00
  • Brad Sickler

    Interesting thoughts – thanks. I’ve never been big on using Bayes’ theorem in philosophy, but it can be fun nonetheless. Intuitions are fun too, especially since everyone’s are different. You say, “I have little direct intuition about P(I|T).” My intuition is that the probability of inscrutable evils given theism is 1 minus an infinitesimal. I say this because when I think about the nature of God as an unlimited being, and myself as a finite being with extremely limited resources, I would fully expect there to be things about the world that seem utterly wrong to me. Not just unpleasant things – I might expect those too – but I mean things that seem like they should not occur if God is really who we think he is.
    To suppose otherwise is to suppose that for every event in the universe’s history, it would be my judgment that said event is what should have happened; that I would always see the purpose for everything that happens; that everything would be, as it were, scrutable. But on what grounds could I suppose that? So I approach it like this: either everything that happens will be something I deem fitting given God’s sovereignty and character, or some things that happen will not be as I think they should be. Next I compare my ability to judge what should be permitted with God’s ability to make those judgments, and I perceive that there would be a vast gulf between the two. That is, I would expect – quite often – to disagree with what God allows (or causes). But that’s just to say that I would expect there to be inscrutable evils, and probably lots of ’em. So my a priori ruminations tell me to expect inscrutable evils with a probability of almost 1.

    June 29, 2010 — 15:43
  • Brad:
    “To suppose otherwise is to suppose that for every event in the universe’s history, it would be my judgment that said event is what should have happened; that I would always see the purpose for everything that happens; that everything would be, as it were, scrutable. But on what grounds could I suppose that?”
    Well, there are a couple of ways for ~I to hold. For instance:
    1. God doesn’t create.
    2. God creates a world with no evil and no libertarian freedom.
    3. God creates a world with no freedom and the only evils being evil choices justified by the value of free will, or deserved punishments.
    I think 1, 2, and 3 each have non-zero and non-infinitesimal probabilities, given T. Hence, P(I|T) is non-infinitesimally less than one.

    June 29, 2010 — 17:24
  • Sorry, 3 should be:
    God creates a world with freedom and the only evils being evil choices justified by the value of free will, or deserved punishments.

    June 29, 2010 — 17:25
  • Joel:
    Thanks for the reference. A very interesting paper.
    I am not sure about your claim in the paper that: “unlike the situation with almost all other cases of old evidence, it is impossible to be in an epistemic situation in which one rationally doubts (i.e. lacks certain knowledge of) one’s own existence or in which the epistemic probability of one’s existence is less than 1. E1 and E2 are such that a fully rational agent is necessarily certain of their truth.”
    What about cases of non-human animals? Maybe a cow knows that a certain kind of grass tastes good, but doesn’t know that she herself exists. It thus seems quite possible to have a Bayesian reasoner that has no self-concept. Granted, once one understands the question “Do I exist?” one understands that the answer is affirmative, but it might take some time before one understands the question. Does epistemic rationality require that one have a self-concept?
    Moreover, E2, the claim that I am conscious, is clearly a claim I can lack certain knowledge of. When I am in deep sleep, I do not have certain knowledge of the proposition that I am then conscious, because that proposition is false, and one only knows what is true. Nor am I failing to be rational in failing to believe that I am conscious, if in fact I am not conscious. On the other hand, if there can be non-occurrent knowledge, I do rationally believe many other things while in deep sleep, such as that Ottawa is the capital of Canada and that consciousness is a mental state.
    Plausibly, there can be non-conscious Bayesian reasoning, and non-self-conscious but conscious Bayesian reasoning. While one is conscious but not self-conscious, one might not realize that “I am conscious” is true, until one turns one’s mind to the question, at which point one typically becomes self-conscious. 🙂 Nor is it irrational to fail to believe “I am conscious” prior to turning one’s mind to the question.
    I am also not convinced by: “if one has (partial) beliefs about anything, then one exists and so a failure to fully believe that one exists is clearly a defect ‘internal to’ one’s belief system.” There is a step left out, I think. It follows from one’s having partial beliefs about horses that one has partial beliefs about horses. But it is not a defect internal to one’s belief system to fail to have the second order belief that one has beliefs about horses. (If it were a defect, we would embark on an upward regress. Rationality would require not only that I believe that I have beliefs about horses, but it would require that I believe that I have beliefs about beliefs about horses, and that I believe that I have beliefs about beliefs about beliefs about horses, and so on.) It seems to me that second-order beliefs result from introspection and/or self-observation (I may realize by observing my behavior that I don’t actually believe that something is bad for me), and to fail to engage in introspection and/or self-observation need not be any different than to fail to engage in sight. To fail to engage in sight is not a defect internal to the belief system.
    I am not convinced by the Dutch Book argument, because I think it proves too much. Consider the following proposition: <At some point in my life I am in a betting situation>. Any bet against this proposition is guaranteed to lose. But surely on Bayesian grounds I should not assign probability 1 to this proposition.
    The argument against the two-function approach seems to me to have a weakness. “If [the support function probabilities] are possible epistemic probabilities, then my arguments in Sections I and II would seem to show that the appeal to such logical or inductive probabilities will be unavailing in avoiding my conclusion.” But the arguments in Sections I and II don’t establish that a non-one probability to the proposition <I exist> is not a possible rational epistemic probability. On the contrary, if Bayesians are right, you quite rationally assign a non-one probability to the proposition that I exist. What the arguments in Sections I and II at most establish is that a non-one probability is not a possible rational probability assignment for me to the proposition <I exist>. But the intuition behind the two-function approach does not, I think, require that the support function be a possible probability assignment for me.

    June 30, 2010 — 12:34
  • christian

    Alex,
    I see. I didn’t really mean to raise the general problem of old evidence. I think I need to think more about whether I can state my worry without its answer depending upon having a solution to the problem of old evidence.
    What I really meant to be pushing was the distinction between objective and subjective interpretations of probability. Intuitively, Pr(I/T) has an objective chance that is just hard to figure out. However, intuitively, Pr(I/T) has a subjective probability that is extremely high. I’m not sure whether these two intuitions can be reconciled with other things I believe, like my belief that a variant of the Principal Principle (and its converse) is true. With that said, I just need to think more about it.
    In any case, the probability we assign to (I/T) seems to me to be immaterial. I think it’s hard to assign any probability to this claim with any degree of confidence. But I do think that the problem is not about whether there are inscrutable evils. The problem is about what probabilities we should assign to claims like “there is a justifying reason R for evil E given that E is inscrutable”. Here, if the probability is low, then one can run the argument from evil while “happily” accepting that the evils around us are entirely inscrutable.

    June 30, 2010 — 14:06
  • Interesting. Two main areas of comment:
    (1) Naturalism and value.
    You write:

    P(E | conscious beings & N) = P(pain or SFPs | conscious beings & N & values) P(values | conscious beings & N),

    where values is the claim that there really are values (which is entailed by E). Now, P(values | conscious beings & N) is approximately P(values | N), we may suppose. While P(values | T)=1, since God is by definition good, I think that P(values | N) is no more than 0.5 (think of the plausibility of non-realist accounts of value given N).

    Whether E entails “that there really are values,” where ‘values’ is incompatible with non-realism about value, depends on the sort of non-realism in question.
    Let’s say that I’m a non-cognitivist of some sort, where value statements are e.g., expressions of pro and con attitudes, and as such aren’t truth-apt. You ask me, “Tim, do you believe that the Holocaust was evil? Do you believe that suffering from advanced Alzheimer’s disease is evil?” and I’ll say “Oh yes, I definitely believe that both of those things are evil.” And I’ll also assent to E. Now, I might (in accordance with my own theory) believe that when I sincerely say “Suffering from advanced Alzheimer’s disease is evil” what I’m saying is akin to “Suffering from advanced Alzheimer’s disease–boo! hiss! blech!” and isn’t strictly speaking true or false.
    The only sort of non-realism about value that’s obviously incompatible with E is some sort of error theory, where statements of the form “F is evil” are truth-apt, but are all false. But even here we need to be careful–many error theorists are error theorists about morality, e.g., about things such as torture being ethically impermissible, but not error theorists about welfare, e.g., about things such as torture being bad for the person undergoing the torture. So even J. L. Mackie could allow that it’s true that Alzheimer’s disease is evil. Do you think that the probability of an error theory of welfare on N is about .5?
    (2) The upshot of the argument.
    It occurs to me that, if this argument does go through, it shows that both Naturalism and Theism are fairly (although not overwhelmingly) unlikely given the existence of Inscrutable Evil. If both are on a par here, then neither is made less likely relative to the likelihood of the other, but this is compatible with both being rendered less likely compared to other alternatives–such as some form of non-theistic non-naturalism. Since a lot of your complaints about naturalism seem to center around worries regarding mental content, consciousness, etc., I take it that a Nagel-type position or Colin McGinn’s “mysterian” stance would count as non-naturalistic–they don’t try to explain consciousness in e.g., functional terms. So this appears to be a probabilistic argument from Inscrutable Evil to non-naturalistic non-theism. Is that right?

    June 30, 2010 — 14:28
  • Joel:
    Two more comments about your paper. Suppose essentiality of origins holds. Then that you exist entails that your parents, call them X and Y, had a child. But that X and Y had a child is clearly evidence for the claim that they weren’t clinically infertile (it is possible to be clinically infertile and yet have a child with medical help, but most people who have children aren’t clinically infertile, I assume). So something entailed by the proposition that you exist can be evidence for you. Or suppose that propositions don’t change in truth value. On this view, a presently tokened token of “I am conscious” expresses a proposition p that entails that I am conscious at 2:43 pm. But that I am conscious at 2:43 pm is clearly evidence for lots of things. For instance, it’s evidence that nobody hit me on the head really hard at 2:42 pm, that I didn’t take any strong sleeping pills at 2:25 pm, etc.
    Christian:
    One’s probability that some inscrutable evil has a justifying reason surely depends on one’s probability for theism. 🙂 And if one works it all through, I bet P(I|T) will show up somewhere. Note that P(I|T) isn’t quite as inscrutable as you might think–my calculation seems to make sense.
    Tim:
    Granted, the non-cognitivist utters “The Holocaust was evil”, but if non-cognitivism is true, then she does not express a proposition by her utterance. And this sort of non-cognitivism implies that there is no such proposition as E. In particular, there is no such evidence as E, since evidence is a proposition (in the relevant sense of “evidence”). I don’t quite know how to take account of this in my calculations, but I think the basic intuitions are right.
    One possibility is to take E to be the proposition <Dthe proposition expressed in present English by “There is evil” is true>. If we do that, then on cognitivist views, E holds iff there is evil, and on non-cognitivist views, E is false, if we stipulate that the Russellian line on “the” carries over to “dthe”.
    As for the upshot, I don’t think a probability of 0.11 is all that low. It’s more than twice as high as the 0.05 that statisticians often use as a measure of significance. Suppose I toss two dice and get 5. P(5 | dice are fair) = 0.11 (4 of the 36 equally likely outcomes sum to 5, if my counting is right). But “I got 5!” is not very impressive evidence against the hypothesis that the dice are fair. 🙂
    Nor have I really done any calculations as to the probability of I on non-naturalistic non-theistic views. For instance, the probability of organisms having consciousness might be very small on non-naturalistic non-theistic views, because there is no reason to expect a correlation between the mental and the organic. And without a correlation between the mental and the organic, we may not have much reason to expect the minded beings to feel pain. (My reason for thinking pain is somewhat likely on naturalism and on theism is that organic beings are apt to suffer damage, and it is good for them to be aware of that damage, etc.) Moreover, on naturalistic and theistic views, we at least have some reason to expect consciousness: on naturalistic views, because of the likelihood that evolutionary processes would lead to complex information processing, and on theistic views, because God is likely to produce consciousness because consciousness is good. But on non-naturalistic non-theistic views, what reason is there to expect consciousness at all?
    Besides, I don’t think the non-naturalistic non-theistic views are very prima facie probable, because their non-naturalistic aspects ground very good design arguments for theism, and a view that consists of the conjunction of A and B, where A grounds a very good argument against B, is not very prima facie probable. If so, then their priors may be so low that the probability raising, even if there is any, is not a big deal.

    June 30, 2010 — 15:05
  • Tim:
    And here’s an argument against value-realism on naturalism. On naturalism, the most promising accounts of mental content are causal in nature. Now, reductive theories of value (e.g., reducing value to evolutionary fitness, or reducing value to pleasure and lack of pain, vel caetera) are quite implausible–there are very good arguments against them. But irreducibly axiological states of affairs are causally inefficacious given naturalism, and hence, even if there were such states of affairs, none of our thoughts would be about them.
    Now, the causal thesis has some problems with mathematics. So the naturalist might extend the causal theory of content to an explanatory theory of content (perhaps the fact that 2+2=4 explains our belief that 2+2=4 because the mathematical fact explains the structure of some neurological states). But I don’t think this will help the naturalist with values.

    June 30, 2010 — 15:14
  • christian

    One’s probability that some inscrutable evil has a justifying reason surely depends on one’s probability for theism. 🙂
    Indeed.
    And if one works it all through, I bet P(I|T) will show up somewhere.
    I think this is a bit tricky. I agree that P(I|T) *can* show up somewhere. I don’t think P(I|T) *needs to* show up anywhere.
    Note that P(I|T) isn’t quite as inscrutable as you might think–my calculation seems to make sense.
    I don’t share your intuitions about how to assign probabilities in the cases above. But I do agree with this:
    So, inscrutable evil doesn’t provide a significant reason to believe theism over naturalism.
    What I think is that if an evil is inscrutable, then we should think it as likely as not that there is a good that compensates for it. There is a justifying reason for it only if there is such a good. I’m just rehearsing Tooley’s argument here. Although it’s a completely separate issue, I think that we have to get clear on how to assign priors when doing calculations like you have done above, before we can make headway. I also think that we need to get into Humeanism before we make headway. I doubt that goods can metaphysically require distinct evils, since I think Humeanism about necessary connections is extremely plausible. This drives the probability of the existence of justifying goods way down, I think.
    Of course, this will affect my judgment of P(I|T) since I think that the probability that theism is true is extremely low.

    June 30, 2010 — 16:26
  • Sorry–formatting snafu; here is what I meant to say:

    One possibility is to take E to be the proposition “Dthe proposition expressed in present English by “There is evil” is true”. If we do that, then on cognitivist views, E holds iff there is evil, and on non-cognitivist views, E is false, if we stipulate that the Russellian line on “the” carries over to “dthe”.

    Hmm. I’ll have to think about this more, but offhand this seems question-begging against non-cognitivism. How about the following:
    Take E to be the proposition “It is appropriate/apt to utter (in present English) the sentence “There is evil”.”
    If we do that, then on cognitivist views, E holds iff there is evil (on some ‘objective’ sense of appropriate/aptness where it is proper to make a statement that p iff p), and on non-cognitivist views, E holds if the utterer does in fact have the attitudes toward some things that statements of the form “X is evil” are supposed to express. (E.g., if I crinkle up my nose, point at a piece of pizza, and say “oh, yuck!” even though I don’t find the pizza disgusting in the least, my expression might be inapt, even though it isn’t false.)

    June 30, 2010 — 18:52
  • Tim:
    First, whether your appropriateness proposition works depends on what one thinks about the norm of assertion. If one thinks the norm of assertion is justification, and if one doesn’t think that having a justification for a falsehood is itself an evil, then it could be appropriate to utter “There is evil” even though there is no evil.
    Second, I am dubious whether the naturalist can make any sense of the notion of a norm, and hence of appropriateness. I think norms rise and fall with proper function. 🙂
    Third, and most importantly, one can always take the evidence that we in fact have and replace it with some consequence of it in such a way that the consequence won’t be evidence against some view that we want to save. But to do that is to discard some of the evidence. When we read about the medical experiments at Auschwitz, we get evidence for <There is evil>. From that, combined with some theory about appropriateness, we might later conclude <It is appropriate to utter “There is evil” in dthe current English language>. But to replace the evidence that we directly got with this consequence of the evidence is to concede too much to the non-cognitivist and to discard some of the evidence. Rather, we should take what we read as evidence that non-cognitivism about values is false, just as the observation that we have two hands is evidence that non-cognitivism about organic parts is false.

    July 1, 2010 — 10:07
  • Christian:
    “I doubt that goods can metaphysically require distinct evils”
    Clearly, there are types of goods that metaphysically require distinct evils. For instance, consider these types of goods:
    1. Withstanding torture in a righteous cause.
    2. Forgiving a wrongdoing that was done to one.
    3. Apologizing for a wrongdoing that one did.
    4. Refusing to obey a command that one was given and that it would be immoral to obey.
    For each of these goods, that the good occurred metaphysically requires an evil (torture, wrongdoing, wrongdoing, immoral command). If there is such a thing as analyticity, in cases 2 and 3 the entailment from the occurrence of a good of the type to the occurrence of a wrongdoing is analytic.
    Now, you might say that at least some of these types of goods fall under a more general type of good which does not entail an evil. For instance, 2 and 3 may respectively fall under:
    2′. Forgiving something that one believes to be a wrongdoing that was done to one.
    3′. Apologizing for what one thinks was a wrongdoing that one did.
    I am not sure, however, whether the descriptions in 2′ and 3′ capture all the value in 2 and 3. It seems more valuable to apologize for a wrongdoing that one did than for a wrongdoing that one didn’t do or for something that one incorrectly thinks is a wrongdoing.
    And, more seriously, 2′ and 3′ still entail the occurrence of an evil. For instance, 2′ entails that either a wrongdoing was done to one or that one incorrectly believes that a wrongdoing was done to one. (Even if one doesn’t think false belief is always an evil–I think it always is–one should think that incorrect attribution of wrongdoing is always an evil; more so if the attribution is done to a real person, but even if one attributes it to an imaginary person whom one then “forgives”.)

    July 1, 2010 — 10:17
  • christian

    Those are interesting cases. I *think* they can be handled by getting clear on the individuation conditions for states of affairs. The right account of the SOA’s in (1) – (4) will count them as overlapping, and hence, they will not be distinct. So these cases will be consistent with Humeanism.
    For example, both the bad and the good SOA in (1) will include the same ‘torturing’. So, they will overlap. Both the bad and good SOA in (2) will involve the same ‘wrongdoing’. So, they will overlap. Anyway, that’s the strategy. I’m also being a bit sloppy. I mean the Humean thesis to be restricted to basic intrinsic value and disvalue. I also would deny that any of (1) – (4) have basic intrinsic value or disvalue (roughly, because they are not fine-grained enough).

    July 1, 2010 — 12:20
  • Joel Pust

    Thanks for the perceptive comments Alex!
    I’ll think a bit more about the remarks above, but the worry about cases such as that of essentiality of origins was what I intended to exclude by restricting the entailment to those which were a priori knowable (perhaps I used the phrase “a priori entailed” for that idea).

    July 1, 2010 — 13:02
  • Joel:
    I missed the a prioricity! Yes, that eats up both my essentiality of origins example and my example that my being conscious at t1 can be evidence (I am myself sceptical of a prioricity, but that’s another story).
    After posting my comment, I realized an interesting thing. I think my objection to your objection to the two-function way does apply in the case of <I exist>. But it doesn’t apply in the case of <somebody exists>. For while you might rationally doubt that I exist, if your arguments in Sections I and II work, nobody could rationally doubt that somebody exists. And most of the interesting applications involve the alleged evidence from <somebody exists>, so my point here wouldn’t be fatal to the applications of your position to fine-tuning, etc.
    However, I really do think Bayesian rationality doesn’t require having any beliefs about oneself. Let me expand on my cow story. Cows have all sorts of sophisticated intellectual abilities (as compared to, say, those of oysters), but we may suppose they’re not self-aware. We could easily imagine Bayesian cows. Moreover, we could imagine supercows which are bred to be scientists, still without being self-aware. The supercows are able to formulate and test sophisticated hypotheses. On behavioral grounds, a supercow might then arrive at the concept of a critter with Bayesian information processing, and it might then apply that concept to neighboring supercows and to humans. There would perhaps be nothing irrational about a genius supercow that came up with the concept of a critter with Bayesian information processing, and only later discovered that there are some entities that fall under that concept–without ever realizing that it itself falls under it. Or maybe it would observe the behavior of others, and hypothesize that there is one more supercow besides the supercows that it has observed. From that hypothesis it might make the leap to self-awareness and an <I exist> belief, or it might not.
    Another thought. There surely can be unconscious Bayesian reasoning. Some of it may even go on while we’re totally unconscious, say in deep sleep. It would not surprise us if neuroscientists learned that this happens, nor would it conceptually puzzle us very much. Neither would it surprise us if they learned that the unconscious Bayesian reasoning doesn’t assign probability 1 to <I am conscious>. And we wouldn’t say that this reasoning is irrational just because it fails to assign probability 1 to the then-false proposition <I am conscious>.
    Here’s a query, and then I’ll shut up. Is it a priori that <I exist> entails <There is an intelligent being>?

    July 1, 2010 — 14:47
  • Christian:
    My challenge then is to find SOAs that (a) are respectively parts of the SOAs (1)-(4) or are fine-grained refinements of these, (b) have all the positive intrinsic value contained in the SOAs (1)-(4), and (c) do not entail any evils.

    July 1, 2010 — 14:50
  • christian

    Hi Alex,
    I think there are SOA’s of your being a philosopher and my being a climber. I think these do not overlap. So, I think that either of these SOA’s could exist without the other. These SOA’s do not overlap because they do not contain a common constituent. If some SOA’s contain a common constituent, then they overlap. These constituents include individuals, properties and SOA’s.
    Suppose one is apologizing for a wrongdoing one did.
    If so, one is apologizing for doing x, where doing x is a wrongdoing. This is a conjunctive SOA that involves an apologizing and wrongdoing. Let’s call this SOA1 (state of affairs 1).
    Now, let’s consider the wrongdoing one did. Suppose this wrongdoing, the doing of x, is lying. Let’s call this SOA2.
    I say SOA1 and SOA2 overlap. They share a common constituent. It is a lying event that is a conjunct of SOA1. In the first case, one is apologizing for it. In the second case, one is performing it. This event is a common constituent of both SOA’s, namely by being a constituent of SOA1 and by being identical to SOA2. Thus, they overlap. Thus, these SOA’s are not distinct, they overlap.
    So, Humeanism is consistent with this case.
    I took one example from your cases. I think I can say the same thing for the others. Quite generally, there are no necessary connections between non-overlapping SOA’s. If so, there is gratuitous evil. This is because the denial of gratuitous evil assumes there are necessary connections between non-overlapping SOA’s (I think).
    At least, that’s the idea.

    July 1, 2010 — 22:57
  • So, SOA1 and SOA2 overlap. I agree. But I took your view to be that intrinsic value is borne by SOAs that do not entail the existence of evil. And it is this that I was challenging. What SOA is there that bears the value contained in apologizing for one’s having lied and that does not entail an evil?

    July 2, 2010 — 9:03
  • christian

    Well, I’m not really convinced that there is any intrinsic value in an apology. But I can suppose there is. If there is, I would say that an apology is an expression of a virtue. The idea would be that the relevant SOA is simply one in which one utters “I’m sorry for lying” and, in so doing, one expresses a virtue. I don’t think the obtaining of this SOA entails that one actually lied. However, if you think the value of an apology depends upon one’s having actually lied, then I would say that the value of an apology is not intrinsic to it. If so, I could accept that the positive value of an apology entails the existence of evil.
    Anyway, the form of Humeanism that seems plausible to me is one that restricts itself to intrinsic properties. I agree that something’s being extrinsically good could entail the existence of an evil. What I’m denying is that something’s being intrinsically good could entail the existence of some non-overlapping SOA that is intrinsically bad.
    Make sense?

    July 2, 2010 — 13:07