Value Promotion and Sub-Optimal Worlds
August 9, 2012 — 19:38

Author: Dean Zimmerman  Category: Existence of God  Comments: 20

Call an objection to the existence of God a ‘problem of sub-optimal worlds’ when it appeals to the claim that God has reason to maximize the value of worlds. Since there are better (feasible) worlds than the actual one, these arguments conclude, in one way or another, that God does not exist. Although I haven’t looked at the literature closely, my impression is that every instance of the problem of sub-optimal worlds assumes a very simple relationship between the intrinsic value of a state of affairs and reasons for action (they don’t always talk about reasons, but my point could be cast in terms of virtues or whatever else one prefers). Something like this is typically assumed without comment:
Promotion: For every domain of intrinsic value D and subject S, S has reason to maximize D, i.e., for every additional degree of D that could be attained, S has reason to attain that additional degree of value.
Promotion is plausible for some domains of intrinsic value, such as welfare. It’s plausible that, for every additional degree of welfare that I could bring about in your life, I have some reason to take the necessary means of attaining that additional degree of welfare. But does Promotion hold for every domain of intrinsic value? I don’t think so.


Consider the following plausible candidates for intrinsic value:
–Beauty
–Grace
–Moral Value
–Knowledge
Beauty may be the sort of intrinsic value that provides one with reasons not to destroy it or reasons to appreciate it, but it does not follow that one has even defeasible reasons to maximize this domain of value. Moral value (e.g., moral rightness, moral permissibility) might be the sort of intrinsic value that results when an agent correctly responds to other non-moral values, such as welfare, but it doesn’t follow that we have reasons to maximize the moral value in the world. Someone might be in an intrinsically valuable state when she knows that P or when she exhibits grace. But why suppose that an agent thereby has reason to maximize the knowledge or grace that is instantiated in this world?
The moral is this: if X is a domain of intrinsic value, it follows neither that an agent has reason to maximize X nor that an agent has reason to promote X to any degree at all. Once we recognize this point, those who advance the problem of sub-optimality must provide some reason to think that the overall value of worlds is a domain of value that an agent has reason to maximize. They also must identify some problem in the following argument:

  1. There is at least one domain of intrinsic value such that (i) we do not have reason to maximize it, and (ii) it is partly constitutive of the overall value of worlds.
  2. If 1, then the overall value of worlds is also such that it does not provide reasons to maximize it.
  3. Therefore, the overall value of worlds does not provide reasons to maximize it.
  4. If 3, then God’s being unsurpassably good does not depend on whether God maximizes the overall value of worlds.
  5. Therefore, God’s being unsurpassably good does not depend on whether God maximizes the overall value of worlds.

The premises in the argument are 1, 2, and 4, so let me quickly say something about each of them. In defense of 1: I’ve already defended the claim that there are kinds of intrinsic value that do not provide reasons to maximize. What remains to be established is that some of these kinds of intrinsic value contribute to the overall value of the world. In the four cases I mentioned above (beauty, grace, moral value, and knowledge), I think it is intuitive that a world with lots of those kinds of value is better than a world with none. I think, therefore, that there is a pretty strong case for 1.
In defense of 2: The case for 2 is reasonably strong, but perhaps less so. We can commit the fallacy of composition here if we aren’t careful. I’m not assuming this: if X doesn’t have property P and X partially determines Y, then Y does not have property P. Beauty doesn’t have the property being appropriately attributable solely to maximal states of affairs, but the overall value of worlds probably does have that property. But when we are considering the property providing reasons to maximize, this sort of inference does seem fairly plausible. If the overall value of worlds is determined in part by one or more kinds of intrinsic value that do not provide reasons to promote or maximize, it would be surprising to discover that the overall value of worlds does provide such reasons. At any rate, if the problem of sub-optimal worlds is going to be a strong argument, we need some explanation of why the truth of 1 fails to reveal that we do not have reasons to maximize the overall value of worlds.
In defense of 4: The fourth premise is pretty straightforward. If God has no reason (not even a merely justifying one) to maximize the overall value of worlds, it’s hard to see why God’s goodness should be affected by whether He maximizes the overall value of the world God actualizes. Analogously, since God has no reason to maximize the number of toothpicks in the world, His failure to do so tells us nothing, by itself, about how good God is. (I’m also assuming here that if the intrinsic value of the world does not provide God with a reason to maximize that value, then God does not have such reasons.)
Once we see that Promotion fails to apply to some kinds of intrinsic value, the proponent of the problem of sub-optimal worlds needs to provide some argument that the value of worlds is the sort of value that provides reasons to maximize. In addition, since 1, 2, and 4 jointly provide a plausible argument that the overall value of worlds does not provide God with reasons to maximize, the proponent of the problem must explain what is wrong with at least one of those premises.

Comments:
  • Clayton Littlejohn

    Hi Chris,
    Nice post. I think you’re right about Promotion. (Although, _knowledge_? Why think that’s intrinsically valuable?) I don’t think that the way to respond to the argument would be to argue that the value of worlds is the sort of value that provides reasons to maximize. It seems that you’ve killed that horse already. (Is that what precedes the beating of a dead horse?) Probably a more promising strategy would be to argue that some subset of values call for promotion and these are in short supply. As you note, welfare seems to be the sort of thing that might call for promotion. Can we simply retool the argument by saying that the other values don’t call for promotion (either because they aren’t values (e.g., knowledge) or because they don’t provide an agent with moral reasons to act (e.g., beauty)) and then formulate the argument in terms of welfare?
    That’s how I’d try to run the argument (if I were to try to run an argument from sub-optimality).

    August 10, 2012 — 9:56
  • Hi Clayton,
    First, my objection here is directed solely at the problem of sub-optimal WORLDS. As far as this objection is concerned, I have no complaint against the problem of sub-optimal WELFARE DISTRIBUTIONS. This sort of objection might rely on the following two premises:
    1. In the absence of countervailing considerations, God would maximize the welfare of creatures.
    2. Probably, some person’s welfare is not maximized, and there is no countervailing consideration that would justify this failure to maximize.
    So, I think, we agree here, and I took something like this to be your main point.
    My next two points are more quibbles than anything else. Second, I do think that beauty might provide moral reasons to act; it just wouldn’t provide moral reasons to promote or maximize. The intrinsic value of beauty might explain, for example, why it might be immoral to casually destroy beautiful objects.
    Third, one could of course deny that knowledge or something else on the list is an intrinsic value. It is plausible that knowledge (or some sort of epistemic value) has intrinsic value, but I agree that one could debate this. But if one is going to put forward the problem of sub-optimal WORLDS and claim that the intrinsic value of worlds fits Promotion, I think this proponent owes us some story about which intrinsic values contribute to the overall value of worlds and how it is that the value of worlds ends up fitting Promotion. This may seem daunting and may seem like a good reason to switch to the problem of sub-optimal welfare distributions. I agree, but:
    One final point about the difference between the problem of sub-optimal welfare distributions and the problem of sub-optimal worlds. One nice thing about focusing on worlds is that it is harder to see how there could be countervailing considerations that would defeat God’s (alleged) reason to maximize the overall value of worlds (though see pp. 74-78 of Langtry’s book for why the nonexistence of an optimal world would count as a countervailing consideration). God’s reason to maximize welfare seems subject to a far wider array of countervailing considerations. So IF skeptical theism is a promising strategy against the problem of evil, then it’s likely a promising strategy against the problem of welfare distributions too. So one potential downside of the problem of sub-optimal welfare distributions is that it is more susceptible to skeptical theist reasoning.

    August 10, 2012 — 13:55
  • christian

    real quick.
    i’m a bit confused about the axiology. maybe the worry is here:
    “The moral is this: if X is a domain of intrinsic value, it follows neither that an agent has reason to maximize X nor that an agent has reason to promote X to any degree at all.”
    there is a rather large laundry list of philosophers that take intrinsic value to be instantiated, iff, there is a reason to promote, desire, love, etc. the thing that has it. there are all sorts of permutations of this view. are you suggesting this is false?

    August 11, 2012 — 1:18
  • Good question, Christian. First, my objection is far more modest than the assumptions required to vindicate the problem of sub-optimal worlds. For the purposes of giving this objection, I don’t need to take a stand on which things, if any, have intrinsic value. And, for the most part, I don’t really need to take a stand on what must be true of something that has intrinsic value. If welfare is the only thing that has intrinsic value, then I think Promotion is probably true. Yet the proponent of the problem of sub-optimality can’t be happy with this. He assumes that maximal states of affairs can instantiate intrinsic value, which is incompatible with the claim that only welfare has intrinsic value. In a way, the larger point of my post is this: the problem of sub-optimal worlds requires a number of assumptions which are very controversial, so once these assumptions are pointed out, it’s hard to see why the argument should have broad appeal.
    Second, yes, lots of people have endorsed the conjunction: some states of affairs instantiate intrinsic value & Promotion is true & welfare is not the only intrinsically valuable thing. But lots of people also have rejected that conjunction. Even if Scanlon thinks that there is such a thing as intrinsic value, he explicitly rejects Promotion (see the chapter on values in his What We Owe to Each Other, if I recall his book properly). I think a virtue theorist also will be apt to reject this conjunction. I’m not deeply entrenched in the ethics literature, but my guess is that more people would reject this assumption than endorse it. (I’m interested in hearing thoughts on this guess from those better informed than me. I’m also very keen to hear what people think are the most interesting and forceful arguments for Promotion in the literature.)
    Third, I think I have elicited intuitive support for this conditional: if beauty, grace, moral value, or knowledge have intrinsic value, then Promotion is false (or at least questionable). If you don’t find this case plausible, I’m interested in hearing why. Again, I welcome suggestions for further reading.

    August 11, 2012 — 21:36
  • christian

    so many issues here. . . bear with me.
    first, i would distinguish the claim that god has a reason to better a world from the claim that god has a reason to make a world as good as he can. maybe these claims come apart. the label ‘maximization’ doesn’t wear its meaning on its sleeve.
    second, i don’t know of any version of the argument from evil that assumes your claim ‘promotion’. i do think it is a fair criticism of some of such arguments that they assume maximizing consequentialism. i think it is also important to distinguish between there being reasons, on the one hand, and some subjects having reasons, on the other. i doubt that subjects unaware of the great value of x-ing have a reason to x, for some appropriate x.
    third, i need to hear more about maximization. if i can maximize pleasure at the cost of understanding, in some particular circumstance, i may still have reason to bring about pleasure, but i think it is trumped by reasons to bring about understanding. perhaps you mean prima facie reason? or do you mean all things considered reason in the argument?
    fourth, the only thing on your list that strikes me as a candidate for intrinsic value is knowledge. my list is pleasure, virtue, and knowledge. the other things have extrinsic value, if any.
    fifth, i think that if something is intrinsically good, there is a reason to desire and promote it, all else equal. so i think there is a reason to promote knowledge and pleasure.
    sixth, if x is a domain of intrinsic value, i say there is a reason to promote anything in that domain, all else equal. if something is good then it is better that it exist, and we have reason, always, to make things better. so i reject premise 1, i think.
    seventh, i don’t understand premise 2. could it instead be read as: if 1, then there is no reason to make the world as good as it can be? if so, you would seem to reject maximizing consequentialism, i suppose. but my feeling is that there is obviously some reason to maximize the good, and always. i suspect it is simply outweighed by rights, or entitlements, or prudential considerations, or whatever in particular cases. i don’t see there being any plausibility at all in denying that there is some reason or other to maximize the good, that is, the total amount of intrinsic value there is, whatever the bearers of intrinsic value happen to be.
    nonetheless, i think i do agree with your conclusion. suppose there is no best world. ought implies can. so it’s not true that god ought to maximize. perhaps you want something stronger though. do you really want to say that god has no reason whatsoever to bring about the best world he can, even if there is a best world he can bring about?
    if there are two worlds x and y, and x is better than y, and there are no other morally relevant considerations, i think one ought to bring about y. i think god ought to bring about y too. i think ought entails there is a reason to do what one ought. so there is always a reason to make things better, all else equal. are you rejecting this claim?
    ~christian

    August 12, 2012 — 1:05
  • Hi Christian,
    Here is a response to your first two points. I’ll get to the others tomorrow. First, by “maximize” I have something like this in mind: S has reason to maximize domain of value D iff, for every additional degree of D that the subject could bring about in the circumstances, S has reason to bring about that degree of D. So, if God has reason to maximize the overall value of worlds, then God has reason to bring about the best world God can (usually assumed to be a world such that there are none better). Alternatively, God might have reason to satisfice with respect to the overall value of worlds, which would mean that God has reason to make a world “good enough” but does not have reason to make the world any better. The proponent of the problem of sub-optimal worlds must appeal to the very strong claim that God has reason to maximize the value of worlds. Reason to satisfice isn’t sufficient for his purposes.
    You are right that a more careful discussion of maximization and a more careful formulation of Promotion would be more sensitive to the distinction between there *being reasons* and *one’s having reasons*. But I’m ignoring that distinction because it is generally assumed that God possesses all the reasons there are for God to possess. If that assumption is false, its falsity may cause additional trouble for the problem of sub-optimal worlds: e.g., there might be reason to create the best possible world, but God doesn’t possess that reason.
    Second, I take there to be a number of very important differences between problems of evil, on the one hand, and problems of sub-optimality, on the other. Problems of evil typically try to get by with much more modest assumptions than the problems of sub-optimality. So while I agree with you that Promotion is not typically appealed to in the literature on the problem of evil, it *is* relied on in the literature on the problem of sub-optimality. For example, although Wielenberg’s 2004 paper on whether God must create the best world doesn’t talk about reasons, I think it’s pretty clear that he is relying on a version of Promotion cast in terms of virtue. I’m also pretty sure that Leibniz had something like Promotion in mind when he said that God must create the best.

    August 12, 2012 — 4:53
  • John Alexander

    I am not sure that it is clear what ‘has a reason’ refers to; either 1) S possessing a reason R for doing x or 2) there is a reason R that S should possess in deciding whether or not to do x. If you mean 1 then it seems that your argument would work, but that would not be very interesting in itself. If you mean 2 then it seems that the proponents of the suboptimal world problem might have a point. It is also not clear that we need to understand ‘suboptimal world’ as one that has less than the maximum value of whatever it is we are considering. Let us assume that there is no maximum value to anything but that there can always be an increase in value, v+1. A suboptimal world would be one where v-1 is the case.
    Many people accept that the only reason for allowing evil to exist is that it is necessary for a greater good to exist or to eliminate an equal or greater evil. Let us imagine a scenario where a good being has to decide whether to create a world, W1, with beings that do not get cancer or a world, W2, where beings do get cancer. In all other respects W1 and W2 are duplicates regarding the existence of other illnesses and evils. W2 is a suboptimal world. It is clear that some good things do occur (not in all cases to be sure, but that is not required) as a result of people getting cancer. But it is also clear that these same goods occur in W1. If the same goods would occur in W1 and W2 then it seems that the creator should create W1 if he or she is a good being.

    August 12, 2012 — 11:46
  • Continued from previous reply to Christian. Third, one can have a reason that is defeated. So, depending on which terminology you prefer, I’m talking about prima facie or pro tanto reasons.
    Fourth and fifth, I intended to pick out the virtue of grace, but I could have just as easily focused on the virtue of honesty, or whatever virtue you prefer. Suppose, somehow, by murdering someone, you can raise Fred’s virtue by some small amount. Do you really think that you have a reason to murder that person because doing so would promote Fred’s virtue to some small degree? (Of course, the reason would be defeated, but I’m asking whether you think there is a reason in the first place.) Also, if I had reason to promote knowledge without limit, I would have some reason to open up the phone book and start memorizing random phone numbers. But do you really think that I have some reason (even if defeated) to know trivial facts? I would also have some reason to create new people simply because they would increase the number of times *being known* is instantiated in the world. Do you really think that you have such a reason?
    More importantly: the proponent of sub-optimal worlds cannot limit himself to your list of intrinsically valuable things, at least not if intrinsic value is to motivate his objection. Anyone giving that objection to the existence of God must insist that maximal states of affairs are the sorts of things that can bear intrinsic value, but maximal states of affairs don’t have virtue or pleasure, and they don’t know anything. Perhaps you meant to include maximal states of affairs in your list.
    Sixth, I agree that the following principle has intuitive force: if we know that X would make something better, then we have some reason to pursue it. Nonetheless, I think once you start thinking about some plausible candidates for intrinsically valuable states of affairs (e.g., knowledge, virtue, and beauty), the principle seems to have counterexamples.
    Seventh, my objection is, I think, much more friendly to the sorts of maximizing consequentialism that people outside of phil religion tend to endorse. Take utilitarianism, for instance. If moral value is exhausted by one’s promotion of welfare, pleasure, or happiness, then the problem of sub-optimal worlds does not get off the ground (or it simply reduces to the problem of sub-optimal welfare distributions; see my reply to Clayton). Yes, my objection is incompatible with this version of consequentialism: an act is morally good to the extent that it promotes the overall value of worlds. Although I’m not super entrenched in the literature, my impression is that no one in mainstream ethics holds such a version of consequentialism.
    Your “Nonetheless”: good point about ought implies can. But yes, I’m after the stronger claim that there is no reason in the first place for God to create a world such that there is none better.
    Your last paragraph: for the purposes of giving this objection, I need take no official stand on the claim that there is always a reason to make things better. But, if that claim is meant to apply to all domains of intrinsic value and virtue and knowledge are intrinsic values, then yes I reject that claim.

    August 12, 2012 — 17:22
  • Hi John,
    You distinguish between two senses of ‘having a reason’ and then you state some implications of the distinction. I’m not sure I understand your point there. Can you please elaborate?
    By ‘sub-optimal world,’ I mean a world such that there is at least one world that is better. An optimal world is one such that there are none better. I think my use of ‘sub-optimal’ captures the sort of case you had in mind.
    I think your second paragraph very subtly changes the subject. I’m talking about arguments that rely on very strong principles concerning God’s reasons to add good to the world. Your second paragraph concerns God’s reasons to prevent bad from entering the world. God might have reasons to minimize the bad without having reasons to maximize the good. Essentially, your second paragraph has changed the subject from problems of sub-optimality to the problem of evil.
    In addition, I think your second paragraph affirms the very common and seemingly obvious claim that God’s existence is incompatible with gratuitous evil, evil that serves no purpose. As it turns out, however, the standard formulation of the problem of evil is committed to the possibility that God allows gratuitous evil. This will be the subject of a future blog post, but it might be a while before I get around to it.

    August 12, 2012 — 17:36
  • christian ryan lee

    thought i would write in my full name. wasn’t intending to seek anonymity or anything. so i have a few thoughts.
    perhaps the distinction between having a reason and there being a reason is a red herring in the case of god. i’m not sure though. agreed that he would be aware of all the reasons there are, but that may still not suffice for his having those reasons that he is aware of. for example, i like chocolate and suppose you don’t. that i like chocolate gives me a reason to eat some, but that you are aware that i like chocolate doesn’t, not clearly anyway, give you a reason to give me chocolate. in the case of moral reasons, or reasons generated by facts regarding intrinsic value, perhaps WE ALL must have reasons to promote the value in question. but YOU can’t say this, in which case it would be good to hear what distinguishes the reasons you have in mind from the reasons provided by the chocolate case.
    now, if you are using ‘reason’ in the pro tanto sense of the term ‘reason’, i think there are problems. i mean, i think that is the right way to use the term, but it’s just that the argument, i suspect, doesn’t go through on that understanding. i do think one has a reason to promote fred’s virtue and knowledge. it seems more clear in the case of raising one’s children. but i think the temptation to deny this in the case of others can be explained away since we tend to hold other people responsible for their own character development, and so we are more likely to think that they, and not us, have reason to promote their own character. however, i think this tempting thought is mistaken. i AM my brother’s keeper. i do have obligations to make other people better people. in any case, i do think we have reason to promote virtue and knowledge when we are aware of various circumstances in which we can do so, but this reason is often overridden. in fact, i think this is the clearest case of having a moral reason to promote anything, much more clear than pleasure.
    “Anyone giving that objection to the existence of God must insist that maximal states of affairs are the sorts of things that can bear intrinsic value, but maximal states of affairs don’t have virtue, pleasure, and they don’t know anything. Perhaps you meant to include that in your list.”
    Fair point! I do NOT mean to include this into my list. I mean to restrict my claim about intrinsic value to BASIC intrinsic value states. this is the industry standard in the literature and i think it is the right response to your worry.
    i think a great many philosophers are maximizing consequentialists. in particular, i am thinking of feldman, norcross, and kagan. and having gone to many ethics conferences, i have become aware that this view, if not the most widely accepted view, is certainly high on the list. the more finessed versions adjust utility for desert, so that though one ought to maximize the good, this may not mean maximizing happiness. but i take it you reject this view too?
    “I need take no official stand on the claim that there is always a reason to make things better. But, if that claim is meant to apply to all domains of intrinsic value and virtue and knowledge are intrinsic values, then yes I reject that claim.”
    maybe we have different conceptions of intrinsic value? i’m thinking along the lines of moore (1903) and ross (1930), and more recently scanlon, and zimmerman, oddie, and lemos. is there someone that defends the sort of axiology you have in mind in print? here is a semi-recent discussion that may be relevant:
    http://peasoup.typepad.com/peasoup/2007/03/help_jamie_fix_.html

    August 12, 2012 — 18:20
  • Your first two paragraphs (ignoring the one about your name): I’m not sure I understand the point you are making with the chocolate case. Sorry. I think the second paragraph essentially boils down to us having different intuitions about when one has a reason rather than any special issue regarding pro tanto vs all things considered reasons.
    In some ways, I think we are at cross purposes. I want to say only as much about intrinsic value as necessary to resist a very specific sort of objection to the existence of God (which wasn’t as clear in the initial post as it should have been). I think you are more interested in having a conversation about intrinsic value that is independent of that context. That’s okay, and I’ve found the conversation helpful, but I think our different purposes have led to some confusion. It sounds like you are willing to concede my main point, namely that the problem of sub-optimal worlds cannot be motivated by appealing to the intrinsic value of maximal states of affairs. On your view of intrinsic value, maximal states of affairs simply fail to have intrinsic value.
    Yes, maximizing consequentialism is a major player these days. But I’m not sure that any view of maximizing consequentialism that is currently advocated in the mainstream literature can support the problem of sub-optimal worlds as I am thinking about it (the problem of sub-optimal welfare distributions is another matter). The consequentialisms that I’m most familiar with are very welfare-focused. But, as I’ve said above, if we are going to limit our list of intrinsically valuable things to welfare, then I think promotion is true. More generally, it is no contradiction to say that both welfare and beauty are intrinsically valuable, but that moral value consists solely in maximizing welfare.
    Perhaps the way to think about my objection is in the form of a dilemma. Either maximal states of affairs have intrinsic value or they don’t. If they don’t, the proponent of sub-optimal worlds can’t appeal to intrinsic value to motivate his objection. If they do, then the proponent is committed to rejecting any maximizing consequentialism, including many welfare-focused ones, that does not allow maximal states of affairs to have intrinsic value. In addition, (X): if maximal states of affairs do have intrinsic value and if knowledge, virtue, or beauty are intrinsically valuable, it is quite plausible that God fails to have reason to actualize a world such that there is none better. Obviously, you disagree about X. But notice that X is a rather modest conditional that puts very few constraints on which moral theory is true. In particular, it is compatible with welfare-focused, maximizing consequentialism.
    Consider this remark of Scanlon’s: “The idea that to be good is simply to be ‘to be promoted’ can seem an extremely natural, even inescapable one. It is plausible to think that, as Shelly Kagan suggests, the good simply is that which we have reason to promote. But although there are many cases in which this is true…when we consider the particular things that most philosophers have cited as instances of the good, it becomes quite implausible to hold that all our thinking about value can be cast in this form” (What we Owe, 87). Now this isn’t quite a rejection of Promotion, but I think it’s clear that Scanlon is sympathetic with the idea that some kinds of intrinsic value do not come with reasons to promote (much less maximize) (see also ch 2, section 3). I talked with my colleague, Christine Swanton, a couple of hours ago. If I understood her position in conversation, she would rather not talk about intrinsic value if she can avoid it, but she denies that all the virtues are maximizing ones (see her Virtue Ethics: A Pluralistic View). For the purposes of evaluating the relevant problem of sub-optimality, that amounts to roughly the same thing as rejecting Promotion.
    More broadly, one can affirm Moore’s isolation test for intrinsic value or hold any of the usual characterizations of ‘intrinsic value’ but go on to deny that all intrinsic values come with reasons to maximize those values. I very quickly skimmed Zimmerman’s SEP entry on intrinsic value, and I couldn’t find any suggestion that, by definition, intrinsic value necessarily comes with reasons to maximize. (But maybe I skimmed too quickly.) If something like Promotion were as widely acknowledged, uncontroversial, and central to the concept of intrinsic value as you seem to think, it would be at least a little surprising that it didn’t make an appearance in the SEP entry.

    August 12, 2012 — 22:22
  • “Also, if I had reason to promote knowledge without limit, I would have some reason to open up the phone book and start memorizing random phone numbers. But do you really think that I have some reason (even if defeated) to know trivial facts? I would also have some reason to create new people simply because they would increase the number of times *being known* is instantiated in the world.”
    Yes, I think I have reasons for all of these, but they tend to be defeated (including in the last case the standing defeater that this is not the sort of reason for which reproduction is permissible, because it uses the procreated people as a means).
    Some philosophers distinguish between requiring and justifying reasons. If you have a requiring reason, you should go with it, unless you’ve got defeaters or reasons to the contrary. But a justifying reason makes it rational to do something, but it is not irrational to refrain from it even in the absence of defeaters or contrary reasons. That distinction seems to be relevant here: I could see someone taking your intuitions and defending the view that you don’t have requiring reasons for any of these things, but you do have justifying ones.
    I have no room for this distinction myself: I think, with Aquinas’ first principle of the natural law, that the good is what is to be pursued. The idea of goods that do not give a requiring reason to pursue is really weird to me.
    I would, however, distinguish the thesis that for any intrinsic type of good that can be pursued you have a reason to pursue it from the thesis that you have reason to maximize every intrinsic type of good. To have reason to pursue every available particular instance of knowledge is not the same as to have reason to maximize instances of knowledge. It could in principle be the case that every available particular instance of knowledge is good and hence worth pursuing, but that the maximization is not good or worth pursuing.

    August 13, 2012 — 8:01
  • christian ryan lee

    1. Necessarily, if x is intrinsically good, then there is a reason to promote x.
    2. Necessarily, if x is intrinsically good, then there is a reason to favor x as such.
    Fitting attitudes accounts entail that if x is intrinsically good, then the presence of x is better than its absence. Thus, we should prefer, or favor the presence of x to its absence as such. I suppose it is a further move to claim that, if we have a reason to prefer, then we have a reason to promote. Maybe you could claim that there are intrinsically valuable states that God has reason to favor, but not promote?
    In any event, I’m with Alex above.

    August 13, 2012 — 11:09
  • John Alexander

    Hi Chris
    “You distinguish between two senses of ‘having a reason’ and then you state some implications of the distinction. I’m not sure I understand your point there. Can you please elaborate?”
    I was simply trying to get clear on how you were using the term. If it was used possessively then it follows that if S does not possess a reason to do x then S will not do x and cannot be blamed for not doing x because S has no reason to do x. If it is meant to imply that there is a reason independently of S possessing it that S should possess, then S can be blamed for not doing x. At times you seemed to be using it both ways, but that is probably a misreading on my part.
    I do not think that I am equating the problem of sub-optimal worlds with the problem of evil, although I do think that it is a problem within the problem of evil. Regarding suboptimal worlds, it seems that if W1 and W2 are identical in number of diseases, other than cancer being in W2 and not W1, and other types of evils, then if evil is a necessary condition for certain goods, i.e., compassion, service to others, caring, gratitude, etc., then if these goods can be accomplished in a world with one less evil in it than ours, then that is the world that God should have created. This seems a warranted conclusion in so far as new diseases have entered the world without an increase in the types of goods that exist. There may be more people caring because there are more people suffering, but caring existed prior to AIDS, for example, so no new good has resulted from having a new disease. This is obviously related to the paradox of the heap in so far as, if we can get all the goods that are desired with 100 diseases, why not 99? It seems that we can, so God should have created a world with fewer diseases in it. Hence our world is a suboptimal world and one that should not have been created had the creator been a good being, because God has a reason for not creating it.
    I would also argue that when we eliminate or prevent an evil we do increase the amount of good over the amount of evil. Assume that if there are 100 diseases there are 10 desired goods. These desired goods can be obtained with 99 diseases, so the ratio of good to evil increases when we lower the number of diseases. Another way to look at this is: assume that there is a world with 10000 people, 100 diseases, and one desired good, caring for, that needs diseases to exist if it is to come about. In this world 1000 people have diseases, 10 with each disease. Let us assume that the rest of the people are exhibiting the desired good in their actions towards the diseased people. If we eliminated one disease there would be only 990 people having a disease, but there would now be 9010 people exhibiting the desired good. If a world with as many people as possible exhibiting the desired good is the optimal world, then the world with 100 diseases is suboptimal.

    August 13, 2012 — 11:26
  • Chris Tucker

    Hi Christian,
    Since favoring and promoting are distinct responses, I do think one needs an argument to go from 2 to 1. (Well, this might depend on what you mean by ‘favoring’. But generally speaking, a reason to take a certain kind of attitude toward a state of affairs does not entail a reason to promote it.)

    August 13, 2012 — 17:14
  • Chris Tucker

    Hi Alex,
    It sounds like my intuitions aren’t as widely shared as I was hoping. But I have some quibbles with some things you said. First, there is no problem with using people as means; it’s using them as MERE means that’s problematic. If the value of knowledge gives you reason to create new people and the value of people gives you reason to not treat them as mere means, you end up with reason to create new people in a way that doesn’t treat them as mere means.
    Second, I also think the requiring/merely justifying distinction causes trouble for the problem of sub-optimality, but I wanted to see how far I could go with the claim that some kinds of intrinsic value don’t provide even merely justifying reasons to promote that value to any degree at all. It seems that this claim is too controversial to take me very far, so if I pursue these ideas further, I may have to rely more heavily on the requiring/merely justifying distinction.
    Third, your final paragraph suggests that you might have room for the requiring and merely justifying distinction. Consider the following view: S has requiring reason to promote every intrinsic value V to some threshold (which could be vague), but after that threshold is reached, S either has no reason to promote V further or S only has merely justifying reason to promote V further. While you won’t give the requiring/merely justifying distinction work to do when it comes to the kind of reason one has to promote (or not to promote), you might give it work to do when you consider the kind of reason one has to promote value above a certain threshold.
    Finally, I will just point out that the view of intrinsic value you sketched is incompatible with Promotion, which requires reason to maximize. Since Promotion is required to underwrite the problem of sub-optimal worlds, you can agree with me that the problem of sub-optimal worlds relies on an implausibly simple view concerning the connection between intrinsic value and reasons. Although you and Christian reject the particular view of intrinsic value I put forward, your accounts of intrinsic value also undercut the problem of sub-optimal worlds in some way or another.

    August 13, 2012 — 17:42
  • Chris Tucker

    Hi John,
    On the senses of having a reason: You weren’t misreading me. That’s an issue that Christian also raised (see the second paragraph of my reply on Aug 12, 4:53pm). I basically ignored the distinction because it doesn’t matter in the case of God (or if it does, it only makes the trouble for the problem of sub-optimal worlds worse).
    As far as your disease cases go, I’m not sure I disagree with your assessment. But I don’t think that the proponent of the prob of sub-optimal worlds can rely too heavily on the intuitions in your cases. One might hold that those intuitions are best explained by God’s having reason to prevent evil rather than God’s having reason to promote good. Consequently, someone who rejects Promotion could agree with your judgment about the disease cases but then deny that God has reasons to maximize our welfare. Our reasons to prevent the bad needn’t reduce to our reasons to promote the good.

    August 13, 2012 — 18:02
  • christian ryan lee

    hey chris.
    in the response above i must have accidentally deleted some material. no bother.
    yeah, you’re right. i want to say that all intrinsic value is such that, on contemplating that which has it, merits favor. this includes knowledge and virtue and pleasure. we should give a thumbs up to these things, so to speak.
    i can think of no reason to think that, when it comes to promotion, these goods should be treated differently. it is the property being intrinsically good that generates the reason to promote, and they all share it. but i DO think that different kinds of goods merit different kinds of responses. so this second inference, the inference from *merits favoring* to *merits promotion* needs to be defended. i need to think more about the right way to do that.

    August 14, 2012 — 15:53
  • Chris:
    Actually, the view I sketched may be compatible with Promotion.
    Promotion says: “For every domain of intrinsic value D and subject S, S has reason to maximize D, i.e., for every additional degree of D that could be attained, S has reason to attain that additional degree of value.”
    Now, on the view I sketched, I deny “S has reason to maximize D” but I am happy to affirm “for every additional degree of D that could be attained, S has reason to attain that additional degree of value.” In particular, I deny your gloss on “maximize”. But the part I affirm is enough to generate the problem of sub-optimal worlds (which I handle through widespread incommensurability; it’s not that hard for a world to be optimal, i.e., such that there is none that is better simpliciter).
    I have a really hard time seeing how there could be a good that I see I can promote but that I have no reason at all to promote. Wouldn’t it be a failure of my love for you not to take the fact that an action would bestow a good on you to be a reason in favor of that action? Sorry, this is just argument by rhetorical question.
    I have to admit that I am a reductionist about reasons. I think an objective reason in favor of an action is nothing but the fact that the action promotes a good or hampers a bad, and a subjective reason in favor of an action is nothing but the taking of the action to promote a good or hamper a bad. (But unlike the utilitarian, I don’t reduce the decisiveness of reasons to maximization of the good.) (There is some trickiness here about the description under which the “a good” and the “a bad” are to be read, and I am not completely happy with what I want to say there, but that’s not apposite here.)

    August 15, 2012 — 21:08
  • Sorry for sluggish replies. I didn’t realize there was a new comment to respond to.
    Christian, here’s something to think about: I might favor P without favoring every way of P’s being actualized. For example, I might have a pro-attitude toward winning the greatest philosopher ever prize but have a con-attitude toward the only way I will win that prize: bribing the judges.
    Alex, yes, what you affirm is enough to generate the problem of sub-optimal worlds. I misunderstood what you were rejecting. Yes, I get the intuition you want me to have about promoting the welfare of other people. I do think something like Promotion is very plausible with respect to the welfare of people. It’s just other plausible candidates for intrinsic value that I think don’t fit Promotion.
    Also, it’s not obvious that your view is awkward. It might be that all intrinsic values have something in common, namely that something like Promotion applies to them. But then some intrinsic values come with more “oomph” than others.

    August 18, 2012 — 1:59