Scepticism about overall values of consequences
January 8, 2011 — 14:22

Author: Alexander Pruss  Category: Problem of Evil  Comments: 3

Consider two theses.
ST: Sceptical Theism.
Chaos: The world is deeply chaotic and the human race is likely to have a long future.
I stipulate that both ST and Chaos are to be understood in such a way that they imply Further Value Scepticism:
FVS: For no ordinary sort of event E (I am excluding here events like the annihilation of the human race) do we have any reason to say that when we consider E’s further non-obvious consequences and aspects, including the long-term ones, the overall value of these further consequences and aspects will be negative or positive.
So, now, we have the question whether FVS implies some sort of moral paralysis. Weak moral paralysis is the thesis that for no positive action A is it the case that one ought to do A. Strong moral paralysis is the thesis that for every positive action A, one ought not do A.
Consider two asymmetry theses. First, Doing-Refraining Asymmetry:
DRA: It is significantly worse to be the cause of an evil by means of a doing than by means of a refraining, even when one is the cause unknowingly and unintentionally.
Second, Good-Bad Outcome Asymmetry:
GBOA: The disvalue of being the cause of a great evil is significantly greater than the value of being the cause of a comparable great good.
I will now offer a handwaving argument for the following two theses:
(*) If both DRA and GBOA are true, then strong moral paralysis follows from FVS.
(**) If at least one of DRA and GBOA is false, then neither kind of moral paralysis follows from FVS.


Here’s the argument for (**).
Suppose FVS, and suppose either DRA or GBOA is false (or both). I will show that moral paralysis does not follow. Suppose now we have an opportunity to prevent some moderately sized evil E by a positive action A, and we have no reason to think there are any reasons not to prevent E, besides the reasons coming from FVS. Then, given FVS, we should reason thusly:
– If I do A, this will prevent E, and additionally I will cause stuff whose evaluation is beyond my ken.
– If I don’t do A, then E will occur, and additionally I will cause stuff whose evaluation is beyond my ken.
First, suppose DRA is false. Then we have a symmetry between causing by doing and causing by refraining: there is no significant moral difference between causing bad stuff (and presumably, by the same token, good stuff) beyond one’s ken by doing rather than by refraining. Thus, in my evaluation of the prospects for doing A, the “I will cause stuff whose evaluation is beyond my ken” parts cancel out, and I should do A. Hence, if DRA is false, then even if FVS is true, I should do A, and neither weak nor strong moral paralysis is true.
Second, suppose GBOA is false, so goods and evils are on par in my evaluations. Then we have a symmetry between causing goods and causing evils. Now, among the “stuff whose evaluation is beyond my ken”, there is potentially good stuff and evil stuff. I have no reason to think the beyond-my-ken stuff is biased towards evil rather than good. So given the symmetry between goods and evils, the potential goods beyond my ken and potential evils beyond my ken should cancel out in my consideration, and I should prevent E. Thus, neither weak nor strong moral paralysis is true.
And here’s the argument for (*).
Suppose FVS and both DRA and GBOA. Then for any positive action A, for all I know, A’s further consequences include great evils. Of course, they may include great goods as well, and the evils and goods are on par, but by GBOA, in my practical deliberations, I should worry about the evils a lot more than about the goods. This gives me reason to refrain from A. But what about refraining from A? That, too, for all I know, may have great evils and great goods in its further consequences. So perhaps I also have reason to refrain from refraining to do A. But by DRA the reason to refrain from doing A beats out the reason to refrain from refraining to do A, since it is significantly worse to be the cause of great evils than to merely permit them by one’s inaction. Therefore, if DRA and GBOA, both kinds of moral paralysis follow from FVS.
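The two arguments can be made concrete with a toy numerical model. This is my own illustration, not anything in the post: the weight k (for GBOA: evils count k times as much as goods in deliberation), the weight d (for DRA: outcomes one brings about by a doing count d times as much as those one permits by refraining, which I here apply to goods and evils alike), and the ±10 unknown consequences are all arbitrary choices.

```python
# Toy sketch of the arguments for (*) and (**); my construction, not Pruss's.
# k >= 1 models GBOA (evils weighted k times as much as goods in deliberation);
# d >= 1 models DRA (doing-caused outcomes weighted d times as much as
# refraining-caused ones, applied here to goods and evils alike).

def score(v, by_doing, k, d):
    w = v if v >= 0 else k * v          # GBOA: weight evils more heavily
    return d * w if by_doing else w     # DRA: doings weigh more than refrainings

def deliberative_value(action, k, d):
    """A prevents a moderate evil E; either option also causes further
    stuff beyond my ken, modeled as +10 or -10 with probability 1/2 each."""
    E, unknown = -1, [+10, -10]
    if action == "do":   # E is prevented; the unknowns are caused by my doing
        return sum(score(v, True, k, d) for v in unknown) / 2
    else:                # E occurs via my refraining, as do the unknowns
        return score(E, False, k, d) + sum(score(v, False, k, d) for v in unknown) / 2

for k, d, label in [(3, 2, "DRA and GBOA"), (3, 1, "GBOA only"), (1, 2, "DRA only")]:
    do, ref = deliberative_value("do", k, d), deliberative_value("refrain", k, d)
    verdict = "refrain (paralysis)" if ref > do else "do A"
    print(f"{label}: do A = {do:+.0f}, refrain = {ref:+.0f} -> {verdict}")
```

With both asymmetries in force, refraining beats doing A even though A prevents E; set either weight back to 1 and doing A wins, as (**) predicts. (In the DRA-only case the cancellation works because I let the doing-weight amplify goods and evils alike; on a reading where DRA touches only evils, that case would need separate treatment.)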
So, the STist or Chaos theorist who wants to resist moral paralysis needs to argue against the conjunction of DRA and GBOA. At the moment, I am inclined to agree with GBOA, but I am not sure about DRA. I have some intuitions in favor of it, but I worry that they may come from the case of intentional or foreseen outcomes.

Comments:
  • Mike Almeida

…in my evaluation of the prospects for doing A, the “I will cause stuff whose evaluation is beyond my ken” parts cancel out
These ‘cancel out’ only if the consequences of doing A are roughly the same as the consequences of refraining from doing A, right? Suppose DRA is false. It follows that it does not matter morally whether I bring about some bad event S by causing S or by refraining from preventing S. But moral paralysis still follows from your other assumptions. Here’s your case.
    -If I do A, this will prevent E, and additionally I will cause stuff [S] whose evaluation is beyond my ken.
    -If I don’t do A, then E will occur, and additionally I will cause stuff [S’] whose evaluation is beyond my ken. (my qualifications added)
But you do not know the relative value/disvalue of S and S’. You’re not sure what the values of S and S’ are and, presumably, you’re not sure of the probability of their occurring. So how do you avoid moral paralysis?

    January 9, 2011 — 10:10
  • Ted Poston

Hi Alex, You might want to restate the principles using ‘a cause’ rather than ‘the cause’. I think some of the plausibility of the principles rests on the stronger causal claim, whereas the sorts of cases that arise for ST and Chaos involve the weaker causal claim, and the revised, weaker principles don’t seem to have much plausibility.

    January 10, 2011 — 8:22
  • Alexander Pruss

    Ted:
    Yes. Good point.
    Mike:
“But you do not know the relative value/disvalue of S and S’. You’re not sure what the values of S and S’ are and, presumably, you’re not sure of the probability of their occurring. So how do you avoid moral paralysis?”
    Maybe you should think: If all I was choosing between was S and S’, I would have no reason to choose one over the other. I have no reason to choose S over S’ and no reason to choose S’ over S. But I have reason to choose non-E over E. So, overall, I have reason to choose (non-E + S) over (E + S’).
    Here’s a combination principle that would yield this:
    C1. If I have reason to choose A over B, and I have no reason to choose D over C, then I have reason to choose A+C over B+D.
Now, maybe C1 isn’t exactly right. There might be cases where I have good reason to think that A+C is worth less than the sum of its parts, or that B+D is worth more than the sum of its parts. But at least C1 seems to be a pretty good defeasible principle.
    Compare this case. Sam and Sarah are each offering to give me a large sum of money if and only if I come to his or her respective house tomorrow between 1 and 2 pm. They don’t tell me how much money it is, but they tell me that it’s at least fifty thousand dollars, and probably quite a bit more. Sam is in Dallas and Sarah is in San Antonio. I know nothing else about them, except that they are honest, eccentric but not particularly vicious.
I will save about $30 in gas costs and about four hours of my time if I visit Sam, since I am significantly closer to Dallas than San Antonio. (Let’s suppose for simplicity that I shouldn’t count the added danger to life from being on the highway an extra couple of hours. Perhaps the dangers of Dallas traffic compensate for the shorter distance. And let’s suppose that I leave out of the consideration the fact that if I visit San Antonio, I can drop in on you, while I don’t know any of the philosophers in Dallas. Maybe you’re out of town.)
    Question: What should I do? Visit Sam, Sarah or neither?
“Neither” isn’t an option. $50K+ is by far worth the drive either to San Antonio or Dallas.
    Answer: Visit Sam. The benefit of visiting Sam is -$20 -4hrs + $X1. The benefit of visiting Sarah is -$50 – 8hrs + $X2. I have no reason to choose $X1 over $X2 or $X2 over $X1, but I certainly do have reason to choose -$20-4hrs over -$50-8hrs. And this is true even if I have no probabilistic estimates of $X1 and $X2.
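The dominance reasoning here can be sketched in a few lines. The $15/hour value of time below is a made-up number, used only to put hours and dollars on one scale; the point is that the unknown payoff drops out of the comparison entirely.

```python
# Sketch of the Sam/Sarah comparison (my illustration; the $15/hr value of
# time is an arbitrary assumption used only to combine dollars and hours).

HOURLY = 15  # assumed dollar value of an hour of my time

def net(payoff, gas, hours):
    return payoff - gas - hours * HOURLY

# Whatever the unknown payoffs X1 and X2 turn out to be, for any common
# candidate value x the Sam option comes out ahead by a fixed margin:
for x in (50_000, 75_000, 1_000_000):
    margin = net(x, 20, 4) - net(x, 50, 8)
    assert margin == 90  # constant: the unknown payoff cancels out

print(f"Sam beats Sarah by ${net(50_000, 20, 4) - net(50_000, 50, 8)}")
```

Since I have no reason to rank X1 against X2, the constant edge in the known travel costs decides the matter, just as C1 says, and without any probabilistic estimate of X1 or X2.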
    Here’s an interesting variant of the above story. Suppose I get one other piece of information: the amounts of money that Sam and Sarah are offering me differ by at least ten thousand dollars. Now, since the difference in travel costs is worth significantly less than ten thousand dollars, the differences in travel costs will be swamped by the differences in amount of money received. If I accept some principle of indifference, I should assign probability 1/2 to the claim that I’ll do better in visiting Sam and probability 1/2 to the claim that I’ll do better in visiting Sarah. But I should still visit Sam.
    Now up the stakes. Suppose both Sam and Sarah are offering infinite payoffs. I think that I should still visit Sam rather than Sarah. (Compare this fact: Suppose I am quite certain of my salvation, and thus know that whatever happens, I will receive an infinite payoff. I still have reason to avoid minor pains.)

    January 10, 2011 — 14:57