Skeptical Theism and Morality
January 3, 2011 — 14:18

Author: Ted Poston  Category: Problem of Evil  Tags:   Comments: 77

Almeida & Oppy (2003) argue that if the considerations deployed by skeptical theists are sufficient to undermine the evidential argument from evil, then those considerations are also sufficient to undermine inferences that play a crucial role in ordinary moral reasoning. They consider some specific apparent evil that one could easily prevent, and then they reason:

“Plainly, we should also concede by parity of reason that, merely on the basis of our acceptance of ST1-ST3, we should insist that it is not unlikely that there is some good which, if we were smarter and better equipped, we could recognize as a reason for our not intervening to stop the event. That is, our previous concession surely forces us to allow that, given our acceptance of ST1-ST3, it is not unlikely that it is for the best, all things considered, if we do not intervene. But, if we could easily intervene to stop the heinous crime, then it would be appalling for us to allow this consideration to stop us from intervening. Yet, if we take the thought seriously, how can we also maintain that we are morally required to intervene? After all, as a result of our acceptance of ST1-ST3, we are allegedly committed to the claim that it is not unlikely that it would be for the best, all things considered, if we did not do so.” (506)

I don’t think this is right. Consider the following analogy:
Suppose Sam is the president of Acme Anvil Company. Sam discovers some systemic abuse is occurring in his company (anvils falling from the sky…) and he has the power to stop it. Yet Sam doesn’t stop it because he wants to see how his mid-level managers respond once they discover it. The mid-level managers discover the abuse and then reason: “well, Sam knows about this and he’s doing nothing. So there’s probably a reason he has that justifies his not preventing this. So we have a reason not to prevent this.”
Intuitively, this is bad reasoning on the part of the mid-level managers. They should prevent the abuse even though they know that Sam knows about it and that he has the power to prevent it. Whatever Sam’s reasons are, they don’t carry over to reasons for the mid-level managers.

Comments:
  • Mike Almeida

    Ted,
    This is an interesting case. I agree that the reasoning in your case is bad. But I think our case is much stronger than this, since we are talking about a “president” (viz. a perfect being) who fails to prevent evils for reasons that we cannot so much as imagine. The mid-level managers can imagine a reason: they can imagine that their initiative is being tested, or that the boss is slacking off, or that he is deeply evil, and so on. There are lots of imaginable reasons. But this is unlike the case in which God is letting sentient animals suffer horribly in utter solitude–unknown to anyone else, unpreventable by anyone else, not even an occasion for compassion or the exercise of other virtues, since there is literally no one else there besides the suffering creature and God (please don’t tell me that they’re probably not really suffering much or I’ll have to break my monitor or something). If there is some unimaginable good that comes from this, I of course haven’t a clue what it might be. But, if you insist that there probably is such a good anyway, then just such a good is, for all we limited folks know, obtainable in lots of other cases of serious moral decision. The skeptical theist certainly cannot say he knows it isn’t so obtainable. The reasoning is perfectly symmetrical. When I’m helping others in distress, then, for all I know, I’m preventing this spectacular, albeit completely unimaginable, good from obtaining. There is nothing answering to this possibility in the case of a fallible president who might have any number of imaginable reasons (including just forgetting to prevent the anvils from falling, or being indifferent to the harm that might result, or testing initiative, and so on and on).

    January 3, 2011 — 15:01
  • Should we fault Sam’s middle managers if they believe that Sam is an all-knowing, perfectly good, perfectly free, all-powerful being who has the power to instantly, effortlessly rectify any situation of which he disapproves? It seems to me that they would be positively remiss to try to solve any problem for themselves, because any state of affairs of which Sam does not approve could either be immediately rectified or, if you prefer, would never have arisen, since Sam would have organized things from the beginning not to permit any state of affairs of which he would not approve.

    January 3, 2011 — 16:28
  • Clayton Littlejohn

    I don’t want this to be totally off topic, but it does seem to me that there’s a reply here available to the skeptical theist who wants to say that our ignorance of what God should do doesn’t subvert our ignorance of what we should do. The move rests on a principle that I don’t accept (and reject most days of the week), but it’s a principle that many others accept and given that these people are already committed to bad views, why not put them to use once more.
    At any rate, let’s consider two views about obligation–what you ought to do is what’s objectively best (objectivism) vs. what you ought to do is what’s prospectively best (i.e., what maximizes expected value, prospectivism). The key here is that what’s prospectively best for A to do depends upon what’s valuable and what A’s evidence is, not some other subject’s evidence.
    (i) For God, the objectively best = the prospectively best. (God’s evidence consists of all the facts)
    (ii) For mortals, ~(the objectively best = the prospectively best). (Our evidence doesn’t consist of all the facts)
    What anyone (God or mortal) ought to do is what’s prospectively best and that depends upon what’s objectively valuable and what that agent’s evidence is. Since we know that prospectivism is true and know (i) and (ii), even if we know we can’t know what God ought to do, it doesn’t undermine our knowledge of what we ought to do. That’s a matter of maximizing utility given our evidence, not God’s. Sure, lots of unfortunate things happen when we maximize expected value, but that’s not to say that we failed to do what we ought to do. This is the lesson of the Regan-Jackson-Parfit mineshaft case.
    Anyway, I’m curious to know if there’s something wrong with this move apart from the appeal to prospectivism.
    [Or, something like that. Is the problem with this the prospectivist view of all things considered “ought”? Again, I think that’s a problematic view, but it’s one that’s available and currently quite popular. In some circles, I think it’s the one that everyone takes to be obviously true. One problem with it, you might think, is that the prospectivist has to say that God watching us mortals can’t say that when we do what is prospectively best but not objectively best that we “ought” to have done things differently, but we can toss in some relativism about deontic modals and take care of that, too.]

    January 3, 2011 — 16:55
  • Clayton Littlejohn

    “but it does seem to me that there’s a reply here available to the skeptical theist who wants to say that our ignorance of what God should do doesn’t subvert our ignorance of what we should do.”
    Of course, I meant to say that there’s a reply available to the skeptical theist who wants to say that our ignorance of what God should do doesn’t subvert our _knowledge_ of what we should do. (Freudian slip. Prospectivists don’t know what they ought to do.)

    January 3, 2011 — 16:57
  • John Alexander

    Ted
    We can modify your case a bit and argue as follows: Sam knows that evil is occurring which he could stop, but he does not do so because he wants to see what his mid-level managers do. Suppose one of the mid-level managers is killed, and this is how the other mid-level managers find out about the evil. They stop the evil, but they would have stopped it had no one been killed. The mid-level managers reason that Sam knew about the evil and chose not to end it. They reason, correctly, that one possible reason he did not end it was to see what his mid-level managers would do. Is Sam still justified in not stopping the evil, or can the mid-level managers infer that Sam is not a good president because he could have saved the person killed, but chose not to?

    January 3, 2011 — 17:00
  • Mike Almeida

    (i) For God, the objectively best = the prospectively best. (God’s evidence consists of all the facts)
    (ii) For mortals, ~(the objectively best = the prospectively best). (Our evidence doesn’t consist of all the facts)

    Suppose the skeptical theist is right that all of the evils that we think, given our limited knowledge, should be prevented (but cannot prevent) are really such that a greater good comes from them. If that is so, then we are being told that we cannot rely on our intuitions about what is prospectively best. What we learn from the skeptical theist is that our intuitions about what is prospectively best are very bad evidence for what is objectively best. That’s what generates moral paralysis. To suggest that such intuitions are good in just those cases we happen to be in a position to prevent but bad in all of the other cases–fawns dying, children being terribly abused, mass murders, genocide, infants being microwaved, and on and on interminably–is seriously question begging.

    January 3, 2011 — 19:29
  • Clayton Littlejohn

    “Suppose the skeptical theist is right that all of the evils that we think, given our limited knowledge, should be prevented (but cannot prevent), are really such that a greater good comes from them. If that is so, then we are being told that we cannot rely on our intuitions about what is prospectively best.”
    I take it that the prospectivist skeptical theist (PST) would say that what’s objectively best doesn’t matter to obligation, only what’s prospectively best (i.e., maximizes expected moral value). The one exception is the case of omniscience. I’m not sure that the problem with the PST is paralysis, exactly, but liberation. So, suppose the PST starts out believing lots of parts of commonsense morality but then sees a lot of apparent violations of it by God (e.g., believing that anyone who could save the fawn would if they were good but then revising this judgment). I don’t know if the PST ought to think that (a) her list of reasons is too short, (b) too long, or (c) that the facts are too complicated to make correct judgments about what others ought to have done, but as her confidence in her list of reasons wanes along with her confidence in the thought that she’s able to work out the relevant consequences, it will be harder for her to be on the hook for doing what morality requires. Hence, the liberation. For her to still be obliged to, say, intervene to save a small child, her saving the small child would have to come out first on her ranking of what’s prospectively best, and as she loses her grip on what reasons bear on whether to act, there will be less and less available to her that would mandate that ranking. So, it might be that the problem with PST is not so much that they’ll be paralyzed trying to figure out where their duty lies but that we’ll be powerless to explain to them that, given their epistemic situation, they were obliged to do the things we know that they were obliged to do.
    The short version: paralysis only happens when you don’t know where your obligation lies. On PST, ignorance subverts obligation, so an obligation is never hard to find; the fact that it’s hard to find (so to speak) means that that’s not where your obligation is. So PST “liberates” the agent from moral bonds we know she’s under.

    January 3, 2011 — 20:02
  • David Warwick

    Mike,
    “What we learn from the skeptical theist is that our intuitions about what is prospectively best are very bad evidence for what is objectively best. That’s what generates moral paralysis.”
    Worse than that, I think, it leads to human amorality. If we work from the assumption that everything is already happening for the best, then why would we intervene if we saw, say, a child opening up a box of rat poison it thought was candy? Why not, for that matter, poison the child yourself? If it happens, it’s the divine will. Many of the people who claim divine revelation, of course, are serial killers who believe they are doing God’s work. If we don’t know God’s motives, can we assert that they are wrong?
    Clayton,
    “Paralysis only happens when you don’t know where your obligation lies.”
    There are many cases, every year, of religious parents refusing their children medical care because they are praying instead. Given theism, and based on instances from the Bible, they are doing what God demands of them, and they accept the possibility that God’s will is that their child dies. I’m happy to say that what they are doing is wrong, and in practice the vast majority of theists would agree (and, more to the point, not act the same way).
    But this ‘pragmatic theism’ – broadly ‘trust in God, but look both ways before you cross the road’ – is functionally identical to pragmatic atheism. People consistently put an obligation to practical life before God. If that’s the divine will, there is the awkward fact that virtually all the holy men and scriptures throughout history have said it isn’t.

    January 4, 2011 — 6:38
  • Ted Poston

    Clayton, Thanks for bringing prospectivism into the discussion. Prospectivism provides a good explanation of the intuition I’m after in the analogy. The obligations the mid-level managers have aren’t sensitive to Sam’s reasons (as such). I take it that the core prospectivist claim is that what a subject ought to do is a function of the subject’s knowledge. A subject’s ignorance of another’s reasons doesn’t affect what they ought to do.
    Mike & Clayton, On prospective skeptical theism (PST), can’t you get the result that a subject’s intuitions about what they *ought* to do are perfectly reliable? Given our knowledge of pro tanto reasons, we ought to prevent easily preventable (apparent) evils. By PST’s lights, what’s problematic is a claim about what God ought to do.
    Fwiw: We have two bodies of claims: claims about rights, permissions, and obligations, and then claims about goods, evils, and the best. On prospectivism, the deontic group of claims is grounded in a subject’s knowledge of the axiological group. On PST, our ignorance about the limits of the axiological group just doesn’t affect our knowledge of the deontic group (as concerns us). But to get the evidential argument from evil working, one needs either a claim about a perfect being’s obligations or a claim about the limits of the axiological group.

    January 4, 2011 — 8:51
  • Luke Gelinas

    Why wouldn’t the belief that God may be working to accomplish some very great good by means of a given preventable evil figure in the body of evidence on which the subject’s prospective ought is based?

    January 4, 2011 — 10:14
  • David Warwick

    “But to get the evidential argument from evil working one needs either a claim either about a perfect being’s obligations or a claim about the limits of the axiological group.”
    A common theistic claim is that there are absolute goods, that ‘moral relativism’ is a bad thing. While we all have different obligations, we are, if you like, all trading the same currency.
    How can it hold, then, that there’s a moral code for us, and a separate one for God? Clearly, as his subjects, we have obligations he doesn’t (to pray to him, to honor him, and so on). And there are things we can do that he can’t – honor our father and mother, at the very least.
    To use an example from today’s headlines:
    http://news.morayfirthlive.com/2011/01/03/999-tea-break-ambulance-driver-keeps-job/
    An ambulance driver didn’t treat a woman 800 yards away because he was on his break. She died.
    We can nitpick this example all we like, and clearly it’s important for emergency workers to take breaks and perhaps she would have died anyway blah blah.
    The basic moral point remains. Someone with the power to intervene failed to, someone died.
    1. Given it was wrong for the ambulance driver not to respond.
    2. Given a God with the ability to intervene.
    3. Given absolute moral truth.
    … how is God not wrong for also failing to intervene?

    January 4, 2011 — 10:36
  • Clayton Littlejohn

    Hey Ted,
    Quick post, I’m polishing off syllabi today. (I have a sneaking feeling if I post too much here and can’t get my syllabi in, my chair isn’t going to buy my excuses about being too busy to get them in on time.)
    Some prospectivists (e.g., Zimmerman) will caution against reading too much into this talk of what’s best. I think his modus operandi is to introduce the notion of what’s deontically best in various senses (subjective, objective, prospective) and argue that “ought” goes with one of these notions. Foot similarly thinks that talk of what’s best as a guide to doing what you ought to do shouldn’t be taken to indicate any commitment to consequentialism or the thought that the good is prior to the right. (She thinks we’re tricked into consequentialism by failing to see that we use what we have to figure out the right in order to figure out what’s best.)
    I actually worry about PST because it makes us too reliable; it fails to do justice to the fact that morality is quite difficult. There’s a similar complaint about subjectivist views of moral obligation on which what A ought to do = what A believes A ought to do. That was my “liberation” worry–we who know “better” know that the ignorant might follow their evidence and fail to do what they ought. I tend to think that ignorance excuses rather than subverts/erases obligation, something that’s hard to make sense of given the commitments of PST.

    January 4, 2011 — 10:47
  • Ted Poston

    I need to get a syllabus in as well! I was thinking that the liberation worry could be handled by making ought a function of knowledge. It’s hard to figure out what you know; ergo it’s hard to figure out what you should do. I don’t like the subjectivist move. I was also thinking that prospectivism could just say there’s one ‘ought’ and it’s a function of what you know. So you get the nice result that ignorance of a perfect being’s reasons doesn’t subvert knowledge of what you ought to do.

    January 4, 2011 — 11:25
  • Ted Poston

    That’s a possibility, but how does it subvert the subject’s obligation to act? The thought is that what a subject should do is a function of what they know. They don’t know of any specific pro tanto reason not to intervene and they know quite a few pro tanto reasons to intervene. They might think for all we know a perfect being will accomplish a great unknown good by allowing this evil. But to the extent this is a possibility so is this one: for all we know a perfect being will accomplish a great unknown good by our effort to prevent this evil. The two possibilities wash each other out.

    January 4, 2011 — 11:33
  • Luke Gelinas

    What if the evidential arguer re-raises the other premise of the argument at this point? (I take it ST is supposed to be a response to P1 of Rowe’s original argument.)
    Either skeptical theists accept P2, that pointless evil and the existence of God are inconsistent, or they don’t. If they do, then they are committed to thinking that, if an evil occurs, it is not all told for the worse. That seems capable of undermining the prospective ought. I know that if I don’t prevent this evil, and the evil actually materializes, it doesn’t make the world any worse, and may very well make it better.
    If the mere opportunity to prevent the evil is supposed to be the justifying good, that seems false for many serious evils. Alternatively, STs can claim that we have a moral duty to prevent harm, even when preventing harm will never be more optimific than permitting it. This is a cost for ST–its defenders are now saddled with controversial views about the correct theory of act-evaluation.

    January 4, 2011 — 12:55
  • Mike Almeida

    what you ought to do is what’s prospectively best (i.e., what maximizes expected value, prospectivism).
    So, I take it that the prospectivist would have to know what maximizes expected value, if he is to know what he ought to do. But you add,
    I take it that the prospectivist skeptical theist (PST) would say that what’s objectively best doesn’t matter to obligation, only what’s prospectively best (i.e., maximizes expected moral value).
    Take one example, to keep it simple: saving the fawn. Why would I think the expected value of saving the fawn in cases where I am able to do so is any different from the expected value of saving the fawn when God is able to do so? So I ought to prevent it iff God ought to prevent it. Of course, I might systematically have the expected value of saving the fawn completely wrong in precisely those cases where God does not save the fawn. In precisely those cases, my estimate of the expected value of saving the fawn is wildly off, since there is this great good produced by unhelpful spectators that I’ve altogether missed. But I have absolutely no reason to believe that my assessment of the expected utility of being an unhelpful spectator is radically wrong in just the cases where the skeptical theist needs them to be wrong: i.e., in just those cases where God is the only one who might help, and he doesn’t.
    So it looks to me like I’d be torn in deliberation. I’m pretty confident that preventing something terrible is obligatory where I can do so. But, for all I can tell (I cannot even put a probability on it), I might be doing something radically wrong in helping out. If I could put an exact value and probability on the alleged good, I could calculate the expected utility. But all of this is beyond my ken.

    January 4, 2011 — 13:13
  • David Warwick

    ‘But to the extent this is a possibility so is this one: for all we know a perfect being will accomplish a great unknown good by our effort to prevent this evil.’
    … and it would mean that we’re not meant to make moral decisions based on our judgment or the facts of the case, our job is somehow to second-guess what an inscrutable God would like us to do, which is often counterintuitive.
    It makes a nonsense of the whole concept.
    As for imperfect knowledge: if we save a child from starving, and that child grows up to be Hitler, well gosh, I guess we facilitated more evil than good. But there are a lot more starving children than Hitlers. The odds are you’re saving someone who’ll grow up to be a decent human being. There have been precisely as many Beethovens as Hitlers. I may have saved Beethoven. If God alone knew the child I saved was inevitably going to grow up to be Hitler, and had a problem with my feeding him as a starving child … well, I’d have a dozen problems with that God, and a much easier time justifying my actions than His.

    January 4, 2011 — 13:26
  • Ted Poston

    Mike, Just a quick comment. The mineshaft case undermines the thought that I ought to prevent x iff God ought to prevent x. In the case I don’t know which of the two mines the 10 miners are in. I can sandbag one mine completely, saving all 10 if the 10 are in that mine. Or I can distribute the sandbags evenly, saving 9 regardless of where the miners are but definitely losing one. I ought to do the act that saves the 9. (We can set up the case in terms of preventions. Suppose someone in the same epistemic position is sandbagging mine A. I can easily prevent this. I should prevent this.)
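    To make the arithmetic explicit (a schematic gloss on the case, taking the value of an act to be the number of miners saved, with the probabilities from the case description):

    $$EU(\text{sandbag one mine}) = 0.5 \times 10 + 0.5 \times 0 = 5, \qquad EU(\text{distribute evenly}) = 1 \times 9 = 9.$$

    So distributing is prospectively best even though it is certainly not objectively best–the objectively best act is sandbagging whichever mine the miners are actually in.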
    What goes for the mineshaft case seems to go for the fawn case too. All the skeptical theist needs to do here is show how we are obligated to prevent the apparent evil while maintaining a skepticism about what God should do.
    BTW, I hope Clayton’s department chair takes blogging as a form of service to the profession! 🙂

    January 4, 2011 — 15:46
  • Mike Almeida

    What goes for the mineshaft case seems to go for the fawn case too
    Not that I can see. Look, I’m obviously not denying that God (or any other person, for that matter) might be in a different (better) epistemic position from me relative to some moral decision. Clearly that can and does happen. My worry does not depend on denying that. The problem (I might be repeating myself) is the claim that in all and only those cases C such that (i) God can prevent an evil E in C and I cannot and (ii) God does not prevent E in C, it is true that (iii) there is some greater good G such that G is unobtainable unless God permits E in C. That claim is absurdly ad hoc, and not available to the skeptical theist. How does he happen to KNOW that when I’m in cases like C, there is no such greater good out there? How does he suddenly KNOW the probability that there is such a good and its relation to the evils that I allow or don’t? The skeptical position is that this is exactly what we do not know. We don’t know the kinds of goods there are or their relations to the evils that exist. He cannot start claiming to know that there are no such goods when it is convenient for his argument.

    January 4, 2011 — 17:28
  • DL

    But there are two things going on: natural evils and moral evils. If I rob you and a millionaire comes along and gives you the same amount of money, that doesn’t mean I can no longer be arrested or imprisoned for my crime. If I commit some sin — say, selling you a defective anvil — God can repair any natural evil that may befall you as a result, but that does not undo the sin on my soul. So the skeptical theist might never have a compelling reason to avoid suffering any particular evil, but he always has sufficient reason to avoid committing evil.

    January 5, 2011 — 8:18
  • Ted Poston

    I agree with what you say here, but I don’t see how it undermines the prospective skeptical theist position. The original claim that I focused on was the claim that if ST1-ST3 are robust enough to undermine the evidential argument from evil then they are robust enough to undermine some ordinary moral reasoning (e.g., reasoning about our obligations). The analogy I gave showed that skepticism about another agent’s reasons for permitting some evil (reasons that may very well justify him in allowing the evil) doesn’t show that a person who is aware of this situation fails to have an obligation to prevent it. I think the prospectivist explanation is helpful just to the extent that it puts some flesh on the proposal that an agent’s obligations are a function of what they know. The prospective skeptical theist thus has the resources to reply to the original claim. ST1-ST3 are robust enough to undermine the evidential argument from evil (see my earlier post) and this skepticism doesn’t undermine our obligation to prevent easily preventable (apparent) evils. Kosh?

    January 5, 2011 — 9:27
  • Ted:
    You’re right that x’s having reason not to prevent an evil doesn’t imply that y doesn’t need to prevent that evil. One kind of case where this is really clear is where x’s reason not to prevent the evil involves the value of leaving the evil for y to prevent.
    That said, I think the sceptical theist is committed not just to the claim that we don’t know what reasons God might have, but also to the claim that we don’t know what values the evil might or might not promote. So, how about this way of running the Almeida-Oppy line?
    1. You only have a non-special obligation to prevent an evil E if you have on balance reason to think that it would be no worse if E were prevented than if it were not prevented.
    2. If sceptical theism is true, then for no evil E do you have on balance reason to think that it would be no worse if E were prevented than if it were not prevented.
    3. Therefore, if sceptical theism is true, you have no non-special obligation to prevent any evil.
    In talking of a non-special obligation, I mean to rule out special obligations arising out of relationships and promises. Thus, I might have an obligation to prevent an evil even though on balance it would be better if the evil happened, because I have a special duty to prevent that kind of evil or to protect the particular potential victim of the evil.
    I think your example leaves my version of the argument intact, no?
    (For the record, I think 1 and 2 are both dubious.)

    January 5, 2011 — 10:46
  • Ted Poston

    Alex, the prospective skeptical theist can say this: when I’m considering whether or not to prevent an easily preventable evil I can recognize that it would be worse if I don’t try to prevent it because I’d be knowingly failing an obligation. I think what’s nice about PST is that the skepticism about what values evil might or might not promote just doesn’t affect what obligations I have.
    There’s another aspect to this dialectic that I want to flag. We’ve talked a bit about an evil promoting a greater good. That strikes me as assuming something the skeptical theist might flag. At the very least, I’d want to leave it an open possibility that the evil itself serves no greater good but is nonetheless one that God is justified in permitting because of some other things we don’t know about. Van Inwagen thinks that the expanded free will defense shows that for all we know God is justified in permitting some horrendous evils. But there are some particularly horrendous evils that serve no greater good. They just happen and it’s very unfortunate that they happen. On PvI’s story there’s a significant element of chance in the story for why that particular evil occurred. What’s more, God could have easily prevented that evil. But God is justified–so the story goes–in permitting some horrendous evils and he has to draw the line somewhere between the ones he allows and the ones he doesn’t. On PvI’s story there’s arbitrariness in which evils occur. God could have prevented some particular horrendous evil and the world would have been no worse for that. But evils of that type are justified and one has to draw the line somewhere. I’m not endorsing this view, but it’s a possible type of view that I think the skeptical theist should leave open.

    January 5, 2011 — 13:18
  • David Warwick

    “we don’t know what values the evil might or might not promote”
    I’d agree with them there. I don’t understand the currency these transactions take place in.
    The mineshaft case … the currency is lives. It’s about killing the fewest. (Human lives – I suspect if one mineshaft had ten people in it, and the other had twenty fawns, we’d save the ten people, rather than nine people and nineteen fawns). There’s nothing there about ‘good and evil’, not directly.
    All this discussion is based on the idea that each action has an objective numeric value of ‘good’ and of ‘evil’. It seems like an extraordinarily naive view of the world.
    If it’s true, though, what possible reason would God have in not giving us the capacity to see the precise values? It would be asking us to get as many points as we can in a game without explaining the scoring system.
    If it’s somehow to ‘teach us a lesson’, I’d question the pedagogical technique, given that it’s been running for thousands of years and increasing numbers of people – over half of those in the UK – don’t even think there’s a teacher in the room.
    If we are to ‘maximize good’, why do we not have even a theoretical framework for putting a value on ‘good’? Is there not a case to be made that ‘good and evil’ are irrelevant here, that we can discuss ethics entirely in terms of effects? – This action requires X cost and saves Y QALYs.

    January 5, 2011 — 13:29
  • Mike Almeida

    Here are ST1 – ST3.
    (ST1) We have no good reason for thinking that the possible goods we know of are representative of the possible goods there are.
    (ST2) We have no good reason for thinking that the possible evils we know of are representative of the possible evils there are.
    (ST3) We have no good reason for thinking that the entailment relations we know of between possible goods and the permission of possible evils are representative of the entailment relations there are between possible goods and the permission of possible evils.
    All talk of the justification for evil in the dispute has been axiological. If God is justified in permitting certain evils, then (it has been assumed on all sides) it is because there is some good that follows from permitting those evils. So I don’t want to argue about alternative approaches to moral justification (as interesting as they might be) and how things might fall out differently. You say this,
    I think the prospectivist explanation is helpful just to the extent that it puts some flesh on the proposal that an agent’s obligations are a function of what they know.
    But this is not true. Prospectivism doesn’t help. No one is in a position to calculate expected utility since, by the hypothesis of skeptical theism, we are ignorant of (for all we know) very many, very great goods (ST1) and we are ignorant of their relation to the evils we permit (ST3). Not only do we not know what goods might follow from what we do, we don’t know how probable it is that the goods would follow from what we do.
    In order to calculate expected utility, I would have to be making a decision under what is called risk. This is a situation in which I know what the possible outcomes of my actions are and I know the probability distributions under various hypotheses of action. I’m not in that sort of situation. I’m not even in the situation of uncertainty. In contexts of uncertainty, I know the possible outcomes, but I don’t know the probabilities. I’m rather in a context of ignorance: I’m not sure what the outcomes are and I don’t know how probable they are.
    So the prospectivist suggestion doesn’t help. It is useful only in contexts of risk. But given ST1 – ST3, I’m not in a context of risk. This is why I cannot decide what to do in the relevant moral situations.
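    In symbols (a rough schematic of this standard taxonomy, with $A$ an act, $\{o_1, \ldots, o_n\}$ its possible outcomes, and $u$ an assumed utility function):

    $$\text{Risk: } \{o_i\} \text{ and } P(o_i \mid A) \text{ known, so } EU(A) = \sum_i P(o_i \mid A)\, u(o_i) \text{ is well defined.}$$

    $$\text{Uncertainty: } \{o_i\} \text{ known but } P \text{ unknown. Ignorance: not even } \{o_i\} \text{ known.}$$

    Under ST1 – ST3 we are in the third context, so the sum cannot even be written down.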

    January 5, 2011 — 13:46
  • Mike Almeida

    I think what’s nice about PST is that the skepticism about what values evil might or might not promote just doesn’t affect what obligations I have.
    How can that be? The prospectivist has to calculate expected utility, and that is exactly what he cannot do, given his epistemic situation under ST1 – ST3.

    January 5, 2011 — 13:49
  • Ted Poston

    Let’s see. The view I’m interested in says that one’s obligations are a function of what one knows. Maybe it’s a form of prospectivism; maybe not. I really am just interested in explaining the intuition in the original analogy. So let’s consider a skeptical theist considering some easily preventable (apparent) evil–e.g., a kidnapping. The theist knows various pro tanto reasons to prevent the kidnapping and knows of no specific pro tanto reason not to. Further, she believes with justification (but doesn’t know) ST1-ST3. What obligation does she have? Well, it’s a function of her knowledge. And so it’ll be a function of her specific pro tanto reasons. It looks like she should prevent the kidnapping. All of that is consistent with it still being the case that ST1-ST3 are strong enough to undermine the evidential argument from evil. Make sense?

    January 5, 2011 — 14:31
  • Mike Almeida

    So, here is the (or a) mineshaft case.

    Ten miners are trapped either in shaft A or in shaft B. Floodwaters are threatening to spill into the mine shafts. You have a limited number of sandbags—enough to keep water out of one of the shafts, but not both. Your evidence suggests the following: If you use all the sandbags to block shaft A, diverting all of the water into shaft B, then all the miners will be saved if they are in A, while none will be saved if they are in B. Conversely, if you use all the sandbags to block shaft B, then all the miners will be saved if they are in B, while none will be saved if they are in A. If you don’t block either shaft (or, ineffectually, try to spread the sandbags between both shafts), then the water will be divided equally between the shafts. No matter which shaft the miners are in, nine will be saved, although one miner, deepest in the shaft, will be lost. The evidence available to you supports neither more strongly, nor more weakly, that miners are in A than that they are in B.

    Notice that we know the possible outcomes for each option we might take. We know that if we block A there is a .5 probability that all are saved and a .5 probability that none are. Similarly for blocking B. Not blocking either has a certain outcome that 9 are saved. God knows that all of the miners are in A, so he will act differently. We will calculate expected value and block neither (saving 9).
    But this is not at all the situation ST1 – ST3 puts us in. Imagine that someone comes along (call him MB) and informs us that we really have no idea how many mine shafts there are or how they’re related to one another or how many people or things are in them. This is what ST1-ST3 do. They make it the case that we do not know how many miners are in the shafts or how many shafts there are or what might happen if we block one shaft or another, etc. It puts us in a situation where we do not know how bad it would be to block A, since for all we know there is some very great good (maybe the saving of 100,000 people) in another mineshaft that would be lost if we block A.
    In a situation like this, it does not help if you’re a prospectivist, since you cannot calculate expected utility.
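    To see the point concretely, here is a minimal Python sketch with illustrative numbers from the mineshaft case (the function and the numbers are mine, not anything in the literature):

        # Prospectivism needs a complete decision table: every outcome of an
        # act, each with a probability and a value.

        def expected_value(outcomes):
            """outcomes: exhaustive list of (probability, value) pairs for one act."""
            assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # table must be complete
            return sum(p * v for p, v in outcomes)

        # Decision under risk: outcomes and probabilities known (value = miners saved).
        block_A       = [(0.5, 10), (0.5, 0)]  # all saved if miners are in A, none if in B
        block_B       = [(0.5, 0), (0.5, 10)]
        block_neither = [(1.0, 9)]             # nine saved for certain

        print(expected_value(block_A))         # 5.0
        print(expected_value(block_B))         # 5.0
        print(expected_value(block_neither))   # 9.0 -> block neither maximizes expected value

        # Under ST1-ST3 we are in a context of ignorance: we do not know how many
        # "shafts" (outcomes) there are, what values they carry, or how probable
        # they are. No complete table exists, so expected_value cannot even be
        # given its argument.

    The point is that the prospectivist machinery presupposes the table; ST1 – ST3 take the table away.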

    January 5, 2011 — 14:32
  • Ted Poston

    “But this is not at all the situation ST1 – ST3 puts us in. Imagine that someone comes along (call him MB) and informs us that we really have no idea how many mine shafts there are or how they’re related to one another or how many people or things are in them. This is what ST1-ST3 do.”
    Mike, maybe we’ve posted over each other, but I don’t see how this affects the point I made. It looks like you think that ST1-ST3 result in the loss of knowledge of any pro tanto reasons. If that’s right, then I agree with you. But ST1, if true, is compatible with our knowledge of various pro tanto goods.

    January 5, 2011 — 15:01
  • Mike Almeida

    But ST1, if true, is compatible with our knowledge of various pro tanto goods.
    I’ve come to the conclusion that I have no idea what your position is. I thought your view was the prospectivist view. But I showed that this position is useless in the revised mineshaft situation we’re in under ST1- ST3. Now you tell me I’ve said nothing that affects your position. So I’m entirely lost as to what it might be. I assume you agree that we cannot calculate expected utility as the prospectivist would do. So how do you propose to determine what to do in the revised mineshaft case? What obviously suitable alternative is there, given that we cannot calculate expected utility?
    It is not enough to say ‘our obligations are a function of our knowledge’. There isn’t any such plausible function. Let me show you what the problem is. Suppose you walk out to your garage. You don’t know whether there are dense gas fumes in the air, and you don’t know there aren’t. You are told (ST1-ST3) that you’d really not be aware of the gas fumes if they were there. You’re trying to find the light switch, but can’t find it. Question: can you light a match to find it? Your answer: well, obviously, it’s a function of what you know. But that cannot be right. You do not know there are dense gas fumes in the garage sufficient to produce an explosion. Does that mean you can light away? Obviously not. You also don’t know that there aren’t such dense fumes. What you ought to do is not just a function of what you know. It’s a function of what you don’t know as well. I don’t know whether I would cause an explosion, but I know that I wouldn’t know this if it were true. So I should not light the match. But I also don’t know that preventing this evil would not result in the loss of some very great good. And I know that I would not know this, were it true. Just as I should not light the match, given what I don’t know, I also should not prevent the evil.

    January 5, 2011 — 16:33
  • David Warwick

    “It’s a function of what you don’t know as well.”
    ‘I know there are factors I may not be aware of’. You can always operate honestly to the best of your knowledge, and seek to improve your level of knowledge.
    As for the gas in my garage, I know that:
    1. There are explosive gasses.
    2. They are not typically present in my garage.
    3. There are reasons they might be.
    4. If I have cause even to suspect that they are there, I need to exercise caution.
    5. We would all exist in a state of paralysis if we exercised total caution in all circumstances.
    6. Electric torches present far less of a risk if gas is present.
    I can, in other words, work around my areas of ignorance. I can, using examples in my life and those of others, adjust my behavior. If a particular strategy pays off for others, I can take that into account. If it’s a recurring problem, I can take measures to detect and ultimately solve it.
    If we have reached the potential limit of human knowledge of God, we, by definition, now have all we are going to get to work with. We are at the ‘to the best of our knowledge’ phase. People have different levels of risk aversion. Some will always prefer to stumble around in the dark to lighting the match.
    If no gas had ever been detected, after millennia, I think a reasonable response might be to not worry about the gas thing any more. It’s possible that there’s gas out there somewhere, there’s some great art about gas, in the past some wise men like Isaac Newton believed in gas, Pascal said it was wisest to act as if there was gas just in case there was, but gas should best be understood as a good story and the gas industry should probably not enjoy those tax breaks or be allowed merely to move their fitters to other dioceses when they’re caught abusing children.

    January 6, 2011 — 7:18
  • Ted Poston

    There’s a core view here that I haven’t seen reason to reject. The view says that our knowledge of various pro tanto reasons for preventing an easily preventable harm, together with our knowledge of no pro tanto reasons against preventing it, is what matters in determining obligation. Our ignorance of whether or not the goods we know of are representative of the possible goods there are (etc., for ST2 and ST3) doesn’t undermine our knowledge of what the pro tanto reasons are and so doesn’t undermine our obligation. As I understand your position, it’s that ST1-ST3 imply that we have no knowledge of any pro tanto reason. I just don’t see how that follows. ST1, for instance, doesn’t undermine our knowledge of the goods we know about. Relieving suffering is good. ST1 doesn’t gainsay that. It just says that we are ignorant of whether or not this and other known goods are representative of the possible goods there are. The purpose of the analogy I gave was to point out how ignorance of another’s reasons–even under the reasonable assumption that the other has justifying reasons–doesn’t change one’s obligations.
    Re prospectivism: maybe it’s a good way to develop the intuition I’m after; maybe not. But I’m not completely convinced that in the revised mineshaft case we can’t calculate the expected utility of acting. Here’s how I understand your view. Assume something like ST1-ST3 for the mineshaft case. What does that amount to? Let’s assume that we are ignorant of whether the evils we know about in the mineshaft case are representative of the possible evils there are. Suppose for instance we can’t assign a probability to this evil: water rushing into the mine causes a drastic change in air pressure leading to a total mine collapse. How should this affect the calculation of the expected utility? It’s not at all clear to me that it should change it at all. Why? In brief, for everything we’re ignorant of that is a reason not to act, we can describe something else that we’re ignorant of that would be a reason to act. So we have a mutual destruction of ignorances (so to speak). I think this supports the claim that when we are trying to figure out the expected utility of acting we should use only those things we know to represent the decision framework.
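    One way to make the “mutual destruction” idea precise (my notation, not anything in the literature): write $EU(a) = K(a) + E[X_a]$, where $K(a)$ sums the known reasons bearing on act $a$ and $X_a$ is the unknown remainder. If our ignorance is symmetric across the options, so that $E[X_a] = E[X_b]$ for any acts $a$ and $b$, then

    $$EU(a) > EU(b) \iff K(a) > K(b).$$

    The unknown terms cancel out of the comparison, which is just the claim that we should use only what we know to fix the decision framework.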

    January 6, 2011 — 9:51
  • Ted:
    I think the point that there may be reasons to permit that aren’t greater-good reasons is an important one to keep in mind. For instance, if God doesn’t have middle knowledge and incompatibilism is true, then God could have reason to permit me to do wrong, not for the sake of a greater good, but because if he did not permit me to do wrong, I couldn’t have freely done right. And in any case where I am capable of preventing an evil, I need to take seriously the possibility that the good that God is trying to produce is the good of my freely preventing an evil that would have happened had I not prevented it.
    However, it’ll still be a part of standard ST that we don’t know what does or does not serve a greater good, and it seems that that’s all that’s needed to generate the problem.
    But I grant that there are cases where there is a special obligation to prevent an evil, and in cases like that your argument may well work. So at least we get the fact that total moral scepticism does not result from ST.

    January 6, 2011 — 9:53
  • Here’s a thought experiment that I think is relevant. Suppose there is a very complicated machine, built by someone whose motivations you know nothing about, and there are a hundred strangers hooked up to the machine. Examining the machine a little tells you that it is capable of bestowing incredibly great harms and incredibly great benefits on the people hooked up to it. There is a great number of buttons on the machine. You can see that in certain circumstances, pressing a button could cause great suffering to all the strangers. The whole thing is a big mess of wires. Finally, after a lot of work, you realize that the machine is about to administer ten minutes of great pain to stranger number 20, and that if you press the mauve heptagonal button, the machine will do something else. You don’t know what that something else is, but you know it’s not the administration of ten minutes of great pain to stranger number 20 soon. Maybe it’ll be the administration of twenty minutes of worse pain to stranger 20 tomorrow; maybe it’ll be the bestowal of a great benefit on stranger number 7; maybe it’ll be some combination of these; maybe the machine will blow up and kill everyone; you just plain don’t know.
    Speaking for myself, I am torn here between two opposed answers: you are obligated to press the button; you are prohibited from pressing the button.
    I have an intuition that given a machine like that, I should leave it alone. For if I act and harm results, then I was a positive cause of the harm, and I don’t want to be that.
    On the other hand, I feel the force of the following argument: If you press the button, number 20 doesn’t feel pain for the next ten minutes, and all sorts of other things happen that you don’t know anything about. If you don’t press the button, number 20 feels pain for the next ten minutes, and all sorts of other things happen that you don’t know anything about. No-pain + stuff-you-don’t-know-anything-about is preferable to pain + other-stuff-you-don’t-know-anything-about.
    It’s worth noting, too, that the problem also occurs for some naturalistic views. For instance, it is not implausible that the universe exhibits a lot of chaos. Chaos magnifies small causes into big effects. You take an aspirin, and a thousand years later an earthquake occurs in Los Angeles, which would have occurred in the middle of a desert had you grinned and borne the headache. Given enough chaos, we won’t have reason to think that preventing a small evil will be on balance worthwhile.
    I also have the intuition that a universe designed by God isn’t like the machine I described above, and that any hypothesis of chaos needs to be tempered with a belief in divine providence. But these are intuitions that, I think, the sceptical theist may not be able to use.

    January 6, 2011 — 10:00
  • John Alexander

    I have a question, actually a bunch of them. Why is it assumed that God has knowledge regarding ethics that we do not possess? Or is what is being assumed only that God has knowledge of outcomes that we do not possess, but that our knowledge regarding ethics is identical? Does ST rest on accepting a consequentialist utilitarian ethical theory as well as compatibilism regarding free-will? I take it that the existence of the particular evil is the only plausible explanation for how the greater good could come into existence. But this explanation does not seem correct, for reasons Mackie and others have given.
    Another question. Let us assume that an evil E is necessary for a greater good G to occur. God knows that E is going to happen but that it is necessary for G, so He allows it to happen. Imagine that a creature like us, S, also sees E but does not see the connection between E and G. S decides to stop E from happening. Should God stop S from stopping E from happening? If God does stop S he must have a reason for doing so, possibly that stopping S will allow G to exist. But stopping S violates S’s free-will. But some have argued that the reason why God allows evil to exist in the first place is that it is the result of actions knowingly and freely performed by moral agents like us. If God stops S, then what does this do to the free-will defense? And if God allows S to stop E, what does this do to the idea that, because E is necessary for G, E should be permitted to occur?
    Finally, assume that S murders S’. Assume that S argues that God told him to do so because killing S’ is necessary for G to occur. People claim to hear God talking to them all the time, so even if I am not in a position to directly verify this command, it seems possible that such a command could be given. And as we all know we should do as God commands us to do. If we punish S, could it not possibly be the case that we are punishing S for doing precisely what he ought to do?

    January 6, 2011 — 10:04
  • Donald Smith

    Just wanted to say that I think Dan Howard-Snyder’s forthcoming (or perhaps it’s out now) paper in Oxford Studies in Philosophy of Religion, ‘Epistemic Humility, Arguments from Evil and Moral Skepticism’, which you can download from Dan’s website, is relevant to the interesting discussion here. Dan, for instance, discusses the important point, noted by Professor Pruss above, that similar problems/questions/issues here apply to naturalism as well.

    January 6, 2011 — 13:26
  • David Warwick

    “Speaking for myself, I am torn here between two opposed answers: you are obligated to press the button; you are prohibited from pressing the button.”
    There is only one rational response: this is a device built by a psychopath and you should not play this game. I would do my best to escape from it, free others from it and track down and stop its designer.

    January 7, 2011 — 6:57
  • David:
    Yeah, so your reaction is similar to mine–that a good God wouldn’t make the universe be like this machine. And this gives a new argument against ST.

    January 7, 2011 — 8:31
  • Ted Poston

    John, One quick comment: Phil Quinn has a nice essay on Abraham’s choice when God commands Abraham to kill Isaac. Phil makes the point that he can’t bring himself to believe that God would really tell him to do such a thing. If someone thought God was telling them to kill someone else, they’d not be in a position to *know* that God is saying that. Also, re your other questions: the main issue concerns whether ST1-ST3 undermine moral reasoning if they are robust enough to undermine the evidential argument from evil. ST1-ST3 just talk about our ignorance of various possibilities. They don’t require consequentialism. They suggest that for all we know God might have knowledge of various good-making or bad-making properties that we don’t know about.

    January 7, 2011 — 8:41
  • Ted Poston

    That’s a diabolical thought experiment! I have the same intuitions you have about the case. fwiw: here’s a consideration that you should press the button. Suppose that if you press the button then number 20 doesn’t feel pain for the next 10 minutes and the machine goes into state m. Also, if you don’t press the button then #20 feels pain for the next 10 minutes and the machine goes into state m. In this case what should you do? Press the button. You don’t know what state m is, but it will occur regardless of what you do, and you can definitely do something to prevent some harm. Next case: if you press the button then #20 gets relief and the machine goes into state m. Or you don’t press the button, #20 gets no relief, and the machine goes into state n. In this case my intuitions are shifty. But I feel some pressure from this line of reasoning. Look, you don’t know anything about m or n. For all you know, m=n. But you do know that there’s something you can do that will be good–relieving suffering. It might turn out state m is that each person gets a piece of chocolate. Then again, it might be that state n is that each person receives an electric shock. You just don’t know. So the act that has the greatest expected utility is to press the button. (Intuitions are clearer here if we make the evil #20 suffers more drastic. Suppose #20 will be killed–verdict: you should press the button. Suppose #20 will lose a limb–verdict: press the button. Suppose #20 will get kicked in the shin–unclear what to do.)
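    Schematically, with $r > 0$ the known value of relieving #20’s pain and $m$, $n$ the unknown machine states (again, my notation): in the first case $EU(\text{press}) = r + E[m] > E[m] = EU(\text{don’t press})$, so pressing dominates whatever $m$ turns out to be. In the second case $EU(\text{press}) = r + E[m]$ while $EU(\text{don’t press}) = E[n]$; if nothing we know favors $n$ over $m$, then for all we can tell $E[m] = E[n]$, and the known term $r$ tips the balance toward pressing.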

    January 7, 2011 — 9:00
  • Ted Poston

    Alex: You said above that “it’ll still be a part of standard ST that we don’t know what does or does not serve a greater good, and it seems that that’s all that’s needed to generate the problem.” What problem do you have in mind? I’ve focused on the problem that we don’t have an obligation to prevent easily preventable (apparent) evils if ST1-ST3 are robust enough to undermine the evidential argument from evil. I think I’ve shown that *that* isn’t a problem.
    I learned yesterday that Mike Bergmann has a forthcoming paper that makes similar points to the ones we’ve discussed. Here’s a reference: “Commonsense Skeptical Theism,” Science, Religion, and Metaphysics: New Essays on the Philosophy of Alvin Plantinga, eds. Kelly Clark and Michael Rea (Oxford University Press, forthcoming).
    Mike: could you say whether or not you think skeptical theism is consistent with knowledge of pro tanto reasons? If it’s not, then I agree with your line in Almeida & Oppy (2003). But if ST is consistent with knowledge of things like ‘relieving suffering is good’ then I don’t see why the STer can’t say what we’ve been saying.

    January 7, 2011 — 9:09
  • David Warwick

    “Yeah, so your reaction is similar to mine–that a good God wouldn’t make the universe be like this machine. And this gives a new argument against ST.”
    But theism would still agree with the basic premise – that the universe is a machine God built for a purpose, and that by pressing different buttons we can create different results, some of which will be harmful to others, some of which will be harmful to others even if we cannot reasonably be expected to know that.

    January 7, 2011 — 9:52
  • John Alexander

    Ted
    Does this not beg the question? If we do not think that God is telling Abraham to kill his son because we cannot bring ourselves to believe that God would tell us to kill an innocent person, how would we know we are in the correct position to know what God wants us to do – if God is only telling us to do what we already think we know what we ought to do – we know we should do x and God is telling us to do x. If we know that we should do x and we think that God is telling us to do -x then we are mistaken about what God wants us to do. Why do we need God? However, I take it that the significance of Abraham’s choice is that it is one that seems to require him to revise what he thought God would tell him to do and that we should do what God commands us to do because God would only command us to do what we ought to do.
    We are continually faced with having to decide what to do. The famous ‘baby in the pond’ case is an example, which we can extend to sending money to save a child in a distant land. We know that saving an innocent life is a good thing to do, at least we seem to reason from this starting point – it is a shared intuition if you will. However, if the ST is correct we cannot know that this is so because we are not in a position to see all the good-making or bad-making properties (whatever that means, I prefer good or bad outcomes) For example, if I save the baby in the pond I may not be in a position to save a bus load of children because I come to that scene to late to act because of the time I spent saving the baby. This places us in a ‘Buridian’s Ass’ position where we cannot make a choice on whether to save the child or not (or in Alex’s scenario to push the button or not). But, I have no reason to think that I am going to be in a position to save a bus load of children while I do know that I am in position to save the baby’s life. I do not have to accept the mere possibility that I could be in a position to save a bus load of children’s lives if I do not save the bay in the pond as having any epistemic value in determining what I think I do know, namely that I should save the baby, anymore then the mere possibility that I might be dreaming that there is a baby in the pond forces me to take this into account in determining what I do know and what I ought to do re the baby. It seems that from how we do reason about morality that either ‘defense’ for not saving the baby is laughable and would be rejected as absurd.
    Here is the rub (for me): it is true that, if God knows everything, He knows things that we do not. However, from this it does not follow that we lack the ability to know some of the things He knows that we do not. If God has a reason for allowing evil to exist that we do not know, it is still plausible to think that we could understand the reason if He gave it to us, especially if we already possess some knowledge of what is required to be a morally good agent. This leads me to wonder why He does not give us the reason. If we lack the ability to understand it, then how can He hold us accountable for our actions? If we do not lack the ability, then what is He trying to hide, and how is His not telling us the reason consistent with being a completely good moral agent? I take it that a good moral agent would explain why he or she acts as he or she does when asked to do so.

    January 7, 2011 — 10:40
  • David Warwick

    The mineshaft,
    How would God answer the mineshaft question?
    The theist would argue that whatever course of action God chooses, it would be the perfect answer. I think the corollary of that has to be – all things being equal – God would always make the same choice.
    There are two levels of answer, here.
    1. What God would do (actually does, as the two are synonymous).
    2. What God would want us to do.
    (2) will inevitably be ‘the right thing to do’.
    Given absolute moral truth, both (1) and (2) should lead to the same answer.
    As God has not intervened, God wants all the miners to die, even though we could save at least half of them.
    But … wait, the theist might argue, God wants us to rescue the miners ourselves.
    Say that, as poor luck would have it, there are two identical disasters simultaneously, exactly conforming to the mineshaft dilemma. The mine rescue team is so busy with the first, they don’t know about the second.
    Let’s also allow that mine accidents are usually infrequent, that the industry is well run, the area is geologically stable, that all precautions were taken, the miners are honest and dutiful men, that the rescue team is as well-equipped as can be expected and so on. (To rule out spurious ‘God is punishing greedy miners’ / ‘God wants us to regulate the mining industry better’ explanations).
    God knows about the second disaster, and knows the rescue team don’t know. He saves the miners, but only in the second disaster (empirically, God does not do this – otherwise, the smartest thing to do in the event of a heart attack would be to stay as far away as possible from a doctor).
    Or he does nothing to save the second set of miners (which is consistent with his actions over the first). He does not even tell the rescue team about them. They die.
    What’s clear is that God has not left clear instructions. We could argue that even within one religious tradition, there are confused messages, or ambiguous ones, or gaps. God has an answer, it’s consistent, and as it’s God, we should accept it’s correct. We don’t know what it is.
    If we’re to muddle along, do our best, use our judgment … With the mineshaft dilemma, would it make a difference if the person making the decision was a theist or an atheist? What about the miners – if one group were agnostics and the other group were devout Christians, would God prefer us to drown one lot to save the other? How about agnostics and devout Muslims? What if there were one devout Christian, so that you could guarantee his safety by flooding the other chamber, but ran a chance of drowning him if you sandbagged both?
    Are there any Christians here prepared to say God wouldn’t want there to be a religious test when you’re saving disaster victims?

    January 7, 2011 — 10:43
  • Ted Poston

    John: A quick comment. You write, “We know that saving an innocent life is a good thing to do, at least we seem to reason from this starting point – it is a shared intuition if you will. However, if the ST is correct we cannot know that this is so because we are not in a position to see all the good-making or bad-making properties (whatever that means, I prefer good or bad outcomes).”
    It looks like you are saying that skeptical theism is inconsistent with our knowing that saving an innocent life is a good thing. But there’s no inconsistency here. Skeptical theism isn’t the view that we lack moral knowledge. Rather it’s the view that we are ignorant about the limits of our axiological (and moral) knowledge, especially vis-a-vis figuring out God-justifying reasons for permitting horrendous evils. As I’ve been saying, the skeptical theist can recognize that we know various things that have genuine moral weight. We know that relieving suffering is a good thing. And if we’re in a situation in which we know we can perform an act that would relieve a lot of suffering then–ceteris paribus–we should do that. In Mike’s fawn case earlier on in his dialectic we can recognize that we ought to prevent the fawn’s suffering if we can. Still, if it occurs (regrettably), it’s consistent with our obligation to prevent it that there’s a beyond-our-ken justification for God’s permitting it.

    January 7, 2011 — 11:30
  • David Warwick

    John,
    “it is true that, if God knows everything, He knows things that we do not”
    Reading your post, it strikes me that limited beings like ourselves have to cut our losses and make decisions based on our limited knowledge. Perhaps an omniscient being inevitably becomes stuck in the scenario you describe – endlessly extrapolating all possibilities, until the end of time. Also presumably he’s trapped by predestination paradoxes and by various other paradoxes that come from being infinitely X. Perfect knowledge, when it comes to decisions and taking actions, has to be a handicap.

    January 7, 2011 — 13:50
  • Mike Almeida

    Relieving suffering is good. ST1 doesn’t gainsay that.
    To provide yet another counterexample, take any case in which pushing button B0 will keep one person from a minor painful experience, but it might kill 1 billion people; you just don’t know, and you don’t know the probability that it will kill the 1 billion. Pushing B1 won’t hurt anyone and won’t help anyone. The moral position that of course you should push B0, since you know that someone is thereby helped and nothing “gainsays that”, is mindnumbingly mistaken. You might consider stating whatever position it is you’re defending. These suggestions/hints of that position are just painfully easy to counterexample.

    January 7, 2011 — 20:49
  • John Alexander

    Questions for Mike. I am trying to make sense of all the various threads here. When you speak of unimaginable goods or evils, are you claiming that we cannot, in principle, imagine them, or that we have not yet imagined them but have the capability to do so as we learn more? Additionally, are these unimaginable evils or goods variations on specific types of evils or goods that we have some knowledge of, i.e., rapes, murders, tortures, caring, compassion, aiding, etc., or new types of evils or goods? If the latter, how would we recognize them as being evil or good unless there was some characteristic that all types of evil or good possess? If there is a common characteristic, then does this count against ST1-ST3?

    January 8, 2011 — 2:18
  • Ted Poston

    Now, come on, Mike; you’re starting to sound like Glenn Beck. If you’re going to say that a view is ‘mindnumbingly mistaken’ and that the view is ‘just painfully easy to counterexample’, you should make sure that your remarks are coherent and that you indeed provide a counterexample. In your counterexample, one knows that the act of pressing B0 might kill a billion and one knows that the act of pressing B1 won’t hurt or help. I think there’s more to be said about the ties between knowledge and obligation, just as there is a lot to be said about the ties between knowledge and assertion and between knowledge and practical reasoning, but those remarks, while interesting, aren’t central to the state of the dialectic.
    Independent of all this, there’s a core issue here that I haven’t seen reason to reject. I won’t repeat all the things I’ve said, but I’ve maintained that skeptical theism doesn’t undermine our knowledge of pro tanto reasons like (as you quote but don’t reply to) ‘relieving suffering is good’. You seem to reject that. I want to know why.

    January 8, 2011 — 7:59
  • Mike Almeida

    I’m only sounding like Beck because you’re sounding like Oprah…:). It’s just talk. For every proposal that you’ve made, I have offered a counterexample. For your initial case (far above) I show the relevant disanalogies to the case Graham and I offer, here: January 3, 2011, 3:01 PM. For the prospectivism position, I offer a counterexample at several places, including January 5, 2011, 2:32 PM. For the current position I offer a counterexample here: January 7, 2011, 8:49 PM.
    But you add this,
    I’ve maintained that skeptical theism doesn’t undermine our knowledge of pro tanto reasons like (as you quote but don’t reply to) ‘relieving suffering is good’. You seem to reject that. I want to know why.
    Return to the button pushing case. If ST1-ST3 are true, then moral decisions are all like the following.
    1. If I push B0, then I will relieve a minor pain for S.
    Now it seems obvious that you should push B0. But the skeptical theist tells us that we have left something out of the deliberation. There are unknown goods and evils and it is unknown how they are causally related to the actions we perform. Skeptical theism entails that, for all I know, there is some terrible outcome associated with pushing B0. For all I could know pushing B0 will kill 1 billion people.
    Now take that position seriously for a second. (1) is then replaced with (1′).
    1′. If I push B0, then I will relieve a minor pain for S but, for all I know, I will also kill 1 billion people.
    Now, with regard to (1′), you don’t know how probable it is that you will produce the terrible outcome. The chances might be 1, or .8, or .2, or maybe 0. You just don’t know. Skeptical theism leads me to two conclusions.
    C1. If I take skeptical theism seriously, I will not push B0.
    C2. I should push B0 rather than take skeptical theism seriously.
    When you say you have a ‘pro tanto reason’ to relieve suffering, you might mean the following:
    C1′. Even if I take skeptical theism seriously, I have better reason to push B0 than not to.
    (C1′) is the position I took you to be defending. That’s the view that’s false. Or is your view (C2′)?
    C2′. I have better reason to relieve suffering than I have to take skeptical theism seriously.
    I think (C2′) is true, or at least harmless.
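    To make the structure behind C1 explicit, here is a minimal sketch in Python (an illustration only; the utility numbers U_RELIEF and U_CATASTROPHE are invented placeholders, not anything from the thread). The point it displays: with a known, negligible probability of catastrophe, expected utility favors pushing B0; if the probability is unconstrained, as the skeptical theist allegedly must allow, expected utility yields no verdict at all.

    ```python
    # A toy model of the B0 decision. The utility numbers are hypothetical.
    U_RELIEF = 1.0          # good of relieving S's minor pain
    U_CATASTROPHE = -1e9    # disutility of the terrible outcome

    def expected_utility(p):
        """Expected utility of pushing B0, given probability p of catastrophe."""
        return U_RELIEF + p * U_CATASTROPHE

    # With a known, negligible p, pushing B0 is clearly best:
    print(expected_utility(1e-12))   # ~1.0, so push

    # But if p could be anywhere in [0, 1], the expected utility spans an
    # enormous interval and delivers no verdict:
    print(expected_utility(0.0), expected_utility(1.0))   # 1.0 vs. -999999999.0
    ```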

    January 8, 2011 — 8:54
  • Ted Poston

    Very funny. Did I mention I was thinking of having a book club….? I hear George Soros has written some very good books…
    You say that the STer says that ‘there are unknown goods and evils’. But the STer says that ‘for all we know there are unknown goods and evils’. Also, there’s a “with respect to” claim in ST. It’s the claim that ‘for all we know there are unknown goods and evils with respect to justifying a perfect being in permitting inscrutable evils’. I just don’t see how the button pushing analogy is apt for how an STer sees the moral decision.
    Here’s the picture: we know that some considerations have genuine moral weight. These considerations can be overridden by other considerations. And for all we know, the goods, evils, and the entailments between them aren’t representative of all the goods, evils, and entailments there are. I think that in framing a moral decision we should let the stuff that has genuine moral weight frame the situation for us. Even on a consequentialist ethic there’ll be lots of unknown consequences of our action. J.S. Mill held that there is the evaluation of the act (does it have the best consequences?) and the evaluation of the agent (did the agent act rightly?). I’m considering the property of moral obligation, which I take to be the second evaluation. So the STer says that when we are determining our obligation we need to take into account the stuff we know and (if this is different) the reasonable expectations of our acts. (This is Bergmann’s view, fwiw, in an unpublished paper.)
    Now, how does this apply to the button pushing case? I have two possible acts: push B0, which alleviates some suffering but very well may produce tremendous harm, or push B1, which does nothing. In light of the fact that the possible suffering far outweighs the alleviation of the minor suffering, I should not push B0. That’s the right verdict in the button pushing case, no? So, now, how does this change once we take into account ST? The STer thinks that when we are considering whether or not to prevent some easily preventable harm we have to bear in mind that, for all we know, there are goods and evils and entailments that may factor into a perfect being’s being justified in permitting harms like this. I just really don’t see how this thought affects the moral framework concerning my obligation. I know that I can prevent this harm. I know that this harm is bad. I know that for all I know there may be factors beyond my ken that would justify a perfect being in permitting this. Still, I should prevent the harm. What part of that picture do you reject? (Apologies if this is rushed; I’ll be out for the rest of the day and I wanted to get this out before Monday.)

    January 8, 2011 — 9:23
  • David Warwick

    “I think that in framing a moral decision we should let the stuff that has genuine moral weight frame the situation for us.”
    How do we assess the ‘weight’ of morals?
    I think slavery is a bad thing.
    Jesus said we could keep slaves, indeed that slaves should be grateful to their masters and it was appropriate for masters to whip slaves. Paul sent a slave back to his Christian master. Timothy says we should consider ourselves the slaves of God.
    Do you think my opinion has ‘more moral weight’ than the Son of God?

    January 8, 2011 — 10:27
  • Mike Almeida

    Ted, first, you’ve misstated ST1-ST3. I will quote them again. These are not stated relative to God, they are flat statements about what we know.
    (ST1) We have no good reason for thinking that the possible goods we know of are representative of the possible goods there are.
    (ST2) We have no good reason for thinking that the possible evils we know of are representative of the possible evils there are.
    (ST3) We have no good reason for thinking that the entailment relations we know of between possible goods and the permission of possible evils are representative of the entailment relations there are between possible goods and the permission of possible evils.
    You note this,
    In light of the fact that the possible suffering far outweighs the alleviation of the minor suffering, I should not push B0. That’s the right verdict in the button pushing case, no?
    No, Ted, that’s the wrong decision in the button pushing case. I think this is the point that is getting missed. First, let me emphasize that the button-pushing argument is based on what Bergmann says in ST1-ST3 (see above).
    We suppose that (1) is true in some situation.
    1. If I push B0, then I will relieve a minor pain for S.
    So far forth, it seems obvious that you should push B0. But the skeptical theist tells us that we have left something out of the deliberation. By ST2 I learn that I have no reason to believe that there aren’t terrible evils out there I can hardly conceive of. By ST3 I learn that I have no reason to believe that such terrible evils are not going to result from my pushing B0. So skeptical theism entails that, for all I could know, there is some terrible outcome associated with pushing B0. For all I could know pushing B0 will, for instance, kill 1 billion people.
    Add to your deliberation about what to do that, for all you could know, pushing B0 will kill 1 billion people. (1) is then replaced with (1′).
    1′. If I push B0, then I will relieve a minor pain for S but, for all I know, I will also kill 1 billion people.
    Given (1′), I think there are only two conclusions that a decent moral agent could reach. First, if I actually believed what the skeptical theist is telling me, I would never take the risk of doing such harm.
    C1. If I take skeptical theism seriously, I will not push B0.
    But it’s just crazy (isn’t it?) to come to the conclusion that I should not push B0. Contrary to what the skeptical theist is telling me, I know that pushing B0 will not kill 1 billion people. So, I draw the conclusion in C2.
    C2. I should push B0 rather than take skeptical theism seriously.
    So, when you say you have a ‘pro tanto reason’ to relieve suffering, you mean either C1′ or C2′. C1′ is false. If you took seriously the idea that you do not know that you won’t kill 1 billion people by pushing B0, you would not push it. I take it we agree about this.
    C1′. Even if I take skeptical theism seriously, I have better reason to push B0 than not to.
    So, you must mean C2′.
    C2′. I have better reason to relieve suffering than I have to take skeptical theism seriously.
    On J.S. Mill, it’s fair to say that we can distinguish between agent-evaluation and act-evaluation. But Mill thinks that we are never in situations where we don’t have a very good idea what the consequences of our actions are. He is at pains to say that we have lots of evidence (especially historical evidence) for what consequences follow from which actions. So the whole debate about endless (and unknown) consequences does not arise for him. The whole idea that we are in a skeptical position relative to the outcomes of our actions is entirely foreign to Mill.

    January 8, 2011 — 10:43
  • John Alexander

    Ted
    I am not being clear. I do not mean to imply that we do not have knowledge of what we ought to do, i.e., save an innocent life, alleviate suffering, etc. What I mean to suggest is that if ST is true then we do not know if we ought to act upon what we know. Let us assume that all agree that we ought to save an innocent life, all things being equal. It is the ‘all things being equal’ that I think poses the problem. We know we should save the life of the baby in the pond unless there is an over-riding reason not to do so. We cannot think of a reason that justifies us in not saving the baby, so we know that we ought to save it. As I understand Alston, Ahern, Wykstra, and some others, they are relying on the possibility that God has a sufficient reason for allowing evil to exist that we cannot rule out, because of our epistemic immaturity or lack of epistemic ability/access, to counter the evidential problem of evil. So the ST agrees that we do know that we should save an innocent life all things being equal, but holds that if God does not save the baby then all things are not equal.
    This poses the following problem for me, maybe not anyone else: when we think we know that all things are equal, how do we know this? (This is a relevant question re ST1-ST3.) We look at the evidence that is available to us. 1) We know there is a baby in the pond. 2) We know that we can save it. 3) We know that if we do not save it we will bring about an evil. 4) We know we should eliminate evil, all things being equal. And 5) we are not aware of any overriding reason not to save the baby. 1-5 seem to suggest that we know that we ought to save the baby. However, what reason do we have for thinking that 5 is correct? You claim in replying to Mike that “I have two possible acts: push B0, which alleviates some suffering but very well may produce tremendous harm, or push B1, which does nothing. In light of the fact that the possible suffering far outweighs the alleviation of the minor suffering, I should not push B0. That’s the right verdict in the button pushing case, no?” (We can all make the necessary adjustments so this fits my case.) I think that “In light of the fact that the possible suffering far outweighs the alleviation of the minor suffering” is not correct. In an earlier comment, I suggested that it is possible that by saving the baby I might not be able to save the lives of a busload of children because I will come upon the scene too late to avert the accident from happening. But can I use the mere possibility that there might be a busload of children whose lives I could save by not saving the baby as a sufficient reason to justify me in not saving the baby if I see it in the pond? Your position seems to suggest that I ought not to save the baby (not push button B0). I believe that most people would think that the mere possibility of an evil happening does not alleviate our responsibility to eliminate a known evil. The framework behind this is the idea that our epistemic practices can handle the problem of skepticism, or at least Cartesian skepticism, and by extension, ST. If it is not part of our epistemic practice to accept the defense that I did not save the baby because it is possible that I might be dreaming, then it seems reasonable to reject the defense that I did not save the baby because there might be a busload of children that I could save later on. I have no more evidence that I am dreaming than I have that there might be a busload of children whose lives I might save.
    It seems to me (as I continue to think about this issue) that ST is irrelevant to what we ought to do. I have suggested in other posts that ST is a hollow victory in so far as it simply makes the claim that if God is all-knowing, all-powerful, and completely good then He has a morally sufficient reason for allowing evil to exist, and that because we lack epistemic access to God’s thinking we cannot rule out the possibility that God exists, etc. Now an interesting issue is whether what ST1-ST3 assert is still true regardless of the epistemic status of ST itself. I await Mike’s response to my questions, but I do think they are false, because I think there must be a common characteristic that makes something evil or good, and that if we know what this characteristic is then it matters not if we know all types of evil or good.

    January 8, 2011 — 11:32
  • Mike Almeida

    John,
    When you speak of unimaginable goods or evils, are you claiming that we cannot, in principle, imagine them, or that we have not yet imagined them but have the capability to do so as we learn more?
    I don’t claim that there are such goods; the skeptical theist does. And it is a good question to ask just what is meant by ‘unimaginable’. I always take it to mean good things that I could not in principle know about.
    Additionally, are these unimaginable evils or goods variations on specific types of evils or goods that we have some knowledge of, i.e., rapes, murders, tortures, caring, compassion, aiding, etc., or new types of evils or goods?
    They seem to be something altogether different. They’re goods we are entirely in the dark about.
    If the latter how would we recognize them as being evil or good unless there was some characteristic that all types of evil or good possess?
    I don’t know. Maybe part of the point is that we would not recognize them (whatever they are) as goods or evils even were we presented with them.
    If there is a common characteristic then does this count against ST1-ST3?
    I imagine it would not help skeptical theism to admit that every evil/good there might be is somehow related to the goods/evils we know about.

    January 8, 2011 — 12:02
  • Mike Almeida

    But, can I use the mere possibility that there might be a bus load of children whose lives I could save by not saving the baby as a sufficient reason to justify me in not saving the baby if I see it in the pond?
    John, you construe the skeptical theist’s problem as analogous to cases of opportunity costs. It’s not. The drowning baby case is misleading, since my intuitions in such cases are non-consequentialist. But the whole debate for and against skeptical theism involves consequentialist reasoning. Let’s use the case anyway, under the assumption that some disvalue outweighs the disvalue of letting the child drown.
    The skeptical theist informs us that, for all we know in the pond case, saving the child from drowning will cause 1 billion other children to die in the most gruesome ways. I’m supposed to be completely in the dark about the causal relations holding between what I do and what horrible things follow from it.
    Now suppose I place button B0 in front of you and say: if you press B0, one child will be saved, but for all we know you will cause 1 billion children to die in the most gruesome ways.
    Here are some questions you might ask:
    1. What are the chances that I cause those gruesome deaths?
    Ans: We just don’t know, it is anywhere from certainly true to certainly false. We cannot put a probability on the outcome.
    2. How often does it occur that saving a child causes such gruesome deaths?
    Ans: We just don’t know. It might be happening every time. It might be happening some of the time. We are simply in no position to judge the outcomes of our actions, since so many of the outcomes are beyond our ken.
    3. What should I do?
    Ans: What you should do depends on what you can reasonably anticipate to be the consequences of your action. But you simply cannot reasonably anticipate that the consequences will be good or that they will be terrible.
    4. Can’t I just blamelessly base my actions on the consequences I do know?
    Ans: It’s just not responsible to base your actions on the known consequences of your action. This is pretty easy to see. Consider the garage case (above), but suppose you’re entering your neighbor’s garage. You do not know that lighting a match to find the light switch in the garage will cause an explosion. And if it would cause an explosion, you would not know it now. You know that much, and you know that an explosion is a possible outcome. But what outcomes will occur? If you light the match, one good thing will happen: you’ll find the light switch. So, can you blamelessly base your decision to light the match on the known outcome of lighting the match? It’s pretty clear that that would be irresponsible.

    January 8, 2011 — 13:19
  • Ted Poston

    Mike, I didn’t misrepresent ST1-ST3. As Bergmann and Rea observe in their response to your article, representativeness for induction is relative to a property. So when we consider “(ST1) We have no good reason for thinking that the possible goods we know of are representative of the possible goods there are” we have to fill that out by saying what property is at issue in the representativeness claim. And as Bergmann and Rea say, it’s with respect to the property of figuring in a God justifying reason for inscrutable evils. Here’s the quote:
    “First, note that a sample of xs can be representative of all xs relative to one property but not another. For example, a sample of humans can be representative of all humans relative to the property of having a lung while at the same time not being representative of all humans relative to the property of being a Russian. For ST1 – ST3, what we are interested in is whether our sample of possible goods, possible evils, and entailment relations between them (i.e., the possible goods, evils, and relevant entailments we know of) are representative of all possible goods, possible evils, and entailment relations there are relative to the property of figuring in a (potentially) God-justifying reason for permitting the evils we see around us. Although that property is not explicitly mentioned in ST1-ST3, it is representativeness relative to that property that ST1-ST3 are speaking of.”
    Re the button pushing case, I see now how you intended to set it up. So I reject C1. Taking skeptical theism seriously doesn’t imply that the morally relevant properties we know about relative to our decision about what to do are swamped by our ignorance of the possible good-making (etc.) properties relative to what may justify a perfect being in permitting evils. The proper way to fill out the button pushing analogy would be this: I can push B0 relieving some suffering but, of course, *if* the suffering occurs God very well might have some justifying reason for permitting it that I don’t know about. It’s also true that if I do press B0 then I might be preventing a small harm but bringing about a much greater evil. But I think that’s true for most every action we consider, and (a) it’s not a special problem for ST and (b) there’s a mutual destruction of ignorances here (like I mentioned before).

    January 8, 2011 — 14:25
  • DL

    I reiterate my position that skeptical theists might never have a compelling reason to avoid suffering evil, but always have reason to avoid committing evil. (If that doesn’t work for a consequentialist morality, then so much the worse for consequentialism, though I don’t think even that follows.) Ted is surely right about the problem cancelling itself out: for any skeptically theistic reason I have to perform X, I have an exactly equal reason to perform not-X. The net effect on my decision-making is nil.
    Here’s another way to look at it: the suggestion is that we shouldn’t stop a given evil because maybe God wants that evil (in order to accomplish some greater good that perhaps only He can foresee). But that is not a reason for us to hold back. Sure, if God really wanted the evil not to occur, He could stop it miraculously, with no need for us to intervene; but equally well, if God really did want the evil to occur and we did try to intervene, God could miraculously prevent our intervention. So even if it were right to conclude that we shouldn’t interfere because God might have a plan, it would also follow that it is OK to interfere (regardless of said interference being wrong) because if it really were wrong, God would prevent it. It cuts equally well either way.
    But of course this still isn’t really the right way to think about it. The idea is not that God has planned all sorts of great evils, and by trying to stop them, we will be flouting His will. Rather, our actions screw things up, so God has to step in after the fact and fix things up. It is true that through omniscience, God’s planning occurs temporally — or rather, eternally — prior to our action; but God’s bringing good out of evil is logically posterior to our act. God’s plans in this regard are simply not applicable to our moral decision-making, whether we know what those plans are or not.

    January 8, 2011 — 15:59
  • DL

    I have an intuition that given a machine like that, I should leave it alone. For if I act and harm results, then I was a positive cause of the harm, and I don’t want to be that.

    I have a similar intuition, but I conclude that it’s an irrational one and I should try to stop the immediate evil that I do know about. Any potential risk is cancelled out by an equal and opposite risk, so they contribute nothing to my decision. But I do think there is a good reason why our instincts might influence us against doing anything: in real life, we never come up against gedanken-experiment psychopaths (indeed, nor even against movie or comic-book psychos!). We do come across, say, doctors in hospitals, and if you found someone in a hospital strapped to machinery that was causing pain, you do have reasons to believe that stopping the machine might lead to worse results — namely, that hospital doctors are trying to help people, even if that sometimes involves causing them pain. So we acquire a very sensible and useful habit of not meddling in affairs that we do not understand.

    January 8, 2011 — 16:08
  • Mike Almeida

    It’s also true that if I do press B0 then I might be preventing a small harm but bringing about a much greater evil. But I think that’s true for most every action we consider
    This conflates two sorts of cases. Go to the garage again.
    1. If I light the match, there is next to no chance that the garage explodes (though it is possible that it does).
    2. If I light the match, I’m not sure what the chances are that the garage blows up.
    If (2) is true, I have no idea how you reach the conclusion that you are permitted to light the match. But it is (2) that ST entails is true.
    Ok, now this claim,
    The proper way to fill out the button pushing analogy would be this: I can push B0 relieving some suffering but, of course, *if* the suffering occurs God very well might have some justifying reason for permitting it that I don’t know about.
    I can’t make this claim coherent on the assumption that it is consequences that matter. Go to the garage. If I think (2) is true, then I’d hardly say “well, of course I can light the match. It produces the small good of allowing me to find the light switch. Of course, were it up to God, he might not light the match in order to prevent an explosion”. That just sounds crazy to me. Maybe I’m missing something. If you think it’s permissible to light the match, then we probably have a basic conflict in intuitions.
    But then there is this,
    It’s also true that if I do press B0 then I might be preventing a small harm but bringing about a much greater evil. But I think that’s true for most every action we consider, and (a) it’s not a special problem for ST
    Indeed, it is a special problem for ST. You are conflating cases like (1) with cases like (2). We are not in general in cases like (2). But we are if ST is true.
    (b) there’s a mutual destruction of ignorances here (like I mentioned before).
    I have no idea what this means. What are destructions of ignorances?
    As Bergmann and Rea observe in their response to your article, representativeness for induction is relative to a property.
    Right, I recall that. What this tells me is that the goods/evils I don’t know about are so great that they would justify even God in permitting evils. Consider all the billions of evils God allows for which there is no imaginable good. Surely, if these unknown goods would justify God in permitting all of these evils, then they would justify me, wouldn’t you say? And if these great goods follow from God’s omitting to prevent an evil E, then surely they would follow from my omitting to do so as well. God’s omission is not special in this way. And if E is the kind of evil that God seems to permit all of the time, then I have really good evidence that E is the kind of evil for which there are these really great goods (unless we’re also going skeptical on induction). So I could have very good reason to believe that preventing E would prevent an extremely great good. I really don’t want to say anything as strong as that, but I certainly could. And, as far as I can see, I can’t be charged with deliberating badly.

    January 8, 2011 — 16:12
  • DL

    These suggestions/hints of that position are just painfully easy to counterexample.

    But the counterexamples have to be right. You say, “C1. If I take skeptical theism seriously, I will not push B0” as a reductio that you should not take ST seriously. But if you take ST seriously you should push B0. ST is really irrelevant to pushing B0 (because it is not a factor in our moral duties and because it cancels itself out).

    C1′. Even if I take skeptical theism seriously, I have better reason to push B0 than not to.
    C2′. I have better reason to relieve suffering than I have to take skeptical theism seriously.

    No; from P1) Even if I take Pythagoras’s theorem seriously, I have better reason to push B0 than not to; it does not follow that: P2) I have better reason to relieve suffering than I have to take Pythagoras’s theorem seriously. They are separate issues, and in fact I have good reasons to take both of them seriously.
    C1”. Even if I don’t take ST seriously, I have better reason to push B0 than not.
    From C1′ and C1”, it follows that you don’t have reason for or against ST… given B0. Which is correct, B0 (“you should push the button”) is true either way. That is no reason to doubt ST, which instead follows from G1 (God would not allow pointless suffering). I guess it depends on how you mean “take seriously”: If you mean “ST is not true”, then that doesn’t follow. If you mean “ST should not be taken as a meaningful factor in my moral deliberations”, then yes, that’s the point.

    January 8, 2011 — 17:47
  • DL

    1. If I light the match, there is next to no chance that the garage explodes (though it is possible that it does).
    2. If I light the match, I’m not sure what the chances are that the garage blows up.
    But it is (2) that ST entails is true. […] We are not in general in cases like (2). But we are if ST is true.

    You’re ignoring half the story. If ST gets us cases like (2), then along with that comes:
    (2′) If I don’t light the match, I’m not sure what the chances are that the garage blows up.
    If (2) means it’s impermissible to light the match, then (2′) means it’s obligatory to light the match. Since that’s a contradiction, neither conclusion can follow. We’re left totally in the dark (so to speak). If we are to decide whether to light the match or not, then we have to flip a coin or else bring in some other premises. ST can’t get us anywhere one way or the other. (That’s what Ted means by “destruction of ignorances”. The equal and opposite skepticisms are useless in moral deliberation.)
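    To put the cancellation in toy expected-utility terms (my illustration only; KNOWN_GOOD, U_UNKNOWN, and the probabilities are invented placeholders): the inscrutable outcome attaches, for all we know, with the same unknown probability to acting as to refraining, so it drops out of the comparison and only the known good is left to settle the choice.

    ```python
    # A toy model of the "equal and opposite skepticisms" point.
    KNOWN_GOOD = 1.0    # e.g. finding the light switch / stopping the known evil
    U_UNKNOWN = -1e9    # the inscrutable outcome (even its sign is unknown)

    def eu_gap(p_act, p_refrain):
        """EU(act) - EU(refrain): the known good plus the unknown-outcome term."""
        return KNOWN_GOOD + (p_act - p_refrain) * U_UNKNOWN

    # ST gives no reason to think the inscrutable outcome is likelier given
    # acting than given refraining; under that symmetry the unknown term
    # vanishes and the known good decides:
    print(eu_gap(0.5, 0.5))   # 1.0 -> act on the known reason
    ```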

    Surely, if these unknown goods would justify God in permitting all of these evils, then they would justify me, wouldn’t you say?

    I would never say that. I may drive my car whenever and wherever I want. You may not drive my car whenever or wherever you want. And I’m not even God. In fact, I wouldn’t say God has any moral responsibilities at all (though being loving, He freely chooses to provide benevolently for all creation), so even in cases where God would act the same way that morality dictates I should act, it would not be for the same reasons.
    But let’s suppose there is some situation where God and I are in similar positions regarding some impending evil. The possible choices are to stop the evil, or let it proceed and then undo it, or let it proceed and draw some (greater?) good out of it. Though the same categories of choice may apply to God and to me, God has far more options in how to implement his choice. I can try to stop the defective anvil from falling on your head; God can do that, or let it start falling and then miraculously annihilate it in mid-air, or let it crush you and then bring you back to life, or let it crush you and give you a greater reward in heaven to compensate. The fact that we don’t (immediately and obviously) see how God responds to some particular evil merely suggests that He is responding in some way that we could not.
    And that’s not all: consider my point above that God’s providence does not plan prior to or independently of our actions. Suppose that either I save you from the anvil and we become friends, or else I let you die. Even though saving you was the moral thing for me to have done, we cannot conclude that God should have saved you also. For if He had, we would have still gone on to become friends, but by befriending an evil person like me who lets people die from defective anvils, you would become corrupted and wicked as well. In other words, we cannot expect God to act the same as I should act, because God is facing a different situation: one that includes my very decision.
    Therefore, God’s omission (or perceived omission, or lack thereof) is indeed special. It is in fact radically different from mine. It is different in its antecedent moral obligations. It is different in the situation that each of us faces. It is different in the powers each of us can bring to bear. It is different in the understanding each of us has. And each of these differences is in fact infinite.

    January 8, 2011 — 19:04
  • John Alexander

    Mike
    “3. What should I do?
    Ans: What you should do depends on what you can reasonably anticipate to be the consequences of your action. But you simply cannot reasonably anticipate that the consequences will be good or that they will be terrible.”
    I can anticipate that the consequences will be good or evil; I cannot anticipate which they will be. I can reasonably anticipate that if I save the baby or push B0 some good will occur: the baby’s life will be saved. I do not know if the baby will grow up to be a mass murderer or a saint. It seems that we should not worry about remoter effects that cannot reasonably be predicted to occur, since which outcome will occur cannot reasonably be anticipated when I decide to save the baby, which I know is a good.
    Back to unimaginable goods and evils for a second. What do you think an ST would say to the idea that it is possible that we have misidentified some goods as evils and some evils as goods? Wouldn’t the ST have to be committed to the view that we cannot be sure that this has not happened? It seems consistent with ST1-ST3. What I find interesting here, if I understand the position correctly, is that the ST cannot argue for a characteristic common to all goods and another characteristic common to all evils. If they did, then they could not assert that ST1-ST3 pose any serious problem.

    January 8, 2011 — 21:50
  • David Warwick

    “Rather, our actions screw things up, so God has to step in after the fact and fix things up.”
    Could you give, say, three worked examples of where that happened? Where we see humans screwing something up and God stepping in after the fact to fix it up? As we’re living and making our decisions now, if you could make it an example from something that happened recently, call that the last twenty years, that would be great.

    January 9, 2011 — 8:01
  • Mike Almeida

    DL, you seem to be under the assumption that my post was directed to your comments. It wasn’t. I’m responding to T. Poston.

    January 9, 2011 — 8:58
  • David Warwick

    “I can anticipate that the consequences will be good or evil, I cannot anticipate which they will be.”
    OK … here’s the problem I have.
    If you saw a baby boy drowning in a pond, knowing you and only you could save him, and that you could do so quickly and at no risk to yourself, do you really stop yourself and think ‘you know, this kid could grow up to be Hitler. I wonder what God would have me do. If I do nothing here, I’ve done nothing wrong’?
    And then the supplementary question – if you don’t, should you / do you think God would want you to?
    You can’t know that the baby will grow up to be Hitler. You will not be able to uncover new information to allow you to determine that in the time available.

    January 9, 2011 — 9:51
  • John Alexander

    David
    I would not ask myself those questions (at least not seriously ask them). I do not think they are relevant questions to ask, given our epistemic practices in reasoning about moral issues. Furthermore, if the baby grew up to be another Hitler I would be saddened by the fact that another Hitler came to be, but I would not think that I am blameworthy for him becoming so simply because I saved him as a baby, any more than I can be praised if he becomes a saint.

    January 9, 2011 — 10:34
  • Ted Poston

    Mike, I think we’ve made some good progress here. So you write:
    “Surely, if these unknown goods would justify God in permitting all of these evils, then they would justify me, wouldn’t you say? And if these great goods follow from God’s omitting to prevent an evil E, then surely they would follow from my omitting to do so as well. God’s omission is not special in this way. And if E is the kind of evil that God seems to permit all of the time, then I have really good evidence that E is the kind of evil for which there are these really great goods (unless we’re also going skeptical on induction). So I could have very good reason to believe that preventing E would prevent an extremely great good. I really don’t want to say anything as strong as that, but I certainly could. And, as far as I can see, I can’t be charged with deliberating badly.”
    My opening analogy was intended to challenge the assumption that permissions are not agent relative. As I see things: if some great unknown good justifies God in permitting inscrutable evils then this is something that doesn’t apply to my moral deliberation. (Though I can see why someone might think this.) Suppose there’s a Laplacean deity that’s a pure utility maximizer. We know this being exists and we know that this being refrains from preventing a certain type of easily preventable evil. I can see how someone can get themselves in the frame of mind of thinking: well, if the Laplacean deity isn’t stopping this, the utility of not preventing must outweigh the utility of preventing, so I shouldn’t prevent this either. I think that’s a non sequitur, though, because the utilities might be set up this way: the utility for *the Laplacean deity* of not preventing outweighs the utility of *his* preventing. That doesn’t give us a reason not to prevent, even in this artificial setup. Now I think the ST’s position is even better here. A perfect being might very well have a different moral deliberative framework than a Laplacean deity: we just don’t know. Perhaps what we disagree about is the specialness of moral deliberative settings. I took the opening analogy to suggest that.
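    One way to display the agent-relativity point is with a toy utility table (purely illustrative; the numbers and the little best_option function are inventions for the example, not anything anyone in the thread commits to): index the value of preventing or permitting to the agent, and the deity’s best option need not be mine.

    ```python
    # A toy model of agent-relative utility. All numbers are hypothetical.
    utility = {
        ("deity", "prevent"): 5,    # prevention pre-empts some deity-specific good
        ("deity", "permit"): 10,    # permitting is best *for the deity*
        ("human", "prevent"): 10,   # preventing is the best *I* can do
        ("human", "permit"): 0,     # my omission gains nothing
    }

    def best_option(agent):
        """The act that maximizes this agent's (agent-relative) utility."""
        return max(("prevent", "permit"), key=lambda act: utility[(agent, act)])

    print(best_option("deity"))   # 'permit'  -> the deity's non-intervention stands
    print(best_option("human"))   # 'prevent' -> my obligation stands regardless
    ```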
    BTW–small point–the ‘mutual destruction of ignorances’ argument is an application of an argument Hume makes in the Treatise. Hume argues that reason destroys itself because for every pro consideration in favor of reason, one can describe a con consideration. So the lists of pros and cons are equal in number and thus cancel each other out. I’ve basically applied that strategy to argue that our ignorances shouldn’t enter into moral deliberation.

    January 10, 2011 — 8:47
  • David Warwick

    “I do not think they are relevant questions to ask given our epistemic practices regarding our reasoning regarding moral issues.”
    No.
    I don’t think they are, I don’t think they should be. I’m not even sure they could be, not without complete moral paralysis.
    I think there’s a real danger when we discuss this sort of thing that we disappear off into a parallel universe. I don’t think, when people make the decision, they second guess God as part of that process. Afterwards, they may wonder how it fits into their particular world view, I suppose, but any discussion of what God wants us to do, I think, has to account for the fact that all God can do in this instance is get in the way of the right decision.

    January 10, 2011 — 12:48
  • Mike Almeida

    I think that’s a non sequitur, though, because the utilities might be set up this way: the utility for *the Laplacean deity* of not preventing outweighs the utility of *his* preventing. That doesn’t give us a reason not to prevent, even in this artificial setup
    What is this relativity of utility? Utility is associated with actions, not with agents. A child is drowning. God lets it happen. In the closest world to ours (assume it’s very close) I perform the action that God did not perform. The utility of my action just is the utility of the action God failed to perform. Do actions have different consequences when God does them? Are there agent-relative values? If so, how would they help? Surely when I have to decide whether to prevent some evil–and thereby interrupt God’s fulfillment of his agent-relative obligation (surely there are such cases)–I should defer to God. In any case, I’m permitted to let God fulfill his obligation, to let the child drown. I perhaps don’t have an obligation to do so, in cases of moral conflict, but surely I’m permitted to do so. And surely there are such cases of moral conflict once we concede agent-relative values (it’s agent relativity that generates conflict, esp. PD’s). But then we have the same problem, without moral paralysis.

    January 10, 2011 — 17:17
  • David Warwick

    “Do actions have different consequences when God does them?”
    Well … yes. There would be at least slightly different consequences if Person A saved the baby or Person B did. If we all understood God to have saved the baby, there would be consequences, for the conclusion of Richard Dawkins’ next book, if nothing else.
    Does it have different costs? Well, God presumably wouldn’t save the baby in the same way a person would.
    I think you’re asking ‘do they have the same “moral value”‘? The life of the baby is presumably ‘worth’ as much, either way.
    The key question, I think, is this: given that a human could easily save this baby (who we’ve left splashing in this pond for days now), we conclude it would be wrong for us not to; given that God could easily save the baby – is it *as wrong* of God not to?
    Instinctively: it’s *at least* as wrong for God not to.
    Morality can’t work like carbon offsetting. If I saved three babies that morning, I can’t pass this baby and go ‘hey, I’m still in credit, if I let another two babies die, I’ll still be breaking even’. Likewise, all the other goods God is [meant to be] doing don’t somehow give him leave to let a baby die, if he can prevent it.
    Unless a baby’s life is worth less to God, in either absolute or relative terms, than to us. Again, this is … rather the opposite of what modern religions tend to tell us.
    OK, thought experiment:
    There are two ponds, both prone to having babies fall in them, where they would quickly drown without help.
    Pond 1 is in a world where the status of God’s existence is unknown or unknowable, a universe that might be atheistic. This pond has an automated system installed that scoops out any drowning babies it detects. It is 99% reliable, and its processor is no more sophisticated than a smoke detector’s. It doesn’t ‘understand’ that it’s rescuing babies, any more than a toaster ‘understands’ it’s toasting bread.
    Pond 2 lacks this machine, but is in a world where God definitely exists and is omnipotent, omniscient and omnibenevolent and has a special place in His heart for babies. A world where all people pray, and everyone is devout and has welcomed the transformative effect of the one true God’s word into their lives, and all babies are quickly baptized.
    Accidents happen. Given that you’d like the baby to live, would you rather a baby fell into Pond 1 or Pond 2?
    The answer is ‘Pond 1, obviously’. How reliable does the machine have to be before you change that decision? 50%? 10%? 0.01%?
    Does God save more babies from Pond 1 or Pond 2? If there is a difference, are these numbers reflective of human attitudes to him on the different worlds?
    If the machine saves 99 out of 100 babies, it’s 99% reliable. Does the tally of saved babies from Pond 2 equate to God’s reliability?
    If the machine is ‘more reliable’ than God, is the machine more ‘good’ than God?

    January 10, 2011 — 19:22
  • Mike Almeida

    Ted,
    Tell me whether I’m tracking your strategy here. What you want to do is distinguish the obligations of God to prevent evil from the obligations of the rest of us to prevent evil. You want to do that in such a way that, possibly, it is permissible for God to allow evil E in circumstances C and it is not permissible for the rest of us to allow evil E in C.
    Now since we’re in the same circumstances, C, we can expect the same consequences for our actions. So, to keep this consistent, we have to appeal to some agent-relative view of value. I confess to having no clear idea about what ‘values-for’ might be, if there are also objective values. But let’s suppose you can make all of this coherent.
    A finite agent S in circumstances C now has to deliberate about whether to prevent evil E or not. Let C be a situation in which a child is drowning, and if S does not save the child, no one else will. He’s the only person present. S now reasons as follows. If the child drowns, then there must be some great good-for God that results. After all, God would not let that happen without some great value-for. All good, I’m assuming, so far.
    S recognizes that he’s in a moral conflict. Either S fulfills his moral obligation or God fulfills his, but not both. This is a familiar consequence of agent-relativity. Does S’s obligation ‘outweigh’ God’s, or does God’s outweigh S’s? Even if we assume value incommensurability (and now the number of moral assumptions needed to rescue the skeptical theist is growing significantly), there will still be lots of cases where value comparison is warranted. Here is an implausible solution: every time S is in a position to prevent an evil that otherwise would not be prevented, the moral conflict resolves in just the right way: S’s obligation is what should be discharged, not God’s. That’s a bit hard to believe.
    The point then is that, even under the assumption of agent-relative obligations, there will be cases of moral conflict that result in either (i) moral paralysis or (ii) moral dilemmas. Consider (i): I do not know whose obligation is more stringent, God’s or mine. This is an important point of deliberation even under the assumption of agent-relativity. Consider typical cases of interpersonal moral conflict between, say, Smith’s obligation to aid his family member with medicine M and Jones’s obligation to aid his family member with M. It’s really hard to know what to do. Moral paralysis results; I don’t know how to resolve this conflict. Consider (ii): there is no answer to the question of whose moral obligation is more stringent. God has his obligation and I have mine. But then there will be cases in which, if I fulfill my moral obligation, then God cannot fulfil his (impossible! God cannot violate an obligation). So we cannot have an irresolvable conflict of obligations.

    January 11, 2011 — 8:45
  • Ted Poston

    That’s close. I’m not exactly sure what you mean by ‘in the same circumstances C’, and I’m not too confident about agent-relative ‘values for’. I don’t want to violate the supervenience of the normative on the non-normative. God’s moral deliberative setting is, for all we know, different from ours. I want to say that in the drowning case, we’re obligated to save the child even though God may be under no such obligation. Part of the subtlety here is that I’m doubtful that there’s one act that both God and we can perform: viz., save the child. I’m inclined to think that we should individuate the acts differently: there’s the act of my saving the child and the act of God saving the child. One of the thoughts in the background is that the utility of *God* saving the child might very well differ from the utility of *my* saving the child. (Perhaps this difference is better captured using expected utility, given the difference in moral deliberative setting; I’ll have to think about the best way to capture the thought I’m after.) Here’s another assumption in the background: I can’t bring it about that God fulfills his obligations. So when I’m in the case of moral conflict you give above, I don’t think this is a genuine conflict. I’m obligated to save the child. If I fail and the child dies, then there might be some great good that justifies God in permitting the drowning. But supposing there is that great good doesn’t permit my doing nothing. It’s a great good that concerns God’s obligations and permissions, not mine.

    January 11, 2011 — 9:47
  • David Warwick

    “It’s a great good that concerns God’s obligations and permissions, not mine.”
    Yet not so great that a man passing a duckpond can’t subvert it?
    That man is clearly ‘permitted’ to do nothing. Yet God is ‘obligated’ not to?
    So … God lacks the options a man has, and, even if he didn’t, lacks the man’s ability to save the child? And, furthermore, would be obligated not to, even if he could?

    January 11, 2011 — 11:44
  • Mike Almeida

    God’s moral deliberative setting is, for all we know, different from ours. I want to say that in the drowning case, we’re obligated to save the child even though God may be under no such obligation
    This is the line that our intuitions about commonsense morality do not apply to God. I’m flatly unwilling to abandon commonsense moral intuitions about what (we and) God should and should not prevent in order to manage the evidential argument from evil. It’s a drastic move, it seems to me. But I agree that, if you’re willing to go this route, you’ll avoid (though in what I think is an extremely top-heavy way) the evidential problem.

    January 12, 2011 — 9:10
  • Mike Almeida

    So when I’m in the case of moral conflict you give above, I don’t think this is a genuine conflict. I’m obligated to save the child. If I fail and the child dies, then there might be some great good that justifies God in permitting the drowning. But supposing there is that great good doesn’t permit my doing nothing. It’s a great good that concerns God’s obligations and permissions, not mine.
    FWIW, my point was based on your suggestion that morality might be agent-relative in some way. The point is that agent-relativity does not eliminate moral conflict. It does not eliminate the relevance of what you’re required to do from my deliberation concerning what I ought to do. So it won’t eliminate the relevance of what God ought to do from what I ought to do. It would be remarkable if my prevention of some evil or other did not, on some occasion or other, prevent a far more serious good. It would be remarkable if God’s obligations were not, on such an occasion, far more stringent than mine. This is something that I’d need to consider in my deliberations even on the assumption of the agent relativity of obligations. But in this regard, you might look at F. Feldman’s ‘World Utilitarianism’ (for the full story, see his _Doing the Best We Can_) which admits of such interpersonal conflicts within a broadly utilitarian theory.

    January 12, 2011 — 11:12
  • Ted, couldn’t you illustrate the point this way: the fact that Hare’s archangel recognises that some action X should be permitted because of the greater good it brings about does not mean that Hare’s prole has a duty not to prevent X.

    January 15, 2011 — 5:34