Summary of Swinburne’s _Existence of God_ in 1000 words!
June 17, 2011 — 17:19

Author: Trent Dougherty  Category: Existence of God  Comments: 8

Here at the University of Saint Thomas Summer Seminar (what a beautiful campus!), we've just completed our first week, the topic of which was the Fine Tuning argument for God's existence. There were a lot of great presentations and comments pro and con, but I find myself mostly a Swinburne guy here. So I wrote a note to my colleagues here giving a bare-bones summary of his perspective. It is below the fold as a basis of further discussion or just for the record.

I *literally* can't help reflecting back upon this first week of reflection on the fine-tuning argument. (Seriously, I tried not to, and I seem to be failing, since I'm still typing.)
There were a few things I thought bore repeating, even though they were already mentioned (by me or someone else) during the week. They are not original thoughts but, rather, ideas Swinburne has been defending for many years.
You can think of the FTA as either an Inference to the Best Explanation (IBE) or in Bayesian terms (with the prior probability being determined by simplicity considerations and the likelihood Pr(E/T) being determined by explanatory power).
1. The Simplicity (Prior Probability) of Theism
Theism postulates, in its basic ontology, one entity with two properties (maybe one–“intentional power”–maybe three): power and knowledge, held in the simplest way (without limit (apart from logic)).
This is "bare" theism, of course, and it is ontologically sparse, as you can see.
It might be strange that there exists anything at all, but it shouldn't be terribly strange that something exists. The only simpler state of existence would be one entity with one property (or maybe two), also held in the simplest way (with zero limit). You might think being a knower entails a few independent properties. I doubt that, but suppose it does. We are still talking about a single entity with a small cluster of properties (and it might be that personhood is a single natural property, with knowledge, power, and whatever you think those entail as *facets* of that one natural property). So the prior on anything existing at all might be low, but, on the idea that we assign priors on the basis of sparse ontology (which Swinburne argues from cases is what good science does), theism's prior shouldn't be terribly low. It is bad practice to assign it a low probability because it seems vaguely "weird." Philosophers should have made peace with the weird long ago. 🙂
2. The Explanatory Power (Likelihood) of Theism
Being all-knowing, God will know what states of affairs are good (more generally, he'll know the value of all states of affairs). ASSUMPTION: Some states of affairs are objectively better than others (at least along some dimensions of evaluation).
Being all powerful, nothing will deter him from the intrinsically motivating power of the good. (ASSUMPTION: A version of ethical “internalism” according to which for a person to see something as good is to give that person a pro tanto desire for it/pro tanto reason to seek it.)
So a state of affairs has an expectation on theism to the extent that we see value in it. There are many goods and kinds of goods, and they can't all be realized at once, so it would be hard to anticipate in advance which goods God would bring about (though this provides God some reason to bring about all kinds of goods at some point (or always): theistic multiverse).
So to the extent we find value in a state of affairs (or in a state which it entails or makes probable) it is relatively unsurprising given theism. CAVEAT: What about evil? Virtue is perhaps the most valuable thing imaginable, and some of the greatest virtues: forgiveness, empathy, magnanimity, humility, courage, moderation, etc. logically entail some kind of suffering. Heroic virtue entails tragic suffering.
The universe we see, with its beauty and ugliness, its tragic suffering and heroic virtue, is a good (kind of) universe. So it is relatively unsurprising that such a universe exists, given theism.
3. The Complexity of Naturalism
A. Single Universe Naturalism (SNU)
SNU postulates, in its basic ontology, a very, very large but finite number of properties of a finite number of kinds, with properties of highly specific values (vary the language all you want, some of them are going to be highly specific)
B. Multi-Universe Naturalism (MUN)
MUN postulates, in its basic ontology…SNU^n stuff! A universe isn't one thing, it's many things taken together. A multi-verse isn't one thing. It's many things taken together. ASSUMPTION: The prior/intrinsic probability of a hypothesis should drop for every new entity and kind of entity postulated (in the *basic* ontology) and for every finite parameter (or for every property and property exemplification).
4. The Explanatory Power of Naturalism
Pr(E/SNU) = lower than we can really grasp
(i) MUNs actually proposed by scientists
Pr(E/MUN) = varies from very low to not too low
(ii) Really robust (plenitudinous) MUNs
Pr(E/MUN) = 1
Which theory is better off depends on two things: (1) the ratio Pr_INTRINSIC(T):Pr_INTRINSIC(N), and (2) the ratio Pr(E/T):Pr(E/N).
Above, it was argued that BOTH are favorable to theism. That is, theism predicts the data better, and it has a higher intrinsic probability (due to a much, much more sparse basic ontology).
Still, the likelihood ratios, IF the fine-tuning estimates are right, are going to favor theism (T) over SNU by such a large factor that even a ridiculously uncharitable prior on theism is going to be utterly swamped. The differences are so large by some estimates that if you gave theism a prior of 1 in 10^17 (1 over the estimated number of stars in the universe), theism would come out with a probability well over .99. (Playing around in a spreadsheet with this, Ted Poston and I had a hard time believing how confirmatory this argument is; it was really surprising, though it shouldn't have been.)
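The swamping claim is just posterior-odds arithmetic. A minimal sketch follows; the Bayes factor of 10^60 is a made-up stand-in for whatever the fine-tuning estimates actually deliver, not a figure from Swinburne or from our spreadsheet:

```python
from fractions import Fraction

def posterior(prior, bayes_factor):
    """Posterior probability from a prior and a likelihood ratio Pr(E/T):Pr(E/N),
    via posterior odds = prior odds * Bayes factor."""
    prior = Fraction(prior)
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

# A deliberately uncharitable prior of 1 in 10^17, swamped by a huge
# (hypothetical) Bayes factor of 10^60 in favor of theism:
p = posterior(Fraction(1, 10**17), 10**60)
print(float(p))  # indistinguishable from 1.0, i.e. well over .99
```

Multiplying prior odds of roughly 10^-17 by a factor of 10^60 leaves posterior odds of roughly 10^43 to 1, which is why even the uncharitable prior gets swamped.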
MUN is not that much different. Even if you think the likelihood value is greater by three or four orders of magnitude, I think it will be weaker by many more orders of magnitude than that in the prior given that the fundamental ontology includes an actual infinite multitude of kinds and (continuum many) tokens of properties (which I don’t even think is logically possible, but that’s another argument, as is the issue that these entities are all clearly contingent beings).
This is only the barest summary of Swinburne’s _Existence of God_ off the top of my head and un-proof-read (charity!), but I thought it bore writing out in one spot in only about 1000 words.
I’ve posted this at Prosblogion for further discussion or just for the record.
Trent Dougherty
(PhD Rochester)
Department of Philosophy
Baylor University

  • Dustin Crummett

    Still, the likelihood ratios, IF the fine-tuning estimates are right, are going to favor theism (T) over SNU by such a large factor that even a ridiculously uncharitable prior on theism is going to be utterly swamped. The differences are so large by some estimates that if you gave theism a prior of 1 in 10^17 (1 over the estimated number of stars in the universe), theism would come out with a probability well over .99
    What odds are we assigning here to God creating a universe broadly like ours?

    June 20, 2011 — 21:22
  • Jeremy Gwiazda

    The probability of a universe broadly like ours given that God exists, P(e|h&k), is 1/2. Richard Swinburne was also good enough to provide some specific values that, he thinks, could make his argument work at the end of his article ‘Gwiazda on the Bayesian Argument for God’, which may be of interest. Available, I think, here:
    One of the overall concerns I have with this sort of argument is that it does seem that the argument has so much wiggle room as to simply become a mirror that reflects back the assumptions of those investigating the argument. Let’s imagine Joe, who thinks the world the worst of all possible worlds. Well, Joe says, sure, this is a great argument. One personal agent created the world, and this agent was infinite on one property. Evil. What about good? To this creator, evil is the most valuable thing imaginable, and some of the greatest evils: murder, rape, genocide, torture, insanity, neglect… logically entail some kind of joy and love. Heroic evil entails beauty and love. (Bashing an infant to death against a radiator gains evil when it is done by a parent who is supposed to, and perhaps has or does in some sense, love that infant.) Whether or not Joe has an effective reply to ‘the problem of good’, who knows, but is it clearly worse than replies to the problem of evil?
    One can reply that people are properly ranked on power, knowledge, and freedom [not evil]. Or, pure-limitless-intentional-power. And these properties (or property) should be with zero limit. Or something along these lines, which happens in 'How the divine properties fit together: reply to Gwiazda', available, I think, here:
    One of my major concerns remains the following. Who is actually prepared to fill in values of the Bayesian argument for, for example, the crucial values involving God’s existence versus, as one example, the existence of two gods of large finite power (one good, one evil) PRIOR to playing around with the numbers to see how the values turn out? And if the answer is no one, then how much force do these Bayesian arguments really have? Anyone will simply come along, play around with the values, and plug in the values (and subsequent arguments to these values) that he likes.
    [As a quick test, here's one question. The claim is that zero limit is more likely than limited in, e.g., power. So, what are the units? Well, perhaps a being can be some percent of power, ranging from 0% to 100% powerful. Then how much more likely is the prior probability of a being 100% powerful versus a being who could range from 20% to 30% of power? Is anyone going to (does anyone) have a sensible answer to this question prior to hacking around a bit in the theistic context?]

    June 21, 2011 — 21:30
  • Jeremy, thanks for the link to Richard’s reply to you. The exchange is illuminating.
    1. Re: reflecting values. Of course that’s *possible* but I don’t think it’s *probable*. It’s kind of like a skeptical scenario. The question is do YOU(indexed to reader) think the existence of a world where rational moral agents exist and–to various degrees–exemplify moral and intellectual virtues? If you do, then the existence of such is unsurprising given theism, because the hypothesis is that an omniscient being knows what’s good WHATEVER THAT IS.
    2. Re: priors. Assigning numbers can be illuminating for establishing relationships: vary one up and down, and see how much it affects the others. But it's the RATIOs that are all we need. We can do an assessment via qualitative probabilities and get as precise an answer as we need. No matter how I slice it, I get theism many times more probable than naturalism, many orders of magnitude even. But even if it were, say, 4:1, I'd be 80% confident.
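    (The odds-to-probability step at the end is just arithmetic; a one-line sketch, not anything from Swinburne's text:)

```python
def odds_to_prob(for_, against):
    """Convert odds of for_:against into a probability."""
    return for_ / (for_ + against)

print(odds_to_prob(4, 1))  # 0.8, i.e. 80% confidence at 4:1 odds
```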

    June 22, 2011 — 11:28
  • Jeremy Gwiazda

    Thank you for the post Trent – I found it interesting and am enjoying thinking about Swinburne’s argument again. Yeah, the main thing of interest about the exchange, I think, is the numbers Swinburne was kind enough to provide for his argument at the end of that first link. Those, I find, are very helpful for getting a sense of his argument.
    Re your 1st point: I don’t know that I entirely follow. Or, was the first sentence cut short? It seems that Joe has a good response here. ‘The question is do YOU(indexed to reader) think the existence of a world where irrational immoral agents exist and–to various degrees–exemplify immoral and inane virtues? If you do, then the existence of such is unsurprising given Joe-evilbeing-theism [is there a word for this? Must be…], because the hypothesis is that an omniscient being knows what’s evil WHATEVER THAT IS.’ Perhaps more seriously, I think that evil becomes far easier to explain on a hypothesis of limited power, or two warring beings, etc. Then it becomes absolutely crucial how much more likely one being is versus many, or, more worrisome for me, how much more likely a being of zero limitations is versus a being of some limitation. This last point I turn to with the priors.
    Re 2nd point (priors): How are we to understand the possible range of power for agents, and its effect on priors? One way that I suggested is that an agent can have 0 to 100% power. If this is not the way to go, what is? Then, given that theism is a being of 100% power (or, zero limitations), how much more likely is this than a being of, say, 20-30% power? I have no idea how to answer this question. If we simply throw a Lebesgue measure at the thing to get relative ratios, then the being from 0.2-0.3 has a 1/10 shot at it, and a being at 1 (100%) has no chance, so theism isn't doing so well. (In this example, we can say things like, whatever the actual values, a being from 0.2 to 0.3 power is twice as likely to exist (prior-wise) as a being from 0.51 to 0.56.) Now this may all be hand-wavy madness. But that is precisely my problem and question. How exactly am I to understand the claim that zero limitations is simplest, and then how do I translate this simplicity to priors?
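    (To make the uniform-measure worry concrete, here is a small sketch of the comparisons in the parenthetical example; the intervals are the ones above, and the measure is plain Lebesgue measure on [0, 1]:)

```python
from fractions import Fraction

def uniform_measure(a, b):
    """Lebesgue (uniform) measure of the power sub-interval [a, b] of [0, 1]."""
    return Fraction(b) - Fraction(a)

# A being of 0.2-0.3 power gets prior mass 1/10 under the uniform measure...
print(uniform_measure("0.2", "0.3"))    # 1/10
# ...twice the mass of a being of 0.51-0.56 power...
print(uniform_measure("0.51", "0.56"))  # 1/20
# ...while the single point 1 (100% power, zero limitation) gets measure zero.
print(uniform_measure("1", "1"))        # 0
```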
    The above is largely about pinning down theism vs other explanations involving many or finite gods. But it seems, why not just go the route of plenitude? Every possible universe exists in a big old multiverse. Very simple. Zero limitations. I’m drawing a bit of a blank: What’s the standard reply to this move?

    June 23, 2011 — 13:46
  • Jeremy, thanks for your kind words.
    1. I’m an ethical internalist in the sense that the vision of the good is intrinsically motivating to rational agents and if it is not desired it is only because of akrasia or interference. But the latter is impossible because of omnipotence.
    2. I don't know how to make it more intuitive to you that zero limitation is simpler. All I could do would be to point you to cases, as Richard does, where it is the natural assumption of scientists.
    3. I have independent objections to the plenum: it entails an actual infinite multitude/magnitude and it is a contingent being. But it doesn’t strike me as right to say it is of 0 limitation in the right way. Power and knowledge are properties with measures whereas the infinity of the plenum postulates entities. Power and knowledge, as properties, only entail the entity which exemplifies them.

    June 23, 2011 — 14:43
  • Jeremy Gwiazda

    Interesting first and last points. Regarding the middle point, 2, well, might as well keep the links rolling!
    is where I critique Swinburne’s claim that the endpoints (one endpoint being zero limitation) are simplest. I see many problems here, just one of which is: How do we get from scientists’ preference to simplicity? Let’s grant that scientists prefer zero limitation (though I don’t think that this is true). Who cares? People also are better at remembering the first and last items in a list compared to the middle items. This can’t show that the items at the endpoints are simpler (or, are anything). Similarly, scientists’ preferences for endpoints could simply be psychological.
    It does not appear as though this preference for zero limitation has arrived at greater truth (or, if it has, some serious argument is needed to this conclusion, one that involves a serious look at science and the history of science). But the speed of sound and light have large finite limits, most particles have small but >0 masses, etc. It’s then not clear to me, even if scientists have preferences for zero limitation (which again, I think the evidence is mixed on) how this shows simplicity. Why doesn’t it just show preference?
    The other thing to perhaps throw into the mix is to mention that even if zero limitation is simple, and we consider power, knowledge, and goodness, it is not remotely clear how to take these 3 properties with 'zero limitation'. Which I suppose is just to say that the literature going back a few thousand years has a number of proposals. I found an interesting recent proposal in Yujin Nagasawa's 'A New Defense of Anselmian Theism'. (Though I suppose here it is important to remember that the specific goal is to defend Anselmian Theism.)

    June 24, 2011 — 12:20
  • 1. An intuitive way of measuring the simplicity of a system is to look at the length of the shortest sentence exactly describing the system in a language whose names and predicates are non-gerrymandered (they express “natural” entities and properties in the Lewisian sense, not in the “naturalism” sense). The shorter that length, the simpler the system.
    Zero limitation systems tend to be simpler than systems with particular limitations in this sense. A being with unlimited knowledge is a being x such that: (p)(x knows p if p and x believes p only if p). This is a pretty brief description of the being's knowledge. But take a being y with limited knowledge, say one that knows all mathematics and has no beliefs outside mathematics. We describe its knowledge as: (p)(y knows p if (p and about(p,mathematics)) and y believes p only if (p and about(p,mathematics))). More complex.
    2. Likewise, endpoints tend to be simpler using the brevity criterion. If I have an interval J, I can define its left and right endpoints:
    left(J) = min { x : x is greater than or equal to every point y such that y is less than every point of J }
    right(J) = max { x : x is less than or equal to every point y such that y is greater than every point of J }.
    (Or something like that–it’s easy to slip up. Of course what I’m trying to say is: left(J) = inf J, and right(J) = sup J.) These aren’t that simple, but notice that they involve no arithmetical relations other than comparisons.
    But try to define the midpoint of J. It's a lot harder. Two obvious definitions and one slightly less so:
    mid(J) = the x such that distance(x,left(J)) = distance(x,right(J))
    mid(J) = (left(J)+right(J))/2.
    mid(J) = the x such that right({ dist(x,y) : y in J }) is minimal.
    The first two are more than twice as complicated as the definition of left(J), since they each include left(J) and right(J) plus they involve arithmetical or metric relations.
    And other points, like the one-third point, are only going to get more complicated, I think.
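    (To make the contrast concrete, a small sketch, treating the interval as a finite set of sample points: the endpoints need only comparisons, while the midpoint additionally needs addition and division.)

```python
def left(J):
    """left(J) = inf J for a finite sample: comparisons only."""
    lo = J[0]
    for x in J:
        if x < lo:
            lo = x
    return lo

def right(J):
    """right(J) = sup J for a finite sample: comparisons only."""
    hi = J[0]
    for x in J:
        if hi < x:
            hi = x
    return hi

def mid(J):
    """Midpoint: built out of BOTH endpoints PLUS addition and division."""
    return (left(J) + right(J)) / 2

J = [0.25, 0.5, 0.75, 1.0, 0.0]
print(left(J), right(J), mid(J))  # 0.0 1.0 0.5
```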
    3. “Every possible universe exists in a big old multiverse. Very simple. Zero limitations.” Like Trent, I have independent objections. For instance, probability theory seems to break down within this grand multiverse. Every coarse-grained local situation occurs infinitely often, with an infinity beyond cardinality. There will be no good ways of comparing probabilities. Suppose we overcome this, maybe with some intuitive notion of counting that goes beyond cardinalities. Then there is the intuition that surely there are “more” universes that are orderly only in the immediate vicinity of minds and disorderly everywhere else than ones that orderly on a wider scale. But we observe ourselves as in one of the latter universes, which is surprising.
    Also, there is the question of how many copies of each universe there are in the multiverse. Either one or infinity. If infinity, how big an infinity? Any particular infinity will limit things. So, suppose one. Then there is only one copy of each universe. OK, then you have ethical problems. Nothing we do can affect the overall value distribution. Moreover, we get the absurdity that we have moral reason to cause self-harm. For causing self-harm doesn’t affect the overall value distribution, but it ensures that the harm happens to us, rather than to a counterpart in another world. (See the “Identity versus Counterpart Theory” section here.)

    June 29, 2011 — 10:02
  • Jeremy Gwiazda

    In terms of the first 2 points, it is still not clear to me how to go from simplicity to prior probability. In particular, and thinking in terms of basic measures on intervals, it’s odd to assign anything other than 0 probability to a point. But presumably here we have to.
    Is the idea something like the following? The endpoints each have a 10% prior probability, and every other point has 0 prior probability (that is, we split off the endpoints, give those 2 points a positive prior probability, and then for the other points we define probability on intervals).
    Then, is the claim that this move is good for scientific practice? (Where ‘this move’ is giving a huge probabilistic preference to the endpoints.) If so, what is the argument? In particular, if we found a new particle tomorrow that went so fast that it either travelled instantaneously, or at a very large finite speed, I would bet on the latter. And if the counter is that that is because of background knowledge, I’d suggest that the background knowledge suggests that the endpoints should not be given preference. (If we generally find finite values which gives rise to the background knowledge, why should 0 and the infinite, or the endpoints, be given probabilistic preference?)
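    (The mixed prior proposed above can be sketched directly: a 10% atom at each endpoint, with the remaining 80% spread uniformly over the interior. The 80/20 split is just the illustrative figure from the question, not an agreed value:)

```python
from fractions import Fraction

ATOM = Fraction(1, 10)       # 10% point mass at each endpoint (0 and 1)
INTERIOR = 1 - 2 * ATOM      # remaining 80%, uniform on the interior of [0, 1]

def mixed_prior(a, b):
    """Prior probability of power lying in [a, b] under the mixed measure."""
    a, b = Fraction(a), Fraction(b)
    mass = INTERIOR * (b - a)   # continuous part
    if a <= 0 <= b:
        mass += ATOM            # atom at the lower endpoint
    if a <= 1 <= b:
        mass += ATOM            # atom at the upper endpoint
    return mass

print(mixed_prior("1", "1"))      # 1/10: the zero-limitation point alone
print(mixed_prior("0.2", "0.3"))  # 2/25: 80% of the interval's length 1/10
print(mixed_prior("0", "1"))      # 1: the whole measure
```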
    Re: 3 — good points and questions.

    June 30, 2011 — 11:06