Lange on the Natural Necessity of Something
March 1, 2014 — 19:01

Author: Kenny Pearce  Category: Existence of God Prosblogion Reviews  Comments: 0

Marc Lange’s contribution to The Puzzle of Existence begins with this remark:

I read recently about a baby who was trapped during the night of February 26, 2011, in a locked bank vault in Conyers, Georgia. Naturally, I wondered why that had happened (235).

In the article which follows this fantastic opening, Lange appeals to the theory of necessity and laws of nature from his 2009 book, Laws and Lawmakers, to argue that one can explain why there is something rather than nothing only by showing that something exists as a matter of natural necessity (or, in a qualification he makes at 246n11, showing that it is naturally necessary that something has a nonzero probability of existing). Lange begins, therefore, with a destructive line of argument, designed to show that the only candidate answers to the question why there is something rather than nothing are non-causal scientific explanations, then proceeds with the constructive project of showing how, on his theory, such an explanation can be given. It is, I think, to Lange’s credit that the constructive portion of his essay is stronger than the destructive portion; the reverse is (and always has been) more often the case in philosophy.

Lange’s destructive argument can be reconstructed as follows:

  1. Every candidate answer to the question, ‘why is there something rather than nothing?’, must be a scientific explanation (238).
  2. Scientific explanations obey the distinctness principle (236-237).
  3. Any causal explanation of why there is something rather than nothing would violate the distinctness principle (239-240).

  Therefore,

  4. Every candidate answer to the question, ‘why is there something rather than nothing?’, must be a non-causal scientific explanation.

Every premise of this argument is false.

To Lange’s credit, he does recognize that premise 1 is a substantive premise – that is, that not all (good) answers to ‘why?’ questions are scientific explanations. Nevertheless, all he says in defense of premise 1 is this:

I have taken for granted that in asking why there is something rather than nothing, we are demanding a scientific explanation. If an answer to this question does not have to satisfy the usual criteria of adequacy for a scientific explanation … then I do not know what it must do. Of course, not all explanations are scientific explanations; there are explanations in mathematics, moral explanations, legal explanations, and even baseball explanations (e.g., for why a given baserunner is entitled to third base). But none of these kinds of explanations is demanded by the riddle of existence (238).

However, Lange goes on, immediately thereafter, to observe that “Some philosophers who claim to regard the riddle of existence as demanding a scientific explanation may not actually so regard it.” There follows a brief discussion of attempts at axiological explanations (explanations that say that the world exists because it is good for it to exist). Similarly, one might appeal to other kinds of teleological explanations, or the ‘personal explanations’ in which some philosophers believe. Furthermore, at 242n4, Lange discusses David Lewis’s view that the existence of something is metaphysically necessary, and notes that on Lewis’s view of explanation this does not actually explain why there is something rather than nothing. However, Lange rejects Lewis’s view of explanation, and so holds that if Lewis were right about worlds, the existence of something rather than nothing would thereby be explained. Lange seems to think that this would be a scientific explanation, but it sure looks to me like a distinctively metaphysical explanation, different from anything found in natural science. So Lange does not give adequate reason for thinking that answers must take the form of scientific explanations and, indeed, there seems to be reason to suppose just the opposite. (Perhaps, though, an argument could be produced to show that, among the many candidate answers, the scientific explanations are, for whatever reason, more likely to succeed. This kind of argument would not rule the alternative answers out of court as Lange seems to want to do.)

Lange defines the distinctness principle, to which he appeals in premise 2, as follows:

If F suffices (or even helps) to constitute G’s truth, then F is too close to G to help scientifically explain why G obtains (236).

The explanation of the laws of thermodynamics by statistical mechanics is a counterexample to this principle: the obtaining of the microphysical laws, together with the statistical facts about the microstates, constitute the obtaining of the thermodynamic laws and also explain their obtaining.

It seems plausible to me that Lange’s distinctness principle holds for explanations of particular facts, although not for general facts like special science laws. Thus, for instance, plausibly the positions and momenta of the various gas particles in the room do not explain why the air temperature and pressure are as they are. It is unclear, though, on which side of this contrast the fact that there is something rather than nothing belongs.

Premise 3 is false because Lange takes the question to be about “why there exists some contingent thing rather than no such thing” (239). But some necessary thing or things could have caused the existence of contingent things in a non-necessitating manner, such as indeterministic physical causation or libertarian free choice. To cite such a cause would be to give a causal explanation of the existence of something rather than nothing without violating the distinctness principle.

So Lange’s argument that his sort of explanation is the only candidate explanation fails. But, as I said, in this piece Lange does a better job building up than tearing down, so let’s turn to Lange’s positive proposal.

The general idea of Lange’s view is that subjunctive conditionals are to be taken as primitive and the different species of necessity are to be defined in terms of them. Possibility and contingency then get defined in terms of necessity in the usual way, and all naturally (i.e., physically or nomologically) necessary propositions count as laws of nature. What Lange argues is that it may well be the case that it is a law of nature (in his sense) that some particular entity or entities exist, and that if this were the case it would amount to a non-causal scientific explanation of why there is something rather than nothing.

The analysis of necessity in terms of counterfactuals, as it is explained in the essay, goes like this:

Take a set of truths that is “logically closed” (i.e., that includes every logical consequence of its members) and is neither the empty set nor the set of all truths. Call such a set stable exactly when every member p of the set would still have been true had q been the case, for each of the counterfactual suppositions q that is logically consistent with every member of the set. I suggest that p is a natural necessity exactly when p belongs to a “stable” set (245).

As Lange indicates in a footnote, there are some further complications discussed in his book, but the general idea is that for any species of necessity, in order to get a necessarily false consequent on a true counterfactual, you have to start with a necessarily false antecedent. Natural necessity is a species of necessity which is weaker than logical necessity (hence the logical consistency requirement).
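Lange’s definition can be put semi-formally as follows. This is my notation, not Lange’s own symbolism: ‘□→’ is the primitive subjunctive conditional, and Con(q, Γ) abbreviates the claim that q is logically consistent with every member of Γ.

```latex
% Stability (my reconstruction of the definition at 245, not Lange's notation):
% Gamma is a logically closed set of truths, neither empty nor the set of
% all truths, whose members would all survive any consistent supposition.
\[
\mathrm{Stable}(\Gamma) \iff
  \forall p \in \Gamma \;\, \forall q \,
  \big( \mathrm{Con}(q, \Gamma) \rightarrow (q \mathbin{\Box\!\!\rightarrow} p) \big)
\]
% Natural necessity is then membership in some stable set:
\[
\mathrm{NatNec}(p) \iff \exists \Gamma \, \big( \mathrm{Stable}(\Gamma) \wedge p \in \Gamma \big)
\]
```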

From here, the idea is very simple: Newton thought that if absolute space did not exist, the Newtonian laws of motion would not hold. On Lange’s view of laws, if one adds to this the two claims that (a) the Newtonian laws of motion are laws of nature, and (b) the existence of absolute space is logically contingent, then one gets the conclusion that it is a law of nature that absolute space exists. (Newton would not, of course, have called this a law of nature, and it is unclear – to me at least – whether Newton thought absolute space was logically contingent, but this is beside the point.) Lange thinks that, if Newtonian physics were true, then this would constitute a non-causal scientific explanation of why there is something rather than nothing. In fact, Newtonian physics is not true but, Lange thinks, it is nevertheless plausible, perhaps even likely, that an explanation of this general form is the correct explanation of why there is something rather than nothing.

(Cross-posted at blog.kennypearce.net.)

Kotzen on the Improbability of Nothing
February 26, 2014 — 18:36

Author: Kenny Pearce  Category: Existence of God Prosblogion Reviews  Comments: 5

When someone asks ‘why p rather than q?’, it is sometimes a good answer to say, ‘p is far more probable than q.’ When someone asks, ‘why is p more probable than q?’, it is sometimes a good answer to say, ‘there are many more ways for p to be true than for q to be true.’ According to a well-known paper by Peter van Inwagen, the question ‘why is there something rather than nothing?’ can be answered in just this fashion: something is far more probable than nothing, because there are infinitely many ways for there to be something, but there is only one way for there to be nothing. In his contribution to The Puzzle of Existence, Matthew Kotzen argues that this sort of answer is only sometimes a good one, and that we cannot know a priori whether it is a good answer to the question of something rather than nothing.

Kotzen’s general line of response is a standard one: he argues that there are many possible measures, and not all of them assign probability 0 to the empty world. Van Inwagen is perfectly aware of this problem, but argues that a priori considerations allow us to select a natural measure. Kotzen’s strategy is to identify some everyday examples where this pattern of explanation looks good, and some where it looks bad, and show that van Inwagen’s a priori considerations don’t draw the line between good and bad in the right place. Furthermore, he argues (p. 228) that van Inwagen’s considerations may not actually be sufficient to assign unique probabilities in the relevant cases, since it is not always clear what space the measure should be assigned over.

I think Kotzen’s argument against van Inwagen is quite compelling. The best thing about Kotzen’s article, though, is that it does a great job explaining these complex issues at a moderate level of rigor and detail while assuming hardly any background. This would be a great article to assign to undergraduate students.

In the rest of this post, I’m going to do two things. First, I’m going to explain the issue about measures at a much lower level of rigor and detail than Kotzen does, just to make sure we are all up to speed. Second, I am going to raise the question of whether van Inwagen’s argument might have an even bigger problem: whether, instead of too many equally eligible measures, there might be none.

The simplest, most familiar, cases where the probabilistic pattern of explanation with which we are concerned works are finite and discrete. This is the case, for instance, with dice rolls or coin flips. The coin either comes up heads or tails; each die shows one of its six faces. So then, as one learns in one’s very first introduction to probabilities, in the case of the dice roll, the probability of any particular proposition about that dice roll is the number of cases in which the proposition is true divided by the total number of possible cases (for two six-sided dice, 36). In real life, by dividing the outcomes into discrete cases like this, we care about certain factors (which face is up) and not about others (e.g., where on the table the dice land). This division into discrete cases is called a partition. The reason the probabilities are so simple in the dice case, with each case in the partition being equally likely, is because we chose a good partition. (Well, actually, it’s because a fair die is defined as one that makes each of those outcomes equally probable, but let’s ignore that for now and imagine that fair dice just occur in nature rather than being made by humans on purpose.) Suppose that, on one of our dice, the face with six dots is painted red rather than white and, for some reason, what we really care about is whether the red face is up. Well then we might partition the outcomes accordingly, into the red outcome and the non-red outcomes. But these two cases (red and non-red) are not equally probable.
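The contrast between the two partitions can be checked by brute force. This little sketch is mine, not Kotzen’s: it enumerates the 36 equiprobable outcomes for two fair dice and computes probabilities by counting favorable cases.

```python
from fractions import Fraction
from itertools import product

# All 36 equiprobable outcomes for a roll of two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event: favorable cases divided by total cases."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

# Under the 'good' partition, each face-pair is equally likely.
print(prob(lambda o: o == (3, 4)))   # 1/36

# The red-face partition: suppose the six-face of the first die is red.
# 'Red up' and 'red not up' still partition the outcomes,
# but the two cells are not equally probable.
print(prob(lambda o: o[0] == 6))     # 1/6
print(prob(lambda o: o[0] != 6))     # 5/6
```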

Sometimes the thing we care about is not a discrete case like this, but a fundamentally continuous case like (in a standard example) where on a dartboard a perfectly thin dart lands. A measure is basically the equivalent, in this continuous case, of the partition in the discrete case. For the dart board, there is a natural measure, one that ‘just makes sense’, and this is provided by our ordinary spatial concepts. So if, for instance, the bullseye takes up 1/10 of the area of the dartboard then, if the dart is thrown randomly, it will have a 1/10 chance of landing there. (Again, this is really just what it means for the dart to be thrown randomly.) This isn’t the only possible measure, but it’s the one that, in some sense, ‘just makes sense.’ But the question is, is there a natural measure on the space of possible worlds? That is, is there some ‘correct’ or ‘sensible’ or ‘natural’ way of saying how ‘far apart’ two possible worlds are? This is far from clear. The Lewis-Stalnaker semantics for counterfactuals supposes that we can talk about some worlds being ‘closer together’ than others, but this is not enough to define a measure. Furthermore, Lewis, at least, thinks that the closeness of worlds might change based on contextual factors (which respects of similarity we most care about), so it seems like there’s a plurality of measures there. Perhaps one could claim that all of these reasonably natural measures agree in assigning nothing probability 0, but that’s not clear either. For instance, Leibniz seems to think that one reason why the existence of something cries out for explanation is that “a nothing is simpler and easier than a something” (“Principles of Nature and Grace,” tr. Woolhouse and Francks, sect. 7). So maybe we should adopt a measure in which worlds get lower probability the more complicated they are. (I think Swinburne might also have a view like this.) On this kind of view, the empty world (if there is such a world) will be the most probable world. 
So the plurality of measures seems like a problem.
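To fix ideas about what the ‘natural’ measure is doing in the dartboard case, here is a quick simulation (my sketch, not from the book): under the area measure, a dart thrown uniformly at random hits a region with probability equal to that region’s fraction of the board’s area.

```python
import random

random.seed(0)

def throw():
    """A uniformly random point on a unit-radius board (rejection sampling)."""
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return x, y

# A bullseye occupying 1/10 of the board's area has radius sqrt(1/10),
# i.e., a point is inside it exactly when x^2 + y^2 <= 0.1.
n = 100_000
hits = 0
for _ in range(n):
    x, y = throw()
    if x * x + y * y <= 0.1:
        hits += 1

print(hits / n)  # close to 0.1, the bullseye's share of the area
```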

It’s not the only problem, though. Kotzen notes that “the Lebesgue measure can be defined only in spaces that can be represented as Euclidean n-dimensional real-valued spaces” (222). (The Lebesgue measure is the standard measure used, for instance, in the dartboard case: the bigger the space a region takes up, the bigger its measure.) But the space of possible worlds is not like this! David Lewis has argued that the cardinality of the space of possible worlds must be greater than the cardinality of the continuum (Plurality of Worlds, 118). The reason is relatively simple: suppose that it is possible that there should be a two-dimensional Euclidean space in which every point is either occupied or unoccupied. The set of possible patterns of occupied and unoccupied points in such a space (each representing a distinct possibility) will be larger than the continuum. But if this is right, then there can be no Lebesgue measure on the possible worlds because there are too many worlds. Even if this exact class of worlds is not really possible (for reasons such as the considerations about space in modern physics I raised last time), it seems likely that there are too many worlds for the space of possible worlds to have a Lebesgue measure. Yet Kotzen attributes to van Inwagen the view “that we ought to associate a proposition’s probability with its Lebesgue measure in the relevant space” (227).
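Lewis’s cardinality point can be reconstructed in a couple of lines (my sketch of the argument, not Lewis’s own presentation):

```latex
% A two-dimensional Euclidean space has continuum-many points:
\[ |\mathbb{R}^2| = \mathfrak{c} \]
% Each occupancy pattern is a subset of that space, and by Cantor's
% theorem there are strictly more subsets than points:
\[ |\mathcal{P}(\mathbb{R}^2)| = 2^{\mathfrak{c}} > \mathfrak{c} \]
% If each pattern is a distinct possible world, the worlds outnumber the
% continuum, whereas the Lebesgue measure is defined only on Euclidean
% n-dimensional real-valued spaces, which have at most continuum size.
```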

Maybe van Inwagen is not in quite this much trouble. He doesn’t actually seem to say anything about a Lebesgue measure in the paper, so I’m not sure exactly why Kotzen thinks van Inwagen is committed to this. In fact, in the paper Kotzen is discussing, van Inwagen cites his earlier discussion in Daniel Howard-Snyder’s collection, The Evidential Argument from Evil. In endnote 3 (pp. 239-240) of that article, van Inwagen says “the notion of the measure of a set of worlds gets most of such content as it has from the intuitive notion of the proportion of logical space that a set of worlds occupies.” I find it a little bit ironic that van Inwagen says this, because he’s always denying that he has intuitions about things! I don’t have intuitions about proportions of logical space. In any event, it seems to me that van Inwagen is here disavowing the project of giving a well-defined measure in the mathematician’s sense.

Suppose one did want to identify a natural measure that was well-defined in the mathematician’s sense. I’m not sure about all the technicalities of trying to do this for sets of larger-than-continuum cardinality, or whether it can be done at all. Even if it can, though, it’s going to be hard to say that one measure is more intuitive or natural than another in such an exotic realm. Things might be even worse: Pruss thinks (PSR, p. 100) that, for any cardinality k, it is possible that there be k many photons. If this is true, then there is a proper class of possible worlds, and one certainly can’t define a measure on a proper class. (This is another thing I don’t think I have intuitions about.)

All this to say: anyone who wants to assign a priori probabilities to all propositions (as van Inwagen does) is fighting an uphill battle, but if such probabilities cannot be assigned, then it does not seem that the probabilistic pattern of explanation can be used to tell us why there is something rather than nothing.

(Cross-posted at blog.kennypearce.net)

Jacob Ross on the PSR
December 20, 2013 — 10:47

Author: Kenny Pearce  Category: Existence of God Prosblogion Reviews  Comments: 3

Leibniz famously claimed that, once we have endorsed the Principle of Sufficient Reason, “the first questions we will be entitled to put will be – Why does something exist rather than nothing?” The answer to this question, he further claimed, “must needs be outside the sequence of contingent things and must be in a substance which is the cause of this sequence, or which is a necessary being, bearing in itself the reason for its own existence, otherwise we should not yet have a sufficient reason with which to stop” (“Principles of Nature and Grace,” sects. 7-8, tr. Latta). In his contribution to The Puzzle of Existence, Jacob Ross argues, on the contrary, that the PSR entails that one never reaches “a reason with which to stop.”

Consider the following modal collapse argument, which is somewhat simpler than the version Ross discusses:

  1. For every true contingent proposition, there is an explanation of why that proposition is true. (Assumption for reductio)
  2. Any conjunction of true contingent propositions is itself a true contingent proposition.
  3. The truth of a conjunctive proposition cannot be explained by one of its conjuncts.
  4. There is a conjunction of all true contingent propositions.
  5. A true contingent proposition can only ever be explained by another true contingent proposition.
  Therefore,

  6. The conjunction of all true contingent propositions is an unexplained true contingent proposition, contrary to (1).

Now Ross’s strategy is to deny (4). This is a well-known move in the dialectic around the argument from contingency for the existence of a necessary being, which has its roots in Kant. But Ross has interesting things to say about two points: first, what reason can be given for denying (4)? Second, what are the metaphysical consequences of accepting some version of the PSR (such as (1) of the argument) while denying (4)?

On the first point, I’m afraid Ross is a little unclear. He starts by arguing that, since explanation is a hyperintensional notion, a fine-grained (hyperintensional) conception of propositions is needed here. So far so good. But here’s the part I’m puzzled by:

suppose we adopt [a fine-grained] account [of propositions] and regard propositions as consisting in, or at least representable by, an ordered series of constituents corresponding to the constituents of the sentences by which they would be expressed in a canonical language. On such an account, for every proposition, there will be a corresponding set of the constituents of this proposition. And a conjunction will have its conjuncts as constituents. And so it follows that for every proposition, there will be a set that includes all of its conjuncts (p. 84).

Following this, Ross adverts to an argument of Pruss’s for the claim that the collection of all propositions is a proper class, and shows how to excise a certain controversial assumption (that for any cardinality k, possibly there are exactly k many concrete objects) from that argument. From this argument, he concludes that there is no ‘Grand Conjunction,’ i.e. that there is no such proposition as the conjunction of all contingent truths.

Here’s why I’m puzzled. Ross’s conclusion follows directly from his conception of propositions. Indeed, it follows directly from Ross’s conception of propositions that propositions have at most countably many constituents, for an ordered series (at least in the standard mathematical sense) can have at most countably many elements. So the first puzzle is why Ross presents this argument for the existence of a proper class of contingent propositions without noting that all he actually needs is uncountably many of them. The second puzzle is that Ross gives no argument in favor of his particular notion of a proposition, and in his exposition he says things like “suppose we adopt” and so forth. Then at the end of the section, he concludes that there is no Grand Conjunction. In other words, it appears that Ross begs the question: he asks us to grant a certain supposition from which his conclusion trivially follows, namely, that the existence of a conjunctive proposition requires the existence of the ordered series of its conjuncts.

I think the best response to be made on Ross’s behalf is this. He does provide arguments (compelling ones, even) in favor of adopting some hyperintensional conception of propositions. Now, there simply aren’t a lot of well-developed hyperintensional theories of propositions on the market. So the opponent of Ross’s argument needs to articulate some alternative hyperintensional conception of propositions if she wants to hold onto the existence of the Grand Conjunction. This seems fair enough to me, but then I was already somewhat skeptical of infinite propositions.

After arguing against the Grand Conjunction, Ross considers some other principles that might be thought to create problems, such as the modal collapse problem, for the PSR. These principles are all designed to say that some basic fact about contingent beings – e.g., that there are some of them – can only be explained if there is a necessary being. Ross rejects the Hume-Edwards principle and endorses the following claim:

(K4) For any set S of beings, the proposition that there exists at least one member of S can be explained only by a proposition that appeals to the existence of beings that are not in S (p. 89).

Ross notes that, since there is no set of all beings (sets are beings, and there is no set of all sets), (K4) cannot be made to yield the contradiction, there is a being that is not a being. On the other hand, though, it is extremely plausible to suppose that there is a set of all concrete contingent beings and, by (K4) this set must be explained by some non-member of it. This might sound at first like it would be nice for the theist; unfortunately, if there is a set of all concrete contingent beings and God exists, then surely there is a union of the set of all contingent concrete beings with the singleton {God}. Bad news.

If (K4) is restricted to sets of contingent beings then, together with the PSR and the claim that there is a set of all contingent concrete beings, it entails the existence of a necessary being; if it’s not restricted to sets of contingent beings, then it requires a proper class of beings standing in explanatory relations to one another (no regress-stopper can be introduced). Ross holds that, because of skepticism about the possibility of necessary things explaining contingent things, the defender of the PSR has cause to be skeptical of the claim that there is a set of all contingent concrete beings (p. 93). Thus, Ross thinks, the defender of the PSR should grasp the second horn and believe in a proper class of contingent concrete beings and an infinite regress of explanatory relations.

Much in Ross’s essay is clearly turning on the assumption that the existence of contingent beings cannot be explained in terms of a necessary being. This is an assumption most defenders of the PSR have rejected. However, Ross provides a quite interesting exploration of the kind of view one might be driven to if one endorsed this assumption while also endorsing the PSR, and he shows that such a view need not be self-contradictory, at least in any obvious way.

(Cross-posted at blog.kennypearce.net)

Kleinschmidt on the Principle of Sufficient Reason
December 15, 2013 — 17:19

Author: Kenny Pearce  Category: Existence of God Prosblogion Reviews  Comments: 4

Philosophers have perhaps more often assumed the Principle of Sufficient Reason than argued for it. Furthermore, this assumption has, in recent years, fallen out of favor due to the PSR’s allegedly unacceptable consequences. Recently, however, the PSR has been defended by Alexander Pruss and Michael Della Rocca. Pruss and Della Rocca both argue that (a version of) the PSR is a presupposition of reason. Pruss defends a version of the PSR restricted to contingent truths and consistent with libertarian free will and indeterminism in physics as a presupposition of our scientific and ‘commonsense’ explanatory practices. Della Rocca argues that the metaphysicians who deny the PSR implicitly make use of an unrestricted PSR, applying even to necessary truths, in other metaphysical arguments. Both arguments depend crucially on the claim that there is no weaker principle which is non-ad-hoc and justifies the relevant practices. In her contribution to The Puzzle of Existence, Shieva Kleinschmidt argues that both defenses fail.

Kleinschmidt’s general strategy is to outline contrasting cases – those in which admitting in-principle inexplicability seems to be an option, and those in which it does not – and argue that a non-ad-hoc descriptive account of this distinction can indeed be given.

Kleinschmidt’s primary focus is on Della Rocca but, compared to Pruss, Della Rocca gives weaker support to a stronger conclusion. Della Rocca argues that if the unrestricted PSR is not true, then we cannot justifiably rule out certain metaphysical positions which we find intuitively implausible. However, not everyone finds the ‘brutal’ or ‘primitivist’ positions unpalatable in the way Della Rocca supposes (see Markosian). Furthermore, it would not be the end of the world if we were forced to conclude that many of the epistemic practices of analytic metaphysicians are in fact unjustified. Pruss, on the other hand, argues from commonsense and scientific explanatory practices. He asks, for instance, why it is that, when investigating a plane crash, no one takes seriously the hypothesis that the plane crashed for no reason at all. A position that undermined this kind of ordinary, everyday explanatory practice would be in much bigger trouble than a position that said analytic metaphysicians were out to lunch.

Now, Kleinschmidt does talk about the kind of everyday cases with which Pruss is concerned: “For instance,” she writes,

suppose we find small blue handprints along the wall, and we notice that the blue frosting is gone from its bowl and some is on the hands, face, and torso of a nearby five-year-old. When wondering what happened, we will not be tempted even for a moment by the alternative the child wishes to bring to our attention, namely, that the handprints are on the wall for no reason, that they are simply there (p. 67).

Again, someone who was forced to deny that our ordinary process of explaining the handprints was well-justified would be in much bigger trouble than someone who thought our metaphysical reasoning defective. Perhaps the reason for this is that Kleinschmidt herself belongs to the group of metaphysicians targeted by Della Rocca’s argument.

Della Rocca complains that these metaphysicians use the PSR when it suits them and ignore it the rest of the time. Kleinschmidt, however, thinks that this alleged inconsistency shows that Della Rocca has misunderstood the methodology employed by these metaphysicians, for there are indeed cases where (at least some of) these metaphysicians are willing to accept unexplained (and unexplainable) facts (whether necessary or contingent). These hypotheses are not ‘off the table’ in the way the hypothesis that the blue frosting is on the wall for no reason is off the table. In particular, Kleinschmidt describes in detail two contrasting cases: in standard fission cases, the view that it is simply a brute fact that either Lefty or Righty is identical with the pre-fission individual is rarely taken seriously, but in the Problem of the Many, especially as applied to human bodies, brute fact views have been more popular.

This, however, does not get to the bottom of things, for the common core of the arguments of Pruss and Della Rocca is the contention that no weaker principle than the PSR will justify our practice of treating these hypotheses as off the table in the cases where we do so. In other words, if we reject the PSR, then we ought to take the hypothesis that the blue handprints are on the wall for no reason seriously, but surely we ought not to take that hypothesis seriously, so we’d better accept the PSR.

It is only in the last three pages of her chapter that Kleinschmidt addresses this contention directly. She proposes that the claim that explanatory power is a truth-tracking theoretical virtue is sufficiently strong to account for our explanatory practices. “So, for instance, in the handprint case, we reject the theory that the handprints simply appeared for no reason, because we can see how some explanations might go, and some of the explanations are such that endorsing them won’t have disastrous consequences” (77). This, she argues, explains our explanatory practices: we take explanatory power to be a very important virtue in theory choice, so that we do not accept theories that render certain phenomena inexplicable unless we are backed into a corner.

As Kleinschmidt recognizes, this is really only the beginning of a response to Pruss and Della Rocca, for the core problem is not one of description but one of justification. Della Rocca, for instance, explicitly admits that metaphysicians are not consistent in rejecting unexplainables; this is precisely his complaint. He says that this inconsistent practice cannot be justified. Kleinschmidt recognizes this problem, but all she has to say about it is that there is considerable difficulty, as well, regarding the other features (e.g., parsimony) we take to be truth-tracking theoretical virtues.

Insofar as Kleinschmidt has helped to make clearer what our actual explanatory practices are, and shown that a descriptive account need not be radically disunified and ad hoc, this is progress. But it is not really an answer to the Pruss-Della Rocca argument, for unless the treatment of explanatory power as a truth-tracking theoretical virtue can itself be justified, no method of justifying our explanatory practices in the absence of the PSR has emerged. Perhaps, though, Kleinschmidt should be regarded as having shown something else: those who remain untroubled scientific and/or ontological realists, despite recognizing the difficulties involved in explaining why the features we regard as theoretical virtues should be regarded as truth-tracking, might as well remain untroubled deniers of the PSR despite recognizing the difficulties raised by the Pruss-Della Rocca argument, for those difficulties are essentially the same. On the other hand, the reasonableness of this untroubled attitude could certainly be called into question.

Finally, it should be noted that Kleinschmidt’s formulations of the virtue of explanatory power are quite strong. She says we are willing to accept unexplainable propositions only when the consequences of refusing to do so are ‘disastrous.’ Now, unless one thinks either (a) that positing a necessary being is itself disastrous, or (b) that contingent facts cannot be explained in terms of a necessary being (i.e. that the modal collapse problem cannot be solved), this principle will still be strong enough to support the argument from contingency for the existence of a necessary being. (Personally, I think (a) is silly but (b) presents a deep and tangled problem.) In short, it seems likely that, even if we accept Kleinschmidt’s conclusion, we can still overcome the parsimony worries I discussed last time.

(Cross-posted at blog.kennypearce.net.)

Oppy on Theism, Naturalism, and Explanation
December 9, 2013 — 21:51

Author: Kenny Pearce  Category: Existence of God Prosblogion Reviews  Tags: , , , , , , , , , ,   Comments: 19

In his contribution to Goldschmidt’s The Puzzle of Existence, Graham Oppy argues that, “as [a] hypothes[i]s about the contents of global causal reality” (p. 51), naturalism is ceteris paribus preferable to theism. Oppy’s strategy for defending this claim is to consider three hypotheses about the structure of global causal reality, and argue that naturalism is superior to theism on each hypothesis. Here are his three hypotheses:

  1. Regress: Causal reality does not have an initial maximal part. That is, it is not the case that there is a part of causal reality which (a) has no parts that stand in causal relations to one another and (b) is not preceded by some other part of causal reality which has no parts that stand in causal relations to one another.
  2. Necessary Initial Part: Causal reality has an initial maximal part, and it is not possible that causal reality had any other initial maximal part. On the assumption that the initial maximal part involves objects, both the existence and the initial properties of those objects are necessary.
  3. Contingent Initial Part: Causal reality has an initial maximal part, but it is possible that causal reality had some other initial maximal part. On the assumption that the initial maximal part involves objects, at least one of the existence and the initial properties of those objects is contingent (p. 49).

According to Oppy, given Regress, theism has no explanatory advantage over naturalism, since both appeal to an infinite regress; but naturalism is more parsimonious than theism, and hence preferable.

The idea that causal reality has an initial part, whether necessary or contingent, might be thought most favorable to theism, but Oppy thinks the case here is really no different than Regress. The reason for this is simple: he doesn’t see why an initial supernatural state is any better, from an explanatory perspective, than an initial natural state (regardless of whether we take the initial state to be necessary or contingent). So, from an explanatory perspective, the hypotheses are again equal, but from a simplicity perspective naturalism wins again.

In my last post, I promised to return to O’Connor’s discussion of the ‘all things considered’ preferability of theism to naturalism. O’Connor concedes Oppy’s claim (in previous work) that naturalism is preferable in terms of parsimony, but insists that “Naturalism simply is not a rival explanatory scheme for existence to Theism” (p. 39). In other words, naturalism, according to O’Connor, does not even try to explain what theism tries to explain. What Oppy gives in his article here is an “anything theism can do naturalism can do better” retort. If the theist posits a necessarily existing supernatural being, naturalism can posit a necessarily existing natural state/being. If the theist posits a contingently existing supernatural being, the naturalist can posit a contingently existing natural being.

Now, as Oppy concedes (p. 51), there is some difficulty about this natural/supernatural distinction. But what Oppy essentially has in mind is that we are better off positing ‘more of the same’ than positing something totally different (like a God).

Oppy’s key point is that positing God as one more ‘billiard ball’ in the sequence of causes studied by science yields no explanatory advantage. Surely he is right about this. As long as God is considered as one more billiard ball, we are better off with a natural billiard ball than a supernatural one. In my view, insofar as O’Connor is considering God as a cause among causes (and he seems to be), Oppy’s critique is devastating.

However, the point that there is no explanatory advantage to positing God as one more billiard ball was already recognized by classical theistic metaphysicians such as Aquinas and Leibniz. This is, after all, precisely the point of the traditional distinction between primary and secondary causation: God is not a cause among causes, but rather stands outside the secondary causal sequence and makes that sequence, rather than another, actual. As has long been recognized, this is consistent with the sequence of secondary causes being either finite or infinite, for even if there were an infinite sequence, we could ask, ‘why that sequence and not another?’ and we could still answer, ‘because God so chose.’

Oppy will quite rightly respond that it is incumbent on the theist to render this notion of ‘primary causation’ intelligible. However, recent work in analytic metaphysics on ‘grounding’ and ‘building relations’ (as Karen Bennett calls them) suggests that this can be done. In brief, it is now (again) recognized that there is a plurality of metaphysical relations that can ground explanation. The theist wants to say that this causal sequence exists because God chose it. This ‘because’ need not signify the same causal relation by which (literal or metaphorical) billiard balls are regularly related to one another. Just exactly what the theist should take primary causation to be, and exactly how it relates to other grounding or building relations, is an interesting topic for further research. But the long and short of it is this: even if not much can be said about exactly what primary causation is, so long as primary causation is a species of building relation, we understand building relations in general, and we are independently committed to a plurality of them, then it seems to me that the ideological cost of believing in primary causation is not so great as to offset the benefit of explaining something the naturalist doesn’t even try to explain: namely, why this causal sequence is actual.

Now, that theism can overcome this ideological cost is not enough to show that it is preferable, for this is not the only cost of theism. God is supposed to be a really (fundamentally) existing entity, and hence positing a God is itself an ontological cost. If God is a sui generis entity in a fairly strong sense (as opposed, for instance, to literally being a mind), then there is also a significant ideological cost here. One alternative is to posit some necessary laws of nature (or something like that) to make the causal sequence go the way it does, but if one uses the word ‘God’ in such a way that ‘impersonal God’ is not a contradiction in terms, then this sounds like an impersonal God. Let’s set that aside. There’s a more basic issue to concern us. One way or another, we’re paying a lot to get an explanation of why this causal sequence is actual. If, as Shieva Kleinschmidt argues in the very next chapter, the Principle of Sufficient Reason is false and explanatory comprehensiveness is merely one theoretical virtue among many, then perhaps the cost is greater than we should be willing to bear. More on this next time.

(Cross-posted at blog.kennypearce.net.)

O’Connor on Explaining Everything
December 6, 2013 — 17:31

Author: Kenny Pearce  Category: Existence of God Prosblogion Reviews  Tags: , , , , , , , , ,   Comments: 0

Goldschmidt’s volume opens with an essay by Timothy O’Connor who defends the traditional answer to the question of why there is something rather than nothing: God. More specifically, the traditional answer O’Connor defends holds that a necessarily existent immaterial agent chose that contingent beings should exist.

There are several well-known difficulties for this kind of view. The first difficulty is this: if there must be an explanation of why there are contingent beings, then mustn’t there be an explanation of why there is a God? This is, of course, a version of the much-ridiculed ‘what caused God?’ retort, and O’Connor’s (implicit) answer to it is that God exists necessarily. (O’Connor implies this response by restricting his ‘principles of explanation’ to contingent beings/events/truths; pp. 35-37.) Now this (standard) answer can be understood in one of two ways: either necessary truths don’t need explanations, or else we claim that any necessary truth p is explained by the fact that necessarily p. That is, on the second option, you explain a necessary truth by asserting that it is necessary. However, the second option by itself doesn’t solve the problem, because we can always ask why it is that God necessarily exists. Based on O’Connor’s discussion of ‘opaque necessities,’ I suspect that he endorses the first option, denying that necessary truths need explanations. (To me, brute necessities seem intuitively worse than brute contingencies, but I won’t pursue that point here.) So God’s existence, being necessary, doesn’t need an explanation, but the existence of contingent things does.

However, the opponent of the traditional (theistic) view has an easy retort: “Suppose we grant, for the sake of argument, that God exists necessarily. Surely God’s decision to create this world must be contingent, since the world could have been otherwise. So there must be an explanation of why God chose this world.” We actually still haven’t got much deeper than the ‘what caused God?’ question at this point, for there is quite an obvious answer to this challenge. According to the traditional view, the universe’s existence depends on a free choice, and we know how to explain free choices: we cite the agent’s reasons, desires, character, etc.

In traditional treatments of this issue (e.g., Aquinas, Leibniz), the theist would now go on to give some account of the reason why God created this world. O’Connor makes a different move: he argues that the theist need not do this. According to O’Connor, the superiority of theism over its competitors is shown by the fact that it provides an intelligible explanation schema: that is, we can see how an explanation could go, and what sorts of questions would have to be answered in order to complete the explanation.

O’Connor seems to me to be correct that a hypothesis which implies that something is in principle explicable, and specifies a particular sort of explanation it must have, is ceteris paribus to be preferred over a hypothesis which renders that thing in principle inexplicable. This is so even if the hypothesis doesn’t actually explain the phenomenon in question. Now, it is widely held that the existence of contingent beings is in principle inexplicable unless there is a necessary being. Further, since we have some kind of conception of how agential explanations go, the hypothesis that contingent existence is caused by a necessarily existent agent is ceteris paribus to be preferred to the hypothesis that no necessary beings enter into causal relations.

Two important limitations must be observed here. First, no argument has been presented for the claim that the conception of the necessary being as an agent is superior to alternative necessary being theories. Second, the result is merely a ceteris paribus claim. O’Connor accepts both of these limitations, though he does give some consideration to the question of how an all-things-considered comparison of the two views might go. On this latter point, he is criticized by Oppy in the following chapter, so I will leave off discussion of that until my next post.

I should also briefly mention O’Connor’s response to the modal collapse objection. This objection holds that whatever has a necessary explanation is itself necessary, and so the traditional view, far from explaining contingency, denies the existence of contingency. O’Connor’s response is simple: to cite a cause of something is to give one kind of explanation of it, and that’s the kind of explanation he thinks contingent existence needs. Not all causation involves ‘necessary connection.’ Hence, a necessary thing might contingently cause contingent things, and this would not take away their contingency. (O’Connor does not here discuss the regress worry: not only is the proposition ‘this world exists’ contingent, so is the proposition ‘God causes this world.’ What’s the explanation of the second proposition? Since O’Connor has written a lot about agent causation, I’m sure he’s discussed this somewhere.) O’Connor thinks that if you are unsatisfied with this it must be because you are looking, as Leibniz was, for a contrastive explanation, an explanation of why things are so rather than otherwise. O’Connor is happy to deny that such explanations exist.

I’m a little concerned about this response; I tend to think that if one has explicability intuitions strong enough to support the argument from contingency, one is unlikely to be satisfied by weak explanations of this sort.

On the whole, O’Connor’s essay is a competent presentation of the traditional view in the context of contemporary analytic philosophy. He departs from the traditional view mostly in his exhortations to epistemic humility. In a way, this essay was a good choice to begin the volume: it lays out the view that most of the other papers will be, in one way or another, attacking. On the other hand, I found each of the three following essays (by Oppy, Kleinschmidt, and Ross – that’s as far as I’ve read) far more interesting. For the specialist, O’Connor’s essay is rather a slow start to the volume.

(Cross-posted at blog.kennypearce.net.)

Theistic frequentism and evolution
September 29, 2013 — 12:26

Author: Alexander Pruss  Category: Concept of God Divine Providence  Tags: , , ,   Comments: 13

As I have argued elsewhere, it is very difficult to reconcile the idea that God intentionally designed human beings with the statistical explanations we would expect to see in a completed evolutionary theory. One might respond that our current evolutionary theory is not thus completed, but it would be nice to have a story that would fit even with a future completed theory. I now offer such a solution, albeit one I am not fond of.

Suppose first that God determines (either directly or mediately) every quantum event in the evolutionary history of human beings. Suppose further that physical reality is infinite, either spatially or temporally or in the multiverse way, in such wise that the quantum events in our evolutionary history can be arranged into a fairly natural infinite sequence and given frequentist probabilities.

So far this is a simple and quite unoriginal solution. And it is insufficient. A standard problem with frequentist accounts is that they get the order of explanations wrong. It is central to a completed evolutionary story that the probabilistic facts explain the arising of human beings. But if the probabilistic facts are grounded in the sequence of events, as on frequentism they are, then they cannot explain what happens in that sequence of events. Some Humeans are happy to bite the bullet and accept circular explanations here, but I take the objection to be very serious.
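For concreteness, the bare frequentist account at issue identifies the probability of an outcome-type with its limiting relative frequency in the sequence. This is the standard textbook formulation, not notation Pruss himself uses here:

```latex
% Frequentist probability as limiting relative frequency:
% let E_1, E_2, E_3, \ldots be the infinite sequence of events, and let
% N_n(A) be the number of events of type A among the first n of them. Then
P(A) \;=\; \lim_{n \to \infty} \frac{N_n(A)}{n}.
```

Since on this definition the probability fact is fixed by the sequence itself, it is metaphysically posterior to the events, which is why it seems unable to turn around and explain the events that compose the sequence.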

However, theistic frequentism has a resource that bare frequentism does not. The theistic frequentist can make probability facts be grounded not in the frequencies of the infinite sequence of events as such, but in God’s intention to produce an infinite sequence of events with such-and-such frequencies and to do so under the description “an infinite sequence of events with such-and-such frequencies.” This requires God to have a reason to produce a sequence of events with such-and-such frequencies as such, but a reason is not hard to find–statistical order is a genuine kind of order and order is valuable.

The theistic frequentist now has much less of a circularity worry. It is not the infinite sequence of events that grounds the probabilities that are, in turn, supposed to explain the events within the evolutionary sequence. Rather, it is God’s intention to produce events with such-and-such frequencies that grounds the probabilities, and the events in the sequence can be non-circularly explained by their having frequencies that God had good reason (say, based on order) to produce.


Three Responses to the Argument from Contingency
August 28, 2013 — 13:53

Author: Kenny Pearce  Category: Existence of God  Tags: , , , ,   Comments: 61

In my view, the cosmological argument from contingency is the most powerful philosophical argument for the existence of God. By a ‘philosophical’ argument, in this context, I mean a way of giving reasons for something that does not depend on detailed empirical investigation, or on idiosyncratic features of a particular individual’s experience or psychology. Thus I do not hold that the argument from contingency is the best reason anyone has for believing in God. I think, for instance, that some people have had religious experiences which provide them with stronger reasons than the argument from contingency could, even making very generous assumptions about their ability to grasp the argument.

This belief of mine, about the relative strengths of the arguments, goes rather against the grain of current discussions. I think the recent philosophy of religion literature overestimates the power of the fine-tuning argument and the first cause argument, and underestimates the power of the argument from contingency.


Explaining Molinist conditionals
April 12, 2013 — 20:55

Author: Alexander Pruss  Category: Molinism  Tags: ,   Comments: 16

I remember David Manley (who was, I think, a first-year grad student at the time) asking Al Plantinga over a meal whether counterfactuals of creaturely freedom (CCFs) could be explained. I think Al didn’t have an answer but thought it was a really good question.

I may finally have an answer to David’s question. I think that the Molinist should answer in the affirmative if and only if non-derivatively free actions have explanations.

Suppose w0 is the actual world. Consider the conditional C→A, where C says that Curley has such-and-such character and is offered a $5000 bribe at t0, and A says that he freely accepts the bribe at t0. Suppose w1 is a sufficiently close-by world where C and A are true. Now let’s put ourselves in w1. So, Curley freely accepts the $5000 bribe. Does this have an explanation? If not, then a fortiori I think we should not have said in w0 that C→A had an explanation. After all, if it has an explanation in w0, it surely doesn’t lose one in w1, just because C holds there. But it would be just too weird that in w1, C→A has an explanation but A does not, especially if, as will at least typically be the case, C has an explanation.

Conversely, suppose that in w1, A has an explanation. What kind of an explanation is that? The most plausible candidate for an explanation of a free action is in terms of non-necessitating reasons and character. Maybe, in w1, what explains A is that Curley is very greedy. But that Curley is very greedy is a part of C. So it seems very reasonable to say at w0 that what explains C→A is that were C to hold, Curley would be very greedy (a necessary truth, since C includes a description of Curley’s character). Now you might say: Yeah, but that he would be greedy in C doesn’t entail or maybe even make likely that he would take the bribe. But the very same point holds in w1: that he is greedy doesn’t entail or maybe even make likely that he takes the bribe–yet, we supposed, it explains it. If we accepted the explanation of the categorical claim in w1, we should accept the corresponding explanation of the conditional claim in w0, if w1 is close enough to w0.
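The shape of the argument can be put schematically. Writing the counterfactual with Lewis’s box-arrow (a notational choice of mine; Pruss writes the conditional simply as C→A), the two directions just argued for amount to a biconditional:

```latex
% w_0: the actual world, at which the conditional C \boxright A holds.
% w_1: a sufficiently close world at which both C and A hold.
(C \boxright A) \text{ has an explanation at } w_0
  \;\Longleftrightarrow\;
A \text{ has an explanation at } w_1.
```

The left-to-right direction is the contrapositive argued in the previous paragraph; the right-to-left direction is the one just given, with Curley’s greed doing the explanatory work in both cases.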

Molinism, presentism, explanation and grounding
March 18, 2013 — 12:51

Author: Alexander Pruss  Category: Molinism  Tags: , , ,   Comments: 2

Fundamental Molinist conditionals of free will about non-existent agents are brutish: they are not grounded in other propositions, nor made true by a truthmaker, a lack of a falsemaker, and/or the obtaining of properties/relations between entities.

Now, suppose, as seems plausible to me, that there are precisely two kinds of explanation: constitutive-style and causal-style explanations. Constitutive-style explanations explain a truth by explaining how the truth is grounded: the knife is hot because its molecules have high kinetic energy. Causal-style explanations explain a truth by giving non-grounding conditions that nonetheless, in a mysterious but familiar causal or at least causal-like way, give rise to the holding of the truth.

Now, brutish truths have no constitutive-style explanations. For a constitutive-style explanation involves the describing of a grounding. But brutish truths also have no causal-style explanations. For causal-style explanations involve the describing of causal-style relations between the aspects of the world (in the concrete sense) that ground the explanandum and explanans. (In fact, for this reason, brutish truths not only lack causal-style explanations but are not causal-style explanations for anything else.) So, brutish truths have no explanations.

But if there are true fundamental Molinist conditionals of free will about non-existent agents, there will also be ones that have explanations. For some, maybe all, free actions can be explained in terms of the reasons the agent had. Thus, Curley accepts the bribe because he wants to be richer. Granted, this is a non-necessitating explanation–that Curley wants to be richer does not entail that he accepts the bribe. But that’s still an explanation, and one of causal type. And exactly parallel explanations can be given for Molinist conditionals. Thus, Curley would have accepted the bribe in circumstances C because circumstances C includes his wanting to be richer. And presumably this kind of explanation would have held even had Curley never existed, and presumably if Molinism is true, there are such explanations for true conditionals about actually non-existent agents. Thus some fundamental Molinist conditionals of free will about non-existent agents can be explained. But this contradicts their brutishness.

Moreover, presumably some fundamental true Molinist conditionals of free will about non-existent agents explain God’s creative inactions. Thus, perhaps, God did not create Badolf Bitler, because Bitler would have been so much worse than Hitler. But these conditionals do not provide a constitutive-style explanation of such inactions. So they must provide a causal-style explanation. But they can’t do that, because they’re brutish.

The same argument goes against Merricks-style presentism on which fundamental truths about the past are brutish. But many, perhaps all, fundamental truths about the past are explained by other fundamental truths about the past.
