McDaniel’s Ontological Pluralism and the Puzzle of Existence
March 6, 2014 — 23:52

Author: Kenny Pearce  Category: Existence of God Prosblogion Reviews  Comments: 0

The very last essay in The Puzzle of Existence is the article by Kris McDaniel which examines the bearing of ontological pluralism on the question, why is there something rather than nothing?

Ontological pluralism, as McDaniel uses that term, is the thesis that there is more than one kind of being, existence, or reality. (McDaniel usually prefers the term ‘being,’ but seems to use ‘existence’ and ‘reality’ as synonyms.) This is not simply the trivial thesis that there are many different kinds of beings (i.e., that there are things of many different kinds), and it is not a metaphysically deflationary thesis like Eli Hirsch’s theory of ‘quantifier variance.’ Rather, McDaniel takes it to be a deep metaphysical fact about the world that there is a plurality of existence-like attributes things can have. In other words, McDaniel’s pluralism is a version of what has sometimes been called a ‘thick’ conception of ontology, a conception on which ontology seeks not only to tell us what things there are, but also to make substantive, informative claims about what being is.

McDaniel has written extensively about this elsewhere, but he does an excellent job of summarizing his view in this short essay before showing how it bears on the question of why there is something rather than nothing. McDaniel uses the term ‘being’ for the ‘semantic value’ of the English existential ‘is’ or ‘exists’ which, he says, is “either a property of properties or a kind of relation between properties” (271). Presumably he has in mind something along the lines of Frege’s view that ‘Fs exist’ should be analyzed as F-ness is instantiated, attributing the property being instantiated to the property F-ness. He uses the term ‘modes of being’ for the semantic values of other terms which function syntactically and inferentially like the existential quantifier, and he argues that some other modes of being are more fundamental than being, that is, that some things exist (or could exist) in a more ‘full-blooded’ sense than that expressed by the English existential ‘is.’

McDaniel’s project in this paper is to determine whether, given ontological pluralism, there is a version of the ‘why is there something rather than nothing?’ question which is both well-posed and ontologically deep. Like many other important questions, this one depends on what the meaning of ‘is’ is.

McDaniel starts by considering the possibility that the question might be about being proper, i.e., about why there is something in the plain English sense of ‘is’ and not some ‘Ontologese’ sense. McDaniel argues that plain English recognizes the existence of such things as absences and omissions, and even takes them quite seriously: they can be counted, they can be causes, they can be classified into kinds, and so forth (277). Under what conditions does an absence exist? McDaniel is not totally sure about this, but he suggests that perhaps our conventions may be such “that an absence of Fs exists when there are no Fs” (278). If this is not our convention, then the existence conditions for absences must be rather gerrymandered, and so a non-gerrymandered mode of being, more fundamental than being proper, would recognize absences whenever there are none of something. Now this, McDaniel says, can answer our question, for if nothing existed other than absences, then an awful lot of absences would exist!

I found this line of thought rather intriguing, but I think there’s a better way of spelling it out and reaching the conclusion that there must be something. It should really be an argument by contradiction: suppose nothing existed. Then, by the existence condition for absences, the absence of everything would exist. But the absence of everything is something, contradicting our supposition.

Interestingly, absences of contingent things are themselves contingent, so we can run the same argument by contradiction from the supposition that there are no contingent things. (Note: this is essentially why God, on a Leibniz-Ross type theory of omnipotence, cannot avoid actualizing some world: if he doesn’t create any contingent beings, he thereby actualizes the empty world, or, if one likes, the absence of all contingent beings.)

Furthermore, absences of concrete things have many of the markers of concreteness: they can be causes, they can be spatiotemporally located (‘the match failed to light due to the absence of oxygen in the chamber, at the time it was struck’), they can be perceived by the senses, and so forth. So perhaps absences of concrete things are themselves concrete, in which case we can also use the argument to show that it is incoherent to suppose that there are no concrete beings.

Now, I think this argument is rather nifty, but one must admit that it has an air of sophism about it. McDaniel recognizes this, and he says that what the argument shows is that when interpreted as being about being proper, that is, the plain English sense of ‘is’, the question is not in fact a metaphysically deep or interesting one. Perhaps, then, there is a more interesting interpretation in terms of some other mode of being. Pursuant to this, McDaniel finds it necessary to introduce some considerations about the nature of modality.

McDaniel first considers the possibility of endorsing a form of modal realism, by which he apparently means not realism about merely possible entities (as in David Lewis) but rather realism about modality itself, that is, the view that modal concepts like possibility and actuality are fundamental. The way of spelling this out that McDaniel considers takes possibility and actuality to be distinct, fundamental modes of being. Each of these modes of being will be associated with a quantifier. However, McDaniel suggests, it may not make sense to put a modal operator in front of the possibilist quantifier. So, on this kind of modal realism, the question will have to be about actual being. The general point McDaniel is trying to make is that, if there is more than one equally fundamental quantifier, then it might be that modal operators do not make sense in connection with all of them. If that’s so, then the claim ‘possibly, nothing exists’ will not make sense in connection with those values of ‘exists’.

In the final section, McDaniel argues against modal realism. He argues that any property which applies to things that are less than fully real must be less than fully natural. This immediately entails that de re modal properties are not fully natural. Furthermore, if modal operators designate properties, then presumably de dicto modality is a matter of certain propositions having certain properties. It follows, on McDaniel’s view, that ‘why is there something rather than nothing?’ has a metaphysically deep interpretation only if propositions are fully real.

I found this paper quite interesting, but there is a fundamental assumption behind McDaniel’s whole discussion which doesn’t even get stated until the last page. This is the assumption that the question ‘why is there something rather than nothing?’ presupposes that it is possible for nothing to exist, and that the question is unintelligible, or at least ill-posed, if that presupposition fails. But this is simply not true. It makes perfectly good sense to ask ‘why is the Incompleteness Theorem true?’ and one could even say, ‘true rather than false.’ Because this assumption is undefended (and, until the last page, unstated), the relevance of McDaniel’s essay to the issue at hand is left in doubt. Beyond this, I found McDaniel’s claim that there might be a quantifier to which we can’t intelligibly prefix modal operators quite questionable. I can see how there might be modes of being such that anything that possesses them possesses them necessarily, and I can see why possibilist being might be such a mode. But if that were true, then, on the possibilist interpretation, ‘possibly, there are talking donkeys’ and ‘necessarily, there are talking donkeys’ would both be trivially true, and trivial truth is worlds away from unintelligibility. Nevertheless, McDaniel is certainly right about one thing: ontological pluralism opens up a whole new universe for possible reflection about why there is something rather than nothing.

This is, as I said, the last essay in the book. My next post will be some concluding reflections together with an index of my posts.


Kotzen on the Improbability of Nothing
February 26, 2014 — 18:36

Author: Kenny Pearce  Category: Existence of God Prosblogion Reviews  Comments: 5

When someone asks ‘why p rather than q?’, it is sometimes a good answer to say, ‘p is far more probable than q.’ When someone asks, ‘why is p more probable than q?’, it is sometimes a good answer to say, ‘there are many more ways for p to be true than for q to be true.’ According to a well-known paper by Peter van Inwagen, the question ‘why is there something rather than nothing?’ can be answered in just this fashion: something is far more probable than nothing, because there are infinitely many ways for there to be something, but there is only one way for there to be nothing. In his contribution to The Puzzle of Existence, Matthew Kotzen argues that this sort of answer is only sometimes a good one, and that we cannot know a priori whether it is a good answer to the question of something rather than nothing.

Kotzen’s general line of response is a standard one: he argues that there are many possible measures, and not all of them assign probability 0 to the empty world. Van Inwagen is perfectly aware of this problem, but argues that a priori considerations allow us to select a natural measure. Kotzen’s strategy is to identify some everyday examples where this pattern of explanation looks good, and some where it looks bad, and show that van Inwagen’s a priori considerations don’t draw the line between good and bad in the right place. Furthermore, he argues (p. 228) that van Inwagen’s considerations may not actually be sufficient to assign unique probabilities in the relevant cases, since it is not always clear what space the measure should be assigned over.

I think Kotzen’s argument against van Inwagen is quite compelling. The best thing about Kotzen’s article, though, is that it does a great job explaining these complex issues at a moderate level of rigor and detail while assuming hardly any background. This would be a great article to assign to undergraduate students.

In the rest of this post, I’m going to do two things. First, I’m going to explain the issue about measures at a much lower level of rigor and detail than Kotzen does, just to make sure we are all up to speed. Second, I am going to raise the question of whether van Inwagen’s argument might have an even bigger problem: whether, instead of too many equally eligible measures, there might be none.

The simplest, most familiar cases where the probabilistic pattern of explanation with which we are concerned works are finite and discrete. This is the case, for instance, with dice rolls or coin flips. The coin either comes up heads or tails; each die shows one of its six faces. So then, as one learns in one’s very first introduction to probabilities, in the case of the dice roll, the probability of any particular proposition about that dice roll is the number of cases in which the proposition is true divided by the total number of possible cases (for two six-sided dice, 36). In dividing the outcomes into discrete cases like this, we track the factors we care about (which face is up) and ignore others (e.g., where on the table the dice land). This division into discrete cases is called a partition. The reason the probabilities are so simple in the dice case, with each case in the partition being equally likely, is that we chose a good partition. (Well, actually, it’s because a fair die is defined as one that makes each of those outcomes equally probable, but let’s ignore that for now and imagine that fair dice just occur in nature rather than being made by humans on purpose.) Suppose that, on one of our dice, the face with six dots is painted red rather than white and, for some reason, what we really care about is whether the red face is up. Well, then we might partition the outcomes accordingly, into the red outcome and the non-red outcomes. But these two cases (red and non-red) are not equally probable.
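The arithmetic here is easy to check by brute-force enumeration. A minimal sketch in Python (the red-face setup is my own illustration, assuming the six-face of the first die is the red one):

```python
from itertools import product

# All 36 equally probable outcomes for two fair six-sided dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event: favorable cases / total cases.
    This counting rule is only valid because the underlying
    partition (the 36 ordered pairs) is equiprobable."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

# Suppose the six-face of the first die is painted red.
# The two cells of the {red up, red not up} partition are
# NOT equally probable:
p_red = prob(lambda o: o[0] == 6)       # 1/6
p_not_red = prob(lambda o: o[0] != 6)   # 5/6
```

The point of the sketch is just that the uniform counting rule lives at the level of the fine-grained partition; coarser partitions like {red, non-red} inherit unequal probabilities from it.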

Sometimes the thing we care about is not a discrete case like this, but a fundamentally continuous case like (in a standard example) where on a dartboard a perfectly thin dart lands. A measure is basically the equivalent, in this continuous case, of the partition in the discrete case. For the dartboard, there is a natural measure, one that ‘just makes sense’, and this is provided by our ordinary spatial concepts. So if, for instance, the bullseye takes up 1/10 of the area of the dartboard, then, if the dart is thrown randomly, it will have a 1/10 chance of landing there. (Again, this is really just what it means for the dart to be thrown randomly.) This isn’t the only possible measure, but it’s the one that, in some sense, ‘just makes sense.’ But the question is, is there a natural measure on the space of possible worlds? That is, is there some ‘correct’ or ‘sensible’ or ‘natural’ way of saying how ‘far apart’ two possible worlds are? This is far from clear. The Lewis-Stalnaker semantics for counterfactuals supposes that we can talk about some worlds being ‘closer together’ than others, but this is not enough to define a measure. Furthermore, Lewis, at least, thinks that the closeness of worlds might change based on contextual factors (which respects of similarity we most care about), so it seems like there’s a plurality of measures there. Perhaps one could claim that all of these reasonably natural measures agree in assigning nothing probability 0, but that’s not clear either. For instance, Leibniz seems to think that one reason why the existence of something cries out for explanation is that “a nothing is simpler and easier than a something” (“Principles of Nature and Grace,” tr. Woolhouse and Francks, sect. 7). So maybe we should adopt a measure in which worlds get lower probability the more complicated they are. (I think Swinburne might also have a view like this.) On this kind of view, the empty world (if there is such a world) will be the most probable world.
So the plurality of measures seems like a problem.
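To see concretely how the area measure fixes the dartboard probabilities, one can simulate random throws. A sketch assuming a unit-radius board with a central bullseye occupying 1/10 of its area (the geometry is my own illustrative setup, not from Kotzen):

```python
import math
import random

random.seed(0)

BOARD_R = 1.0
# A bullseye occupying 1/10 of the board's AREA has
# radius sqrt(0.1) times the board's radius.
BULL_R = math.sqrt(0.1) * BOARD_R

def random_dart():
    """A throw that is 'random' with respect to the area
    (Lebesgue) measure: uniform over the board's surface."""
    while True:  # rejection sampling from the bounding square
        x = random.uniform(-BOARD_R, BOARD_R)
        y = random.uniform(-BOARD_R, BOARD_R)
        if x * x + y * y <= BOARD_R ** 2:
            return x, y

n = 100_000
hits = sum(1 for _ in range(n)
           if math.hypot(*random_dart()) <= BULL_R)
estimate = hits / n  # converges to 0.1, the bullseye's share of area
```

Under a different measure (say, one uniform in radius rather than in area), the same bullseye would get a different probability, which is exactly why the choice of measure carries all the weight.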

It’s not the only problem, though. Kotzen notes that “the Lebesgue measure can be defined only in spaces that can be represented as Euclidean n-dimensional real-valued spaces” (222). (The Lebesgue measure is the standard measure used, for instance, in the dartboard case: the bigger the region a set takes up, the bigger its measure.) But the space of possible worlds is not like this! David Lewis has argued that the cardinality of the space of possible worlds must be greater than the cardinality of the continuum (Plurality of Worlds, 118). The reason is relatively simple: suppose that it is possible that there should be a two-dimensional Euclidean space in which every point is either occupied or unoccupied. The set of possible patterns of occupied and unoccupied points in such a space (each representing a distinct possibility) will be larger than the continuum. But if this is right, then there can be no Lebesgue measure on the possible worlds because there are too many worlds. Even if this exact class of worlds is not really possible (for reasons such as the considerations about space in modern physics I raised last time) it seems likely that there are too many worlds for the space of possible worlds to have a Lebesgue measure. Yet Kotzen attributes to van Inwagen the view “that we ought to associate a proposition’s probability with its Lebesgue measure in the relevant space” (227).
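Lewis’s counting argument can be made explicit in a couple of lines of cardinal arithmetic (my sketch of the standard reasoning, not a quotation from Lewis):

```latex
% Each possibility is a pattern of occupied points,
% i.e. a subset of the plane:
\[
  |\mathbb{R}^2| = 2^{\aleph_0},
  \qquad
  \bigl|\mathcal{P}(\mathbb{R}^2)\bigr| = 2^{\,2^{\aleph_0}} > 2^{\aleph_0}
  \quad \text{(Cantor's theorem).}
\]
% So there are strictly more such worlds than points in the
% continuum, and no n-dimensional real-valued space can
% represent them all.
```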

Maybe van Inwagen is not in quite this much trouble. He doesn’t actually seem to say anything about a Lebesgue measure in the paper, so I’m not sure exactly why Kotzen thinks van Inwagen is committed to this. In fact, in the paper Kotzen is discussing, van Inwagen cites his earlier discussion in Daniel Howard-Snyder’s collection, The Evidential Argument from Evil. In endnote 3 (pp. 239-240) of that article, van Inwagen says “the notion of the measure of a set of worlds gets most of such content as it has from the intuitive notion of the proportion of logical space that a set of worlds occupies.” I find it a little bit ironic that van Inwagen says this, because he’s always denying that he has intuitions about things! I don’t have intuitions about proportions of logical space. In any event, it seems to me that van Inwagen is here disavowing the project of giving a well-defined measure in the mathematician’s sense.

Suppose one did want to identify a natural measure that was well-defined in the mathematician’s sense. I’m not sure about all the technicalities of trying to do this for sets of larger-than-continuum cardinality, or whether it can be done at all. Even if it can, though, it’s going to be hard to say that one measure is more intuitive or natural than another in such an exotic realm. Things might be even worse: Pruss thinks (PSR, p. 100) that, for any cardinality k, it is possible that there be k many photons. If this is true, then there is a proper class of possible worlds, and one certainly can’t define a measure on a proper class. (This is another thing I don’t think I have intuitions about.)

All this to say: anyone who wants to assign a priori probabilities to all propositions (as van Inwagen does) is fighting an uphill battle, but if such probabilities cannot be assigned, then it does not seem that the probabilistic pattern of explanation can be used to tell us why there is something rather than nothing.


Rodriguez-Pereyra on Ontological Subtraction
February 24, 2014 — 22:41

Author: Kenny Pearce  Category: Existence of God Prosblogion Reviews  Comments: 2

Gonzalo Rodriguez-Pereyra’s contribution to The Puzzle of Existence is the last of a series of contributions on the question whether there might have been nothing. Rodriguez-Pereyra defends a version of the subtraction argument for metaphysical nihilism. That is, he argues (roughly) that for any concrete being and any possible world at which that being exists, the world obtained by subtracting that being from that world is likewise possible, and that it follows from this that there is an empty possible world. (The empty world is to be obtained by subtracting all of the concrete beings from some possible world with only finitely many such beings.)

This argument has already been defended (including by Rodriguez-Pereyra himself) in a number of places in the literature. The main aim of this new article is to defend the argument against the claim that it begs the question. The charge, which Rodriguez-Pereyra attributes to Alexander Paseau, is that, however the technical details work out (and there is a lot of concern about the technical details in this paper), the subtraction premise, in its general form, cannot be motivated in a way that is independent of metaphysical nihilism: insofar as we find it plausible that subtraction always results in a possible scenario, this must be because we find it plausible that there is an empty possible world.

The basic structure of Rodriguez-Pereyra’s response to this objection is as follows. A reasonable person who is unsure about metaphysical nihilism might well find subtraction plausible in a universe with an arbitrarily large finite population. That is, one might think that if a universe with exactly 67 concrete, contingent entities is possible, then so is a world which contains exactly 66 of those 67 entities, and nothing else. Furthermore, absent an independent argument against metaphysical nihilism, there is no good reason for supposing that the case of a world with only one entity is different from the case of a world with some arbitrary finite number of entities. Hence, unless there is some positive reason for thinking that the one-object world is a special case, we should accept metaphysical nihilism (i.e., the possibility that no concrete contingent beings exist).
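The shape of the argument is a downward induction; schematically (my notation, not Rodriguez-Pereyra’s):

```latex
% Let p(n) say: a world containing exactly n concrete,
% contingent entities, and nothing else, is possible.
\[
  p(N) \text{ for some finite } N,
  \qquad
  \forall n \geq 1:\; p(n) \rightarrow p(n-1)
  \;\;\Longrightarrow\;\; p(0).
\]
% p(0) is the possibility of the empty world, i.e.
% metaphysical nihilism. Paseau's charge targets the
% universally quantified subtraction premise; the reply is
% that each instance with n >= 2 can be motivated without
% assuming p(0), so only the step from p(1) to p(0) remains
% in dispute.
```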

This line of argument is, I take it, the core of the paper. It seems convincing to me, as far as it goes. That is, I think the argument might take a neutral, rational thinker who has certain prior beliefs/intuitions from a state of equipollence to a position of regarding nihilism as the default option, pending consideration of any further arguments anti-nihilists might offer. This is a pretty modest standard of success, but if we set the standards for success much higher than this then – let’s face it – there won’t be very many successful arguments in philosophy!

Anyway, my main worry is about something else. At the beginning of the paper, Rodriguez-Pereyra considers another line of objection to the argument, one I think is perhaps more important. This is the idea that it might be the case that, necessarily, if there are any concrete objects, there are infinitely many. Why might one think this? Well, Rodriguez-Pereyra can think of two reasons: first, one might think that, since space is (necessarily) infinitely divisible, it is necessary that every concrete object have infinitely many parts. Second, one might hold that sets whose ur-elements are concrete are themselves concrete, so that the existence of one concrete object generates infinitely many concrete sets.

Rodriguez-Pereyra’s strategy is to solve this problem by stipulation. He defines a concrete* object as one which is “(a) concrete, (b) non-set-constituted, and (c) a maximal occupant of a connected spatiotemporal region” (198). Condition (c) is a bit confusing. One might think that a maximal occupant of a connected spatiotemporal region is an object that takes up the whole region so as not to leave room for anything else that’s not a part of it, or something like that. This is not how Rodriguez-Pereyra defines this term. Rather, a maximal occupant of a connected spatiotemporal region is an object which exactly occupies a connected spatiotemporal region and is not a proper part of any object which occupies a connected spatiotemporal region. In other words, if such an object is a part of some larger whole, then that larger whole is a scattered (spatiotemporally disconnected) object. The argument then proceeds by subtracting concrete* objects with their parts.

Now here’s where I want to take issue. In order for the argument to work, Rodriguez-Pereyra now needs “a possible world with a finite domain of concrete* objects and in which every concrete object is a (proper or improper) part of a concrete* object” (200). Rodriguez-Pereyra thinks everyone will agree that there is such a world because concrete spatiotemporal objects which are not parts of concrete* objects are quite exotic (see 200n6), and Rodriguez-Pereyra says that he will “uncontroversially assume that it is a necessary condition of any object being concrete that it is spatiotemporal” (199). Now, it is currently popular among analytic metaphysicians to suppose that all actual concrete objects are spatiotemporal. This is because many analytic metaphysicians endorse relatively naive versions of physicalism. (As will appear below, the versions of physicalism in question are naive about physics; some of them are quite philosophically sophisticated.) On the other hand, though, there certainly are analytic metaphysicians who believe in non-spatiotemporal concrete objects. But, perhaps more importantly, it is extremely controversial to hold that being spatiotemporal is a necessary condition for concreteness, because many, perhaps most, analytic metaphysicians believe in the possibility of non-spatiotemporal concrete objects.

I can think of four reasons one might believe in the actual existence of non-spatiotemporal concrete beings. First, one might be a dualist about human persons and think that souls don’t count as spatiotemporal. Second, one might believe in one or more wholly immaterial persons, and one might think this person or these persons count as concrete. Third, one might think that some or all of the entities in fundamental physics are not actually spatiotemporal after all, but are nevertheless concrete. Fourth, one might be an idealist of some stripe or other (whether Berkeleian, Leibnizian, or Kantian) and deny that anything spatiotemporal could be ontologically fundamental, and therefore hold that there is some kind of non-spatiotemporal ‘ontological subbasement.’

Since it might be unclear to some people how option 3 goes, let me divide it into 5 sub-options (one could take more than one of these). These are just 5 things a philosopher informed about modern physics might say; I’m not necessarily endorsing them.

3a: because quantum particles are not extended and do not have precise locations, they don’t count as (what metaphysicians mean by) spatiotemporal.

3b: the wavefunctions of quantum mechanics are real concrete entities, but the mathematical spaces over which they are defined are not at all the same as physical space-time, so they should not be regarded as (literally) spatiotemporal (in the metaphysician’s sense). For instance, I’m told that the wavefunction for a two particle system is defined over a six-dimensional Hilbert space.

3c: Thinking of the particles as having vague locations is only one way of interpreting the wavefunction; on an alternative interpretation, one might think that quantum mechanics just tells us the probability of an observation event occurring in a given spacetime region. If this is right, then one might deny that the particles are located at all (only the observation events are located), and if they’re not located then they are certainly not spatiotemporal.

3d: if one of the ‘holographic’ theories in fundamental physics is true, then ordinary physical spacetime (the spacetime we move around in) isn’t even physically fundamental, so there must be more fundamental concrete stuff which is not located in our spacetime, and hence we might regard that stuff as (in some sense) non-spatiotemporal.

3e: the laws of nature are concrete non-spatiotemporal entities.

(These possibilities are the reasons why I said above that it was somewhat naive to think that physicalism entailed the non-existence of non-spatiotemporal concrete things. The entailment is at best non-obvious and at worst non-existent, but it is sometimes taken as practically definitional.)

These are examples of reasons you might believe in the actual existence of concrete non-spatiotemporal objects. But all we need for Rodriguez-Pereyra’s assumption to be false is the possibility of concrete non-spatiotemporal entities. Here, we are on even safer ground, for a great many philosophers are willing to admit the possibility of some or all of the concrete non-spatiotemporal entities mentioned above, even if they don’t think any of them are actual.

Where does this leave Rodriguez-Pereyra’s argument? Well, Rodriguez-Pereyra doesn’t actually need the premise that necessarily all concrete objects are spatiotemporal. What he needs is just the claim that there is a possible world at which there are finitely many concrete* objects and every concrete object is a part of a concrete* object. The proponents of most of the positions mentioned above will be willing to admit the existence of possible worlds at which all of the concrete objects are spatiotemporal. There are, however, two exceptions.

First, some philosophers believe in the necessary existence of an immaterial God whom they consider to be a concrete object. This is easily sidestepped by restricting the argument to contingent objects. Of course this weakens the conclusion to the claim that there is a possible world at which there are no contingent concrete objects, but that’s close enough.

The more problematic case is the case of those who think that all spatiotemporal objects are non-fundamental (case 4 and some variants of case 3). These philosophers might think that this is necessarily the case, that nothing that is literally spatiotemporal could possibly be fundamental. If this is right, then there is no possible world of the sort Rodriguez-Pereyra needs.

The obvious way to fix this would be to talk about taking away the concrete* objects along with, not only their parts, but also their ontological grounds. However, absent some kind of theory of the ontological grounding of such objects, this renders the subtraction principle quite doubtful. If the concrete* objects have unknown grounds, then why should we think the objects are independent of each other? They might, for instance, be grounded in the same fundamental reality.

Rodriguez-Pereyra’s argument relies on a picture of a world as a four-dimensional spacetime with filled and unfilled regions, and essentially nothing more to it. As a picture of the actual world, this is quite naive, but Rodriguez-Pereyra only needs it to be a picture of a class of possible worlds. The possibility of such worlds enjoys a certain amount of plausibility (they certainly seem conceivable, for instance). However, there are arguments to be made against such possibilities. Here, I have merely gestured at (and not endorsed) these arguments, but I want to point out that if any of them succeeds then Rodriguez-Pereyra’s defense of the subtraction argument fails.


How to Determine Whether there Might Have Been Nothing
January 27, 2014 — 21:40

Author: Kenny Pearce  Category: Prosblogion Reviews  Comments: 9

Even those of us who think that necessary truths often need (and have) non-trivial explanations generally think that these explanations tend to look different from the explanations of contingent truths. Furthermore, one might well think that showing that p is necessary explains why p, even if one thinks that it is possible to show that necessarily p without explaining why necessarily p. Additionally, of course, there are those who hold that once one has shown a certain proposition to be a necessary truth, there are no further ‘why’ questions to be asked. Thus if one wants to know whether the question ‘why is there something rather than nothing?’ is a well-formed question that might have an answer, and what form such an answer might take, one would do well to start by determining whether it is possible that there should be nothing. In their contribution to The Puzzle of Existence, David Efird and Tom Stoneham set out to explain how one might go about making such a determination.

For starters, why would anyone doubt that there could be nothing? Well, for one thing, as is well known, standard logics actually turn out to entail that something exists. The argument goes like this. It is an axiom of first-order predicate logic with identity that:

∀x(x=x)

But by universal instantiation we then have:

a=a

where ‘a’ is an arbitrary constant. By existential generalization this gives us:

∃x(x=x)

which is how you say that something exists. But by the necessitation rule of K (and stronger modal logics, such as S4 and S5), we now have:

□∃x(x=x)

i.e., necessarily, something exists. (Note that in standard systems this is not equivalent to ‘something necessarily exists,’ i.e., ∃x□(x=x).)
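The derivation trades on classical predicate logic’s standing assumption that the domain of quantification is nonempty. In a proof assistant the assumption has to be made explicit, which makes the point vivid; a sketch in Lean 4 (the theorem name is mine):

```lean
-- In standard first-order logic the domain is tacitly assumed
-- nonempty; in type theory we must say so with [Inhabited α].
theorem something_exists (α : Type) [Inhabited α] :
    ∃ x : α, x = x :=
  ⟨default, rfl⟩

-- Drop the [Inhabited α] assumption (the analogue of allowing an
-- empty domain, as free logics do) and the proof no longer goes
-- through: there is no term to witness the existential.
```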

Now, Efird and Stoneham understand the ‘puzzle of existence’ as the question of why there are concrete things rather than not, so their question is whether it is possible that there should be no concrete things. I can think of three ways of responding to this argument without conceding that necessarily there is something concrete:

  1. Revisionism. The argument above shows that there is something wrong with standard logics. We should endorse a free logic (or something else) instead.
  2. Platonist Abstractionism. The argument is perfectly sound, since the empty set (for instance) could not have failed to exist, but this sheds no light on the question of whether there might not have been any concrete things.
  3. Nominalist Abstractionism. There are many things different types of nominalists might say, and all of them are going to be complicated. Here’s the beginning of one possible nominalist response: ‘∃x(φ(x))’ does not in fact mean ‘something that satisfies φ exists’, for the formal quantifiers are most at home in mathematics, and it is certainly a truth of mathematics that ∃x(x > 5). This claim is true despite the fact that numbers do not exist, and hence nothing that is greater than 5 exists (since only a number could be greater than 5). As a result, the argument’s conclusion, □∃x(x=x), is not correctly interpreted as saying that, necessarily, something exists.

There are other reasons one might deny the possibility of nothing. For instance, one might doubt whether it is genuinely conceivable that there should be no concrete entities, or one might have a metaphysical theory of possible worlds which rules out the existence of an empty world. On the other hand, most of us find it to be at least apparently conceivable that there should be no concrete entities. Perhaps on further reflection I will think that I was really conceiving empty space, and space is a concrete entity, or, worse, that I was conceiving myself floating in empty space. (Depending on exactly what’s meant by ‘conceive,’ I’m not sure I can conceive my own non-existence; if I can’t, then clearly inconceivability is poor evidence of impossibility.) Alternatively, one might say: other than the above argument and ontological arguments for the existence of God, both of which many philosophers suspect of being sophistical and/or mere logical curiosities, no one has identified any contradiction in the proposition there are no concrete beings, hence it should be regarded as possible. (I think this last line of thought is not very compelling, since I think the reason most philosophers are so confident that there must be something wrong with these arguments is a prior conviction that it is possible that there should be nothing.)

To sum up what I’ve said so far: if we are interested in the question why there is something rather than nothing, then it will be important to us to answer the question whether there might have been nothing. The answer to this last question is, however, far from clear. Efird and Stoneham’s paper is an attempt to say how we might go about answering it.

Efird and Stoneham defend two main claims about this, which they label Methodological Separatism and Modal Pluralism. Methodological Separatism is the view that theories about the nature of possibility (e.g., a metaphysics of possible worlds) and theories about the extent of possibility (i.e., what things are possible) can and should be evaluated separately. This is not to deny that we may need to consider how theories of the nature of possibility interact with theories of the extent of possibility; rather, the idea is that we should first evaluate them separately and only later consider whether, for instance, a particular theory of the nature of possibility has the cost of foreclosing on our best theory of the extent of possibility, or other things of that sort.

Modal Pluralism is the view that a theory of the extent of possibility should consist of a series of claims of the form: if p, then ◊q. This is supposed to lead to a ‘regulative principle’ Efird and Stoneham call ‘Leibniz’s Principle of the Presumption of Possibility’:

One has the right to assume ◊p until someone proves the contrary. (pp. 162-163)

This regulative principle is supposed to follow because it is always (epistemically) possible that there is a principle one has missed (i.e., some other criterion of possibility not on one’s list).

Let me pause to make a few remarks on this. The assumption here is that p is possible, not that p is contingent. It is crucial for Leibniz’s purposes that this be so, for Leibniz is running an ontological argument for the existence of God: he wants to presume that God possibly exists and show from this that God necessarily exists, and so actually exists. Clearly a presumption of contingency won’t do here. But in fact the presumption of contingency is rather more plausible than the presumption of possibility, as Robert Adams has argued. Indeed, John Heil and C. B. Martin, whose criticisms Efird and Stoneham discuss, are actually criticizing the more plausible presumption of contingency, not the less plausible presumption of possibility. I find it odd that Efird and Stoneham do not remark on this important difference. Perhaps the reason is that the presumption of contingency looks like a strict strengthening of the presumption of possibility, since ‘p is contingent’ is written in modal logic ‘◊p & ◊~p.’ Nevertheless, there is this important difference: it is possible to have evidence against the contingency of a proposition without having any evidence that bears on the question whether it is necessarily true or necessarily false. In this scenario, the presumption of contingency would be rebutted, but the presumption of possibility would not. Thus if one were entitled to presume possibility, rather than contingency, one would be entitled, in such a scenario, to infer that the proposition was necessarily true. But here things get all screwy, since one could just as easily have started from the negation of the proposition one in fact started from, and presumed that proposition to be possible, in which case one would come to the opposite result. This seems bad.
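The difference between the two presumptions can be made concrete with a toy possible-worlds model. The following Python sketch is my own illustration (the function names and world labels are invented for the example), not anything from Efird and Stoneham’s paper:

```python
# Toy S5-style evaluation: every world is accessible from every world,
# so a proposition is just the set of worlds at which it is true.

def possibly(p, worlds):
    """Diamond-p: p holds at some world."""
    return any(w in p for w in worlds)

def necessarily(p, worlds):
    """Box-p: p holds at every world."""
    return all(w in p for w in worlds)

def contingent(p, worlds):
    """p is contingent: possibly p AND possibly not-p."""
    return possibly(p, worlds) and possibly(set(worlds) - p, worlds)

worlds = {"w1", "w2", "w3"}
p = {"w1", "w2", "w3"}   # true at every world: necessary, so NOT contingent
q = {"w1"}               # true at just one world: contingent, not necessary

assert necessarily(p, worlds) and not contingent(p, worlds)
assert contingent(q, worlds) and not necessarily(q, worlds)

# The scenario from the text: evidence rebuts the contingency of p
# (so p is either necessary or impossible) without settling which.
# Presuming possibly-p then yields necessarily-p; presuming
# possibly-not-p instead yields necessarily-not-p -- opposite verdicts
# from the same evidence, which is the worry raised above.
```

The asserts show how a proposition can fail to be contingent while still being possible, which is exactly the gap the presumption of possibility exploits.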

In any event, Efird and Stoneham report that in other work they defend certain principles of possibility which entail that it is possible that there are no concrete beings. The main focus of this paper is, however, on the methodological point about how this is done. The answer is that one is entitled to presume the possibility of nothing right off the bat. If one wants to provide a positive argument for the possibility of nothing, what one needs to do is identify a plausible principle of possibility which entails it.

I have two questions about Efird and Stoneham’s approach. First, I am unclear on what the justification for the asymmetry between possibility and necessity is supposed to be. Second, I’m not totally sure what they mean by ‘theory.’

Taking the first issue first, Efird and Stoneham’s approach sure seems to stack the deck in favor of the possibility of nothing. This is a result of their claim that a modal theory should contain a plurality of principles of possibility. Their argument for this claim is that no one has succeeded in identifying a single criterion of possibility that, all by itself, captures enough of our modal intuitions. But what seems to me to be left unjustified is their focus on principles of possibility rather than principles of necessity. After all, we have intuitions about necessity as well. In fact, Efird and Stoneham seem willing to admit at least two principles of necessity: everything is necessarily self-identical, and every proposition knowable a priori is a necessary truth (pp. 161-162; I say they ‘seem willing to admit’ these principles, although they do not unambiguously endorse them). But if there is a plurality of principles of necessity, then why presume possibility rather than necessity? After all, we might have overlooked a principle of necessity just as easily as a principle of possibility!

On the second point, Efird and Stoneham say, “we can regard any assertion or belief which organizes some data, usually by categorizing it or deriving it from some variables, as theoretical with respect to that data” (p. 146). They are quite clear that the theory/data distinction is a relative one. But, oddly, they say they are going to define ‘theory’ and then instead tell us what it means for an assertion or belief to be theoretical. Perhaps a theory of some data is the set of beliefs one holds which are theoretical with respect to that data. Now, it seems to me that they want these beliefs to be justified by their use to ‘organize’ the data. Is this justification an inference to the best explanation? If so, then this ‘organization’ must be some kind of explanation by unification. Yet it is far from clear that a theory of the extent of possibility of the sort Efird and Stoneham envision explains anything. Certainly one can explain an individual modal intuition by appealing to some more general principle that entails it, but would a hodgepodge of such general principles, with nothing to tie them together, really count as an explanation of our modal intuitions in general? I’m dubious.

(Cross-posted at