Libertarianism
January 31, 2009 — 9:09

Author: Alexander Pruss  Category: Free Will  Comments: 2

Dennett (see Vallicella’s discussion here) discusses an argument rather like the following, and criticizes it for being like an argument that starts from the assumptions that every mammal has a mammal for a mother and that there have been only finitely many mammals. But nonetheless, the argument strikes me as very plausible:

  1. If E is a mental state or decision that I am responsible for to any degree, then either I, as libertarian cause, am among E’s causes, or else a mental state or decision that I am to some (perhaps different) degree responsible for is among E’s causes, or both.
  2. I have had only finitely many mental states and have made only finitely many decisions.
  3. Nothing is a cause of itself, and there are no causal circles.
  4. Therefore, if I am responsible for any mental state or decision, I have engaged in libertarian causation.

(Here, I understand “libertarian causation” as agent causation or any reasonably similar libertarian substitute, such as Kane’s.)
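To make the inference explicit (the labels E0, E1, … below are merely bookkeeping of mine, not anything in the premises): suppose I am responsible to some degree for a mental state or decision $E_0$, but have never engaged in libertarian causation. Then by (1) some mental state or decision $E_1$ that I am to some degree responsible for is among $E_0$’s causes; applying (1) to $E_1$ yields an $E_2$, and so on:

$$E_0 \leftarrow E_1 \leftarrow E_2 \leftarrow E_3 \leftarrow \cdots \qquad (\text{each } E_{k+1} \text{ among the causes of } E_k).$$

By (3) there is no self-causation and no causal circle, so the $E_k$ are pairwise distinct, and I would have had infinitely many mental states or made infinitely many decisions, contradicting (2). So if I am responsible for any mental state or decision, I must somewhere have been a libertarian cause, which is (4).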

One could try to get out of the argument by positing an infinite number of past mental states and/or decisions. I do not think that would be a plausible way out, not only because of the implausibility of the infinitary posit, but because it wouldn’t get at the heart of the worry.

I think the mammal parallel is predicated on a common and serious mistake in philosophical thinking: that if you push an explanation far enough back, the problem will go away. One of the pressing philosophical problems for the Presocratics was why the earth doesn’t fall down the way all other massive objects do. One author had the brilliant idea that the earth stands on long pillars. Now, you might think that doesn’t help–one will then ask what holds up the pillars. But the author’s delightful solution was that the bottoms of the pillars were shrouded in mist.

Dennett’s idea is that as you go back in the chain of causes, you get to states of lesser and lesser responsibility, and finally the problem disappears, presumably in a blur of vagueness. But that, I think, is mistaken. For while responsibility can differ in degree, there surely is a sharp distinction between things that one is not at all responsible for (such as the extinction of the dinosaurs) and things that one is at least somewhat responsible for (such as the fact that I am now breathing at this very moment–I could, after all, stop for a minute or two). Not so, at least on standard evolutionary views, for mammality–there the difference can plausibly be taken to be vague.

But there is a second kind of criticism one can make of Dennett’s idea, and it is a strengthening of the intuition in (1). It seems to me that when I am responsible for something, and I do not agent-cause that for which I am responsible, then my responsibility for the causes should, in the aggregate, be of no lesser degree than my responsibility for the effect. Suppose that I am forced at gunpoint to rob a store. Then I have a degree of responsibility for the robbery (I could have chosen to die instead), but my degree of responsibility is diminished. It seems clear that any further things–including my own mental states–that the robbery deterministically causes are ones for which I have a likewise diminished degree of responsibility, unless some other causal stream injects more responsibility. (Suppose that engaging in the robbery deterministically causes me to have less respect for property; then I am no more responsible for that decrease than for the robbery, assuming I wasn’t responsible for the tendencies that made the one thing cause the other.) I know that Dennett disputes this intuition (and, on the other side, so does Kane), but it seems pretty plausible, though it is very hard to formulate exactly. (E.g., I might be somewhat responsible for a lot of mental states, but more responsible for their disjunction.) And if I am right, then the case is quite unlike the mammal case. For while there is no problem according to evolutionary theory about something more mammalian arising out of something less mammalian, there is a problem about an increase in responsibility.
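One rough candidate regimentation of the intuition (the notation $r(X)$ for my degree of responsibility for $X$, and the unspecified aggregation $\mathrm{Agg}$, are mine; this is only a sketch, not a settled formulation): if I am not an agent cause of $E$, and $C_1, \dots, C_n$ are $E$’s causes, then

$$r(E) \le \mathrm{Agg}\big(r(C_1), \dots, r(C_n)\big).$$

In the gunpoint case, if the robbery (together with things I am not at all responsible for) exhausts the causes of my lessened respect for property, this says my responsibility for that decrease is no greater than my already diminished responsibility for the robbery. The parenthetical point about disjunctions is one illustration of why the right choice of $\mathrm{Agg}$ is hard to pin down.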

Comments:
  • Heath White

    Strictly speaking, Galen Strawson (“The Impossibility of Moral Responsibility”) would agree with your conclusion.

    January 31, 2009 — 19:19
  • And he’d think the argument is sound. 🙂
    Here’s an argument for (1). We can tell stories about cases where a person’s decision is determined by prior features of the person’s character combined with external circumstance but the person is nevertheless responsible for the choice. But these are, invariably, stories where those features of character are ones the person is responsible for having.
    I still think that one of the most plausible accounts of how responsibility can be compatible with determinism is in Hume’s Enquiry. But it is central to that account that Hume endorses the claim that we punish people for the vicious character revealed in their wrongful actions. This is only plausible insofar as we can appropriately hold the person responsible for her vicious character, and of course we can only appropriately hold x responsible for S if x is responsible for S.
    Here’s another argument for (1).
    i. If E is bad for x, and x is neither identical with an agent cause of E nor responsible for at least one of the causes of E, then E is a mere misfortune for x. (Premise)
    ii. An evil decision that x is responsible for is not a mere misfortune for x. (Premise)
    iii. An evil decision is bad for the decision maker. (Premise)
    iv. Therefore, (1) holds in the case of evil decisions.
    v. If (1) holds in the case of evil decisions, it holds in general. (Premise)
    vi. Therefore, (1) holds in general.
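    To spell out the step from (i)–(iii) to (iv), with abbreviations that are mine: let E be an evil decision that x is to some degree responsible for. By (iii), E is bad for x; by (ii), E is not a mere misfortune for x. Contraposing (i),

    $$\mathrm{Bad}(E,x) \wedge \neg\mathrm{MereMisfortune}(E,x) \;\rightarrow\; \mathrm{AgentCause}(x,E) \vee \exists C\,\big(\mathrm{Cause}(C,E) \wedge \mathrm{Responsible}(x,C)\big),$$

    so either x agent-causes E or x is responsible for at least one of E’s causes, which is the disjunction (1) requires in the case of E. Premise (v) then generalizes from evil decisions to mental states and decisions in general, giving (vi).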

    February 2, 2009 — 14:02