Can Robots Think?
April 19, 2008 — 10:58

Author: James Beebe  Category: Books of Interest  Comments: 32

I just received a notice from Blackwell about the new book in the Great Debates series featuring a debate between Alvin Plantinga and Michael Tooley on Knowledge of God. I noticed that one chapter by Plantinga is called “Can Robots Think? A Reply to Tooley’s Second Statement.”
I have two questions: (1) Can someone tell me what Plantinga’s position on artificial thinking is? (2) Can someone give me any good reason why robots will not be able to think in the future?
Theists in general are quite hostile to the possibility of genuine artificial intelligence, but I have yet to hear a good reason why. Suppose that substance dualism is true. This means that you and I do our thinking with a non-physical mind/soul. The fact that we do our thinking with a non-physical mind/soul doesn’t show that thinking can only be done with a mind/soul. Compare: The fact that birds do their flying with feathered wings does not mean that feathered wings are required for flying. Helicopters, planes, rockets, etc. fly without feathered wings. So, I can’t see why the truth of dualism would preclude AI. And I’m not sure what other good reasons there are.
James

Comments:
  • James Beebe

    The comments link for this post was initially broken, but I think I have fixed it now. Please commence commenting.
    James

    April 19, 2008 — 21:03
  • Josh B.

    I believe I can give you an answer to (2) – robots will not be able to think in the future because robots (as we now develop them) are digital computers, and the digital computer cannot, in principle, think (not as a material thing, but as a digital computer). Hubert Dreyfus’s book What Computers Still Can’t Do fleshes out this argument in rigorous detail.
    It’s worth mentioning that this debate about digital computers has nothing to do with materialism vs. immaterialism – for Dreyfus and others, it’s all about whether the digital computer is a good model for the human mind. One of his most interesting complaints is that artificial intelligence as it has historically been practiced hasn’t been materialistic enough – that is, its practitioners haven’t considered how the really important thing in human thinking is the human body.
    So is AI possible? Sure, probably. Just not with digital computers. Maybe the next technology shift will come up with the appropriate model for getting AI to work.

    April 19, 2008 — 21:13
  • James Beebe

    I should clarify my question. I know the Chinese Room argument and the Dreyfus stuff. (Dreyfus, of course, admits that while serial digital computers may be bad models for the human mind, parallel distributed information processors may not be. So, his argument doesn’t ultimately show that there can be no AI.)
    What I’m really trying to express is my puzzlement as to why *theists* should be so overwhelmingly opposed to AI.
    James

    April 19, 2008 — 21:18
  • Jonathan

    James,
    I am a bit confused here. Are theists “opposed to the idea of AI” – meaning, they would oppose attempts to create “thinking” machines, etc. or “opposed to the idea of AI” meaning that they genuinely think that it’s impossible, and will never come about?
    –Jonathan

    April 20, 2008 — 9:36
  • James, I can offer a reason why substance dualists would be opposed to the possibility of AI. If you are a substance dualist, not only do you think substance dualism is true, you think it can be known to be true. How would we know that substance dualism is true? I suppose it is conceivable that one might come to know that it is true by the eventual failure of every attempt to produce a machine that replicates human thought, or by divine revelation, but anyone who holds that we know now that substance dualism is true, and that this is based on philosophical argumentation rather than on faith, is likely to hold that we can know a priori that a purely material object cannot think. This is not part of the content of substance dualism, but it is part of the means that most substance dualists are likely to use to demonstrate their position. Consequently, they will also be committed to thinking that AI is impossible.
    The connection between substance dualism and theism is also an interesting matter. I think this is partly because most people, Christians or not, think that there are two alternatives: substance dualism and materialism. Most Christians believe that humanity is created in God’s image, that we will survive our death, and that we will enjoy this post-mortem survival while our bodies are rotting. This implies that materialism is false, and when they hear about substance dualism, they naturally say that this corresponds to what they already believe. (This at least has been my experience in the classroom). Of course, I’m not saying that philosophers Taliaferro and Swinburne choose substance dualism for reasons as simplistic as this, I’m only explaining the general perception that substance dualism is the Christian position.
    Personally, I think hylomorphism is the best theory about mind and body. Consequently, I have no a priori reason to reject the possibility of artificial intelligence. At the same time, there is a long tradition of Christian hylomorphism that addresses the issue of post-mortem survival in a disembodied state.
    As a matter of fact, article 365 of the Catechism of the Catholic Church states that “The unity of the soul and body is so profound that one has to consider the soul to be the ‘form’ of the body, i.e. it is because of its spiritual soul that the body made of matter becomes a living human body; spirit and matter, in man, are not two natures united, but rather their union forms a single nature.” If anything, that creates problems for substance dualists. (I’m not saying that a substance dualist couldn’t accept this proposition, only that it would require a nuanced statement of substance dualism. Also, I know that the Catechism is not accepted as a normative source by all Christians – comparisons with similar sources from other denominations would be interesting).
    Despite this, I find that some students have a tendency to see hylomorphism as, at best, a compromise between godless materialism and the truly religious position of substance dualism.

    April 20, 2008 — 13:27
  • James,
    I think you’ll find an answer to question (1) in Plantinga’s article ‘Against Materialism’, Faith and Philosophy 23:1 (2006).
    Plantinga gives two arguments against a materialist view of human beings. The second argument boils down to this: thoughts have content (i.e., intentionality) but no material objects have content (in that sense); therefore, we are not material objects. Insofar as computers can be said to have content, this is only content in a derived sense.
    Agree or not, clearly this argument has more to its credit than “substance dualism is true, so strong AI must be false.”
    This may or may not give you an answer to question (2) as well. 🙂
    James A

    April 20, 2008 — 14:43
  • Heath White

    I think, like the other James, that the decent arguments against the possibility of machine intelligence are of the form:
    Thinking requires intentionality.
    Nothing material has (non-derivative) intentionality.
    So nothing material can think.
    or
    Thinking requires consciousness.
    Nothing material can be conscious.
    So nothing material can think.
    There is a traditional argument, from Aquinas I think, for the claim that nothing material has intentionality. The argument in rough form is that intentionality requires concepts which are universals; but all material things are particulars; so nothing material can have intentionality. I have never studied a detailed version of this argument so don’t feel able to comment on its merits. I have never seen a version of it in a philosophy of mind text, though, or in a contemporary analytic journal.

    April 20, 2008 — 19:37
  • James A., thanks for the reference to the Plantinga article on materialism. Thanks to everyone else for their comments as well.
    Some replies. Since dualism is a thesis about humans, I can’t see that there is any implication from it concerning non-human intelligence.
    Also, if the only arguments theists can give for the impossibility of AI are like “thoughts have content (i.e., intentionality) but no material objects have content…” then I will continue to think that theists don’t have good reasons to be skeptical about AI. The argument presumes that no form of naturalistic semantics is true. That may be right, but that’s a slender reed upon which to base one’s skepticism about AI. I also suspect most theists who are skeptical about AI haven’t taken the time to come to any considered opinions about the naturalistic semantic theories of Dretske, Millikan, Fodor, and other defenders of causal theories of reference. So, the alleged failure of naturalistic semantics can’t really be what motivates them to be skeptical about AI.
    There is even a question as to whether the foregoing arguments against the possibility of AI presuppose not just the falsity of all actual and possible naturalistic semantics but also all forms of semantic externalism, which is the current orthodoxy in phil of mind and language.
    Consider Lynn Andrea Stein’s robot Toto, which she worked on at MIT. It is basically a trash can on wheels with twelve cameras around its tin head. It is programmed with a map-making ability and also a semantically important ability to name conspicuous geographic features of its environment. It is not told what the floor plan of a given building is. It is not told what the names of geographically prominent features are. It goes out, finds those features, and then names them. Toto stores a map of its surroundings in its memory and can answer questions about it.
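    To make this concrete, here is a toy sketch of the kind of internal structure I have in mind (this is purely my own illustration in C, not Stein’s actual implementation; the names record_feature and report are invented for the example). The point to notice is that the symbol attached to a landmark is coined by the robot itself in the course of its own exploration, rather than being stipulated in advance by a programmer:
    #include <stdio.h>
    #include <string.h>
    struct landmark {
        char name[8];   /* the symbol the robot itself coins, e.g. "A" */
        double x, y;    /* where the robot was when it noticed the feature */
    };
    static struct landmark map[64];
    static int n_landmarks = 0;
    /* Called when the sensors pick out a conspicuous feature at (x, y);
       the robot coins the next unused letter as that feature's name. */
    static void record_feature(double x, double y) {
        if (n_landmarks >= 64) return;
        snprintf(map[n_landmarks].name, sizeof map[n_landmarks].name,
                 "%c", 'A' + n_landmarks);
        map[n_landmarks].x = x;
        map[n_landmarks].y = y;
        n_landmarks++;
    }
    /* Answer a question about the map: where is the landmark with this name? */
    static void report(const char *name) {
        for (int i = 0; i < n_landmarks; i++)
            if (strcmp(map[i].name, name) == 0) {
                printf("%s is at (%.1f, %.1f)\n", name, map[i].x, map[i].y);
                return;
            }
        printf("no landmark named %s\n", name);
    }
    int main(void) {
        record_feature(0.0, 12.5);  /* stands in for an end of a hallway the robot found */
        record_feature(3.0, 7.0);   /* stands in for a doorway */
        report("A");
        return 0;
    }
    Run on its own this just prints “A is at (0.0, 12.5)”; the interesting issue is whether the robot’s coining of “A” in the course of its own exploration gives that symbol any purchase on the hallway itself.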
    I don’t think that Toto or any currently existing robot can actually think. But here’s a very interesting question: After drawing a map and designating a particular end of a hallway as ‘A,’ does the symbol ‘A’ in Toto’s tin head refer to a particular place? It seems that it does. If, as many semantic theorists claim, reference is the basic semantic property, I believe that Toto proto-refers to a particular hallway. I don’t think Toto full-blown-refers because Toto isn’t “cognitively” sophisticated enough.
    The idea that material things can’t have intentionality, etc., presumes not merely that they don’t but that no amount of the right kinds of causal interaction between them and the world could make for reference. It seems to me that some more sophisticated successor to Toto might be able to full-blown-refer to the external world. In any case, I don’t see that theists have good reasons for rejecting this possibility.
    James

    April 21, 2008 — 9:01
  • I can’t improve on the above, but having now become intrigued by the question of why theists and AI don’t mix well, here are my clumsy thoughts on the question “Can someone give me any good reason why robots will not be able to think in the future?”
    What is thinking? Responsibly acting on the known contents of propositions? Plausibly something like that; but then the AI robot has to have an inner life, which has to materially affect its thoughts, for it to be thinking. So consider looking at its material structure (which is surely possible since we constructed it in the first place). The action caused by its thinking cannot be random (as that would not be thinking) so it must be determined. It must be that the thinking is epiphenomenal to the material structure.
    Now, there are problems with epiphenomenalism (recently much discussed on many blogs, e.g. this), but my real problem with it (the problem that I intuit, and which other theists may easily intuit too) is as follows. What is it that gives rise to such epiphenomenal thoughts? The material or the structure? If the material, we have a lot of evidence (from chemistry et al) that matter is just not like that. For matter to be like that would be weirder than dualism! But also, then AI is unlikely (since chips are so unlike brain cells materially, or substructurally). So, is it the structure?
    But all such materially instantiated structures (viewed as spatio-temporal structures, since time is fundamentally just another physical dimension, if there is nothing like spiritual free will) are structurally isomorphic to sub-structures of the infinitely complex (and complete and symmetric and hence apparently simple) structures that are instantiated everywhere, if spacetime is a standard continuum; or of the extremely complex structures that are all over the place anyway even if not (e.g. the spatio-temporal quantum mechanical interconnections of the particles within rocks and such). So we then have a bizarre version of panpsychism, which theists would not normally be able to see the point of God creating, and then not informing us about (although atheists may have no problem with this, and may indeed be led towards asserting that it is plausible eventually).
    Basically, one just intuits that, from all one knows of the material of this world, thinking things must be of another kind entirely; and whatever arguments are put up against that intuition, or whatever demands for clarification are made, we are no more likely to reject that intuition than we are to reject all our knowledge claims because of, say, the possible BIVs. Theists have no problem with such other kinds, atheists do (if only the loss of a fundamental monism, such as is often confused with theistic Cartesian dualism, and is then often seen as a good reason for rejecting the latter).
    Consequently, while most atheists have to accept a real possibility of AI (and then act like such is only natural), theists can say that, while AI is possible in some possible world (it is a lot like a fair enough view of our own creation by God), nonetheless it is prima facie unlikely in this world (where God created Man by breathing into the clay of this world, and where we have no such way of breathing into our robots). And it is an unlikelihood of the sort that in ordinary language is called impossible.

    April 21, 2008 — 9:04
  • I’ve always thought that one consequence of Hasker’s Emergent Dualism is that it makes AI not only possible but perhaps even likely. If the thing that thinks is a consequence of natural materials being arranged with a certain degree of complexity, we have decent reason to think that we could recreate this latent mechanism in nature. That makes us potentially the creators of immaterial selves simply by building robots; I find that sort of weird.
    Hasker’s account goes between the horns. He agrees with Plantinga that a material object can’t think and that an immaterial object can. But he would add that immaterial objects (at least human ones) are a result of material objects being arranged in a certain complex way. So while the robot can’t think, it can give rise to something that can. Again, this seems weird to me.

    April 21, 2008 — 11:17
  • James, you say that you “will continue to think that theists don’t have good reasons to be skeptical about AI.” Is your point
    (i) that you don’t think there are good reasons to be sceptical of AI, so there are no good reasons for theists to be sceptical of AI;
    (ii) that if there are good reasons to be sceptical of AI, they are only known to those who have studied the arguments of Dretske etc. carefully, and most theists have not studied those arguments carefully, so most theists have no reason to be sceptical of AI;
    (iii) that a commitment to theism may involve a commitment to substance dualism, but such a commitment would not provide grounds for rejecting AI, therefore theists qua theists have no good reason to reject AI?
    To put the point another way: Plantinga, Taliaferro and Swinburne are all theists, substance dualists, and full-time philosophers. They would all hold, I think, that their arguments for substance dualism are independent of their arguments for theism. They would also hold, I think, that there is some proposition, P, such that P implies that substance dualism is true, and that AI is impossible. P would be something like ‘A purely material object cannot think.’
    Of course, one man’s ponens is another woman’s tollens: if P implies AI is impossible, then evidence for the possibility of AI is evidence against the truth of P, and if dualism is derived from P, evidence for the possibility of AI would also be evidence against dualism. So a defender of P had better have something to say about why evidence of the possibility of AI is illusory. A lack of response to Dretske, Millikan etc. would be a failing in a professional philosopher advancing a dualism derived from P. If this is your point, then a good response would involve a more detailed discussion of naturalistic semantics and its alleged failures.
    Alternatively, there are many theists who are not full-time philosophers. They believe in life after death as part of their whole religious package, and may not even be aware of detailed philosophical discussions of personal identity. Granted, let us say, substance dualism is part of their whole religious package, it doesn’t seem that their substance dualism is derived from some P that implies AI is impossible. So, why should they reject AI?
    Here’s an answer. Such theists will probably say that it is because humans possess a soul that we are made in the image of God, and it is because we possess a soul that we are capable of moral reflection. Furthermore, when God gave us souls, he set us apart from the rest of creation: this was a miraculous feat, something only God can do. If a human made a computer capable of moral reflection and agency, then it would seem that humans have performed an act that, according to the theistic story about humans, only God could perform. A thinking robot would surely be capable of moral reflection, and thus its existence would undermine the theistic story about humans. That, or something like that, is perhaps the motivation that you are wondering about.

    April 21, 2008 — 12:18
  • James Beebe

    Ben,
    I think your explanation of why average theists are skeptical of AI is a very good one. Moral agency has to be part of the story. Thanks for those reflections.
    As for your (i) through (iii) above, I suppose I was thinking of all of them, without distinguishing clearly between them.
    Common arguments against AI like the following won’t hold much water for those theists attracted to some form of hylomorphism:
    1. Thinking requires intentionality.
    2. Nothing material has (non-derivative) intentionality.
    3. So, nothing material can think.
    Nothing that is only or purely material (i.e., that is uninformed) has intentionality because pure, uninformed matter is (according to the hylomorphic story) nothing at all. It’s pure potentiality.
    So, if some matter-form composites can think and some others cannot, the ultimate argument against AI is going to have something to do with the nature of the forms that are configuring the matter in question. It may also have something to do with the intrinsic capacities of the type of matter we are dealing with as well.
    But I’m not sure how the hylomorphic argument against AI is supposed to go. If robots will never be able to think, it will be because it is impossible for them to have the right form configuring their matter. But what reason is there to suppose that there could never be such a form or that we could never construct a robot with such a form? Any suggestions from hylomorphists would be helpful and interesting.
    Thanks,
    James

    April 21, 2008 — 13:11
  • robert allen

    Ben,
    Hylomorphism does not entail that our bodies have anything to do with thinking. It merely says that the body is dependent upon the soul for its life. They are a unit, to be sure- a point which even Descartes emphasizes in one of his letters to Elizabeth- but that does not tell against the possibility of disembodied thinking. The Almighty does not just cobble together a person’s body and soul. But that He united them- even to form a “single nature”- entails their separability. So I don’t see how the Catechism affords conceptual hope to those who take thinking robots to be possible. Even if the Almighty were to ensoul a robot, that would not mean that its body, which is what we are currently trying to conceive of as conscious, would think. 365 is only meant to forestall the Manichæistic idea that our bodies are to be “despised.”
    Happy St. Anselm’s Day to all!

    April 21, 2008 — 13:37
  • Dear Robert – how appropriate that I am grading papers on the Ontological Argument on St. Anselm’s Day (for the record, one out of about ten students thinks the argument is sound).
    I’d agree with you that Article 365 of the Catechism does not entail that our bodies have something to do with thinking, and I certainly don’t think it was intended to hold out conceptual hope to those who think robots will be able to think. However, Article 365 is not a definition of hylomorphism – it uses the language of hylomorphism without specifically endorsing Aristotle’s theory, and that leaves it open for Catholic philosophers to decide how much of Aristotle’s theory they want to absorb, and to consider the implications of that theory.
    It seems to me that, if one takes Aristotle seriously, then our bodies have a lot to do with thinking, and I also think it leaves open the possibility of a thinking robot. I ask myself the following questions:
    (1) Is it physically possible to build something that replicates the dynamic structure of the human brain? By ‘dynamic structure’ I mean: for every state of the human brain, there is a corresponding state in the replicant; for every interaction of the human brain with a body and environment, there is a corresponding interaction of the replicant; and this relationship holds over time. (Sorry, this definition is a little hasty: grading papers on the Ontological Argument is tiring work.)
    This surely is a question for scientists rather than philosophers and theologians. I can’t see anything in the philosophical theory of hylomorphism that would prevent one from answering yes, although I understand the theological motivations that might lead someone to prefer a negative answer.
    (2) If a scientist completed the task described above successfully, would the result replicate the form of a human being, and thus have a soul? I think that on philosophical grounds, the hylomorphist should answer ‘yes’.
    I think these thoughts are neither required by nor prohibited by the Catechism, but so many times in the past, advances in science and technology have taken Christians by surprise. It might be as well if theologians are conceptually prepared for thinking robots, in case one day we encounter one on the street.

    April 21, 2008 — 20:58
  • James Beebe

    Ben,
    I like the distinction you’ve drawn in (1) and (2). And what you say about them seems to be precisely what I would have liked to have said about the matter. I’ll have to remember the distinction for future discussions. Thanks for your input.
    James

    April 21, 2008 — 21:29
  • But Ben, why should answering (1) be a job for scientists? The question is, is it physically possible? And if the proper functioning of the human brain requires inputs from a soul, then the answer is no. Even if one could replicate the dynamic functioning of brain cells, and connect them up as in a human brain, one would not get a fully functioning brain because there would be something vital missing. That answer cannot be supplied by scientists at present; and according to that answer, if scientists ever do so replicate a human brain it will fail to function well enough to, for example, write papers on epiphenomenalism (or, I’d guess, learn our language in the first place). It would be extremely interesting to see what it could replicate, of course, but such science must await such advances as will make it possible; and nonetheless, why should answering (1) be a question for scientists?

    April 22, 2008 — 12:28
  • Enigman: suppose that the answer to question (1) is ‘No’. In that case, scientists will eventually discover that the answer is ‘No’. And I don’t think that would require them to create an exact physical replica of a human brain and see how it functions. Rather, as research into the brain continues, they will reach a point where they say ‘The behaviour of the brain at this point is inexplicable based on its physical properties: it acts as though it is responding to signals that it receives from another source, but there doesn’t seem to be any way that we could make something capable of sending such signals. If we were to build an exact physical replica of this part of the brain, it wouldn’t function in the same way unless those signals came out of nowhere.’
    If the answer is ‘Yes’, if the dynamic function of the brain can be replicated, that too is something scientists should eventually be able to discover.
    Now, I’m aware that Popper and Eccles collaborated on a dualist-interactionist view of mind-brain relations. I can see how a strong case could be made out that the answer to (1) is bound to be ‘No’ given a well-founded theory about what human minds actually accomplish, and what the physical matter that composes the brain is capable of. Since our intuitions about the behaviour of physical matter have so often proved to be wrong in the past, such a theory would have to be based on credible scientific study of what physical systems can and cannot do, which is why scientific involvement is important.
    As a hylomorphist, I think that every physical entity, not just the human body, has form as well as matter. It is just that the form of the human body is special, enabling us to receive and manipulate many forms. A substance dualist holds that humans have a non-physical part, a soul, but other objects are entirely physical. I think that substance dualism commits one to answering ‘No’ to (1), whereas hylomorphism does not.
    What if you were to tell a scientist that the Christian faith involves a commitment to answering ‘No’ to question (1)? Well, I think that a scientist researching the issue would do well to continue with their research program, since if the answer really is ‘No’, they will discover that eventually anyway.
    If the answer turns out to be ‘Yes’, it will be open to the Christian to say that it was a mistake to suppose that Christian faith involved a commitment to a ‘No’ answer, just as it is a mistake to suppose that commitment to Christianity involves a commitment to Ptolemaic astronomy, a seven-day creation, and opposition to the view that humans have non-human ancestors. I don’t know whether the answer to (1) is ‘Yes’ or ‘No’. I can see why aspects of Christian belief would motivate one to expect a ‘No’ answer, but I don’t see any overwhelming theological case for a ‘No’: I think hylomorphism is consistent with Christian faith, and with a ‘Yes’ to (1), and I also happen to think that there is a good philosophical case for hylomorphism.

    April 23, 2008 — 11:40
  • Enigman

    Re (1): “This surely is a question for scientists rather than philosophers and theologians.” Well, the original question was, are there good reasons for a theist to believe that robots are unable to think, and so I took (1) in that context. There are such reasons (as above), and they also involve answering “No” to (1), and they have scientific, philosophical and theological parts and aspects.
    Prima facie, the scientific aspects should be the main ones; they alone could lead to decisive evidence, one would think. I follow what you say; but suppose (as I think is likely, on the existing evidence) the answer is “No” because of a quantum-mechanical mind-brain connection. Then such a gap as you mention would be in principle unobservable. Such direct scientific experimentation would be unable to yield the true “No” answer. But the answer “Yes” would be (as it is now) presupposed by most neuroscientists, and so other sorts of apposite experimentation would not be undertaken (as is now the case).
    Now, you are right that it may well be up to scientists to provide the extra evidence that will make the truth more evident. (I certainly wouldn’t tell scientists not to do more of their favoured research; nor would I expect much progress if it was only professional philosophers and theologians involved in this collective endeavour.) But scientists do tend to assume a “Yes” answer when interpreting their data, and when choosing which models to investigate further, and to do so for bad philosophical reasons (and often atheistic ones, since physical closure is hardly so self-evident if one believes that miracles are possible). So if the answer really is “No,” and if there are already good reasons for answering that way (as I think there are), then philosophical theists could have a valuable contribution to make, to the interpretation of that very scientific research. (Of course that would not be a matter of, say, simply telling them that the Bible says that their research is evil, or anything of that all too common sort.)

    April 24, 2008 — 21:42
  • If computers can be intelligent, they can be persons. Imagine now a world where the only contingent persons are computers, and all the computers are intelligent. Then, I think there has to be a fact of the matter as to how many persons there are in the world. But it can be false that there is a fact of the matter as to how many computers there are, since there are in general no non-arbitrary ways to count computers. I am writing this comment on a dual-core laptop. Do I have on my lap one computer (with two cores), two computers (one per core), or three computers (the two cores, plus the whole)? Let me complicate it slightly. I just opened a window running an emulator of a PalmOS handheld computer. Do I now have one more computer sitting on my lap?
    What if I had a seven core computer, but the cores were tied together in such a way that they were always doing the same thing (maybe for reliability reasons). Would I have seven computers or one?
    There are other ways of thinking of these issues. But I really doubt there is any way of thinking about these issues which will yield a non-arbitrary way of counting computers, or computer programs for that matter. Hence, computers cannot be persons.
    Note, too, that for some people the argument for dualism goes through the rejection of computer intelligence. If dualism is false, we are computers. Computers cannot be intelligent, but we are. So, dualism is true.

    April 25, 2008 — 16:15
  • Let me give thumbnail sketches of a couple of possible lines of argument against AI.
    1. Causal theories of reference are successfully refuted by Putnam’s rearrangement arguments which show that, unless one has a magical method of reference, there is no way to have the kind of reference that robust realists want. Putnam rejects the robust realism, but one might more reasonably opt for a magical theory of reference. Computers don’t do magic, but immaterial souls might be able to.
    2. Searle has argued that one can reinterpret a computer system running just about any particular program as a computer system running another program. Thus, one can reinterpret a program predicting the weather as a program thinking about trees–if it is possible for a program, or a computer running the program, to think. Thus, if some programs or their computers think, so do just about all programs.
    3. One might see as self-evident the correctness of the claim that there is no understanding of Chinese in the Chinese room.
    4. It seems extremely unlikely that one will be able to extend causal theories of reference to yield reference to, say, moral or other normative facts. Mathematical facts might be easier but still will be hard. Facts of general ontology (e.g., that predication is (or is not) grounded in a relation to properties) will be hard, too.
    The theist tends to believe in moral facts, so this will be a serious objection to the claim that causal theories of reference are correct of us. Moreover, it seems unlikely that any naturalistic theory will make moral facts referenceable. And while one could have a piecemeal theory on which causal theories hold of thoughts about physical facts but some supernaturalistic theory holds of thoughts about moral facts, Ockham’s razor may be against that. Furthermore, it might turn out to be the case that one cannot be intelligent without normative concepts (such as the concept of a reason).
    5. There is also this argument.

    April 25, 2008 — 16:59
  • There’s also a weak inductive argument for the Christian. We know of only three kinds of persons: humans, angels and God. These three kinds of persons are very different from each other. But what they have in common is that in each of them the thinking is due to something immaterial (here I assume that angels are immaterial; this is the predominant angelological theory, I think, but it has not been unanimously held; in any case, even those in the tradition who denied that angels are immaterial would probably have said that their thinking is due to something immaterial). Therefore, plausibly, thinking is metaphysically tied to something immaterial.

    April 25, 2008 — 17:05
  • Robert Allen

    “Is it physically possible to build something that replicates the dynamic structure of the human brain? By ‘dynamic structure’ I mean: for every state of the human brain, there is a corresponding state in the replicant; for every interaction of the human brain with a body and environment, there is a corresponding interaction of the replicant; and this relationship holds over time.”
    Ben,
    I hope your paper grading is mercifully over.
    Why shouldn’t an anti-materialist of any stripe maintain that the most a scientist would achieve here is a zombie of some sort? Whether one is a substance dualist or a hylomorphist, one should hold, as Enigman points out, that there is only so much a fully functioning brain or some replica thereof could do. It could not “consider itself as itself,” to use Locke’s phrase; it could not become aware that it is engaged in some enterprise; it could not choose, will, intend, or love- and the list goes on. In a word, anything that would entail it being a PERSON and, thus, having rights, an inner life and a personal relationship with God Almighty, it could not be/do. The materialists, who tend to be atheists (cf. my post “I Don’t Believe in Atheists”), would like us to believe that we are nothing special. Let’s not fall for it. To answer James’ original question, why won’t computers ever think? Because thinking beings are essentially thinking beings; consciousness is a part of their nature, which was not bestowed upon res extensa.

    April 26, 2008 — 16:59
  • Speaking as a Christian and full-time professional programmer — it’s a mystery to me why so many theists are against AI. Speaking for myself, I expect computers to be able to think on some level after a few more generations of upgrades / architectural changes. I also am comfortable with the idea of human minds working materially, which is probably why the idea of computers thinking seems to me like a logical next step.
    Take care & God bless
    WF

    April 26, 2008 — 23:06
  • Robert, yes, I have finished grading the ontological argument papers. I still have some examinations to grade, but that’s for another day.
    As I stated in my earlier posts, I’m aware that arguments in favour of substance dualism are often (perhaps I should say almost always) arguments against the possibility of thinking matter. It isn’t surprising, then, that a philosopher who is a substance dualist will be opposed to AI. Their case against AI will be as strong as their case for substance dualism. One thing I was trying to find out from James is whether his original query was simply ‘What is the best case for substance dualism, and has Plantinga or any other advocate of substance dualism really thought about Fodor, Millikan etc.?’ Well, I know that there are other people out there better qualified to answer that question than I am, and I’m willing to leave it to them.
    But I maintain that the case for hylomorphism doesn’t have to be based on the claim that thinking matter is impossible. Rather, hylomorphism can start with reasons for thinking that every entity we encounter in this world is a composite of form and matter. It is impossible for there to be a purely material object that thinks simply because it is impossible for there to be a purely material object – everything has a form. What makes me special is not that I have a form, but the form that I have is, of course, such as to make me special.
    Of course, as a hylomorphist one has to explain what makes the form of a living human being different from the form of any other animal, or the form of any computer that has currently been built.
    (But much closer to the animal than to the computer, of course – the animal at least has a living body, interacting as it does with its environment. A computer does not wander around searching for nourishment; how could it even begin to think? To me, a thinking robot sounds more likely than a thinking computer, and an understanding of the physiological process of thinking as it takes place in human beings is likely to precede artificial replication of that process, if the latter ever happens.)
    But just because one subscribes to hylomorphism, and admits that the form of a living human being is qualitatively different from other forms, it does not follow that there are truths about the form of a living human that are not supervenient on truths about the body of that human (i.e. altering the formal truth would require altering the body).
    After all, death takes place when the body ceases to be a living body, when the soul, the form of life, is no longer present. But the difference between life and death is a physical difference – the soul does not simply depart without regard to the physical state of the body.
    Let me turn your question around: how can we derive from hylomorphism the view that the best a scientist could create would be a zombie? Or, can you show that the only arguments that lead to hylomorphism start from premises that also lead to the view that artificial intelligence is impossible?

    April 27, 2008 — 1:40
  • Ben:
    One route from hylomorphism to the denial of AI would be to argue that artifacts, including computers, don’t have forms in the relevant sense.
    WF:
    I do not dispute that a computer could pass the Turing test, that it might behave in ways that could be taken by everybody for intelligent behavior, etc. What I do dispute is that this would be thinking. Here’s one way to think about it. You write the program add.c:
    #include <stdio.h>
    #include <stdlib.h>
    int main(int argc, char **argv) { printf("%d+%d=%d\n", atoi(argv[1]),
        atoi(argv[2]), atoi(argv[1]) + atoi(argv[2])); }
    You compile it, and then you run
    add 7 5
    Of course it prints out:
    7+5=12
    But did the computer think that seven plus five is twelve? Not at all. We interpret the computer’s output as meaning that seven plus five is twelve, but that is just our interpretation, and we could equally well take another (maybe this is just an algorithm for producing pretty pixel patterns). So now the question is: What more do we need to add to the program to make it not just print out “7+5=12” but to actually think that this is true? I suspect the answer is that whatever we add to the program, there will be multiple equally valid ways of interpreting what the program is saying, and hence it will not be a case of the computer thinking.
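    To make the point vivid, here is a toy illustration (purely my own, and nothing more than the same point restated in code): the very same machine state can be read as arithmetic about numbers or as something else entirely, and nothing in the hardware adjudicates between the readings.
    #include <stdio.h>
    int main(void) {
        int a = 7, b = 5;
        int r = a + b;  /* the machine's state after the addition: the bit pattern 1100 */
        /* Reading 1: the machine has "worked out that seven plus five is twelve". */
        printf("%d+%d=%d\n", a, b, r);
        /* Reading 2: the very same value, read as a grey level for a single pixel. */
        printf("grey level: %d out of 255\n", r);
        return 0;
    }
    Both printf lines report the same internal state; which of the two readings counts as “what the computer is saying” is settled by us, not by the machine.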

    April 28, 2008 — 14:57
  • Ben:
    Somehow, I find it implausible to suppose that whenever you manipulate a bunch of matter into the right positions, a form of a given kind metaphysically has to pop into existence (or has to be created by God?). Yet that is what the supervenience claim would require. So while the supervenience claim is compatible with hylomorphism, it is not very plausible given hylomorphism.
    Consider also the vagueness issue. Start putting together something that has particles arranged Ben-Murphy-wise, maybe in a world that contains only God and these particles. First this electron, then that photon, etc. When you have just the electron and the photon, surely there is no form there, beyond the electron’s form and the photon’s form. Moreover, the question whether there is a further form cannot have any non-epistemic vagueness (else we get vague existence). On the supervenience view, eventually a form will pop into existence. You’ll add one particle, and while previously there was no form of the whole, now, poof, there is a form of the whole. Moreover, it is metaphysically necessary, given the supervenience claim, that the form pop into existence just then, not one particle earlier and not one particle later. This seems implausible.

    April 28, 2008 — 15:05
  • Alexander – of your two last posts, I agree with the first, but disagree with the second.
    In your second post you say:
    “When you have just the electron and the photon, surely there is no form there, beyond the electron’s form and the photon’s form.”
    But in general, the form of a whole is not simply the same as the forms of the parts. When I put bricks together to make a house, a new form gradually comes into existence as the bricks are re-arranged. When the bricks form a house, there is more than this brick’s form and that brick’s form. It is an interesting question how a hylomorphist should deal with the issue of vagueness, but I don’t see that this problem tells against AI in particular (perhaps it undermines hylomorphism). A house cannot vaguely exist, but something that is vaguely like a house could – I don’t know whether that helps.
    Does a house, as an artifact, lack a form ‘in the relevant sense’? Well, it lacks the form of a living being, obviously, but if perceiving an x involves the x’s being formally present to me, then the house had better have a form, because I certainly think about it. Hylomorphism that restricts ‘real’ forms to living beings won’t play the role I want hylomorphism to play in philosophy of mind. And it seems obvious that in some instances at least, I can change the form by changing the physical structure: when a butcher cuts an animal’s throat, it ceases to be a living being. If I demolish the upper story of a two-story house, it becomes a bungalow.
    I agree though that we shouldn’t describe a computer as thinking things about numbers when it makes calculations. My pocket calculator thinks about numbers in much the same way as my camera sees things: that is to say not at all, although they do enable us to extend our range of vision and calculation.
    Just incidentally, I am writing this in an office I used to share with a computer scientist named Gerry – he died a couple of years ago. I remember him commenting that decades ago, he thought that AI was just around the corner, and that it was just a matter of extending the kind of operations computers were already performing, adding levels of complexity. And yet, he said, although we now have computers that are capable of beating the best human chess players, it seems that computers have come no closer to thinking like a child, let alone an adult human being. I don’t think we are on the verge of creating AI by refining current computer programs.
    If you were to ask me what computers lack, I’d say that they don’t have purposes of their own – but I find this idea very hard to spell out.
    What I’m getting at is something more basic than the concept of free-will: whether or not a dog has free-will, it certainly engages in purposeful behaviour: it is frustrated if it fails to get what it wants. A computer does not know joy when it wins a game, nor frustration when it loses. And of course, I wouldn’t be convinced, if a computer were programmed to show a smiley face when it won, that it really enjoyed winning.
    Then I imagine playing chess with a robot.
    The robot’s batteries are running down, and this is affecting its performance. Reluctantly (because it is enjoying the game), the robot tears itself away so that it can plug itself into the wall and renew its power.
    My description, incorporating words like ‘reluctantly’ and ‘enjoying’ begs the question. You could say that having correctly refused to fool myself into thinking something is happy because it is smiling, I now suppose that the addition of artificial legs is enough to create the reality of thought. Or I’m supposing that because the energy source is analogous to food, the robot’s response to being deprived of its energy source is the same as my response to being short of food. But, although I recognise these sources of potential error, I still can’t help thinking that this scenario brings to light an important distinction between a computer and a dog, a gap that might be bridged by a robot: computers just don’t know how to take care of themselves. Dogs do.

    April 28, 2008 — 17:44
  • One further thought.
    I can see that there are theological motivations for supposing robots cannot think, and that there are at least two philosophical theories about body/soul that have been common amongst theists: substance dualism and hylomorphism.
    I can see why, although strictly speaking it does not follow from substance dualism’s being true of humans that a robot could not think, still, the philosophical arguments that motivate substance dualism rule out thinking robots, or at least make it very unlikely that there could ever be truly artificial intelligence. (If ‘artificial’ means ‘entirely constructed by humans’.) I cannot see that these philosophical reasons carry over from substance dualism to hylomorphism.
    (1) Substance dualist: a purely material object cannot think. But a robot would be a purely material object, therefore unable to think.
    Hylomorphist: there is no such thing as a purely material object. A robot would be a composite of form and matter, as we are.
    (2) Substance dualist: Sure, a working brain is a physical substance, and so perhaps, in principle, one could construct an exact physical replica of a working brain. But the brain works as it does because it communicates with a non-physical object, a soul – and we have no idea how to go about constructing a non-physical object in a lab.
    Hylomorphist: A working brain is a hylomorphic substance. Perhaps, in principle, one could construct an exact physical replica of a working brain. If it really worked the same way our brain does, it would have the same form.
    The theologian who is a hylomorphist has the same theological motivations as the theologian who is a substance dualist to deny the possibility of a thinking robot. But I sense, perhaps wrongly, that some people think that the theological hylomorphist has a strong theological motivation for finding a compelling philosophical motivation to rule out a thinking robot – a good Christian philosopher should only subscribe to a philosophical theory of mind/body that provides philosophical reasons for backing up the theologically motivated reasons for thinking that a robot cannot possibly think, or at least the attempt should be made to find the philosophical reasons. Are people thinking what I think they are thinking about unthinking robots, or must I think again?

    April 28, 2008 — 20:10
  • Ben:
    I am not sure that the idea that something vaguely like a house could exist helps. Let’s try as follows.
    Consider a sequence of possible worlds, w2,…,wN, where N is the number of particles in my body. w2 contains exactly two disconnected particles that are copies of those in my body, arranged spatiotemporally as they are in my body. Maybe one mirrors a particle from my big toe and the other one from my heart. How many enmattered entities (each with its own form, of course) are there in w2? Surely the right answer is 2. Then keep on going. w3 is like w2, but it adds one more copy of a particle from me, located as it is in the actual world. And so on. Finally, wN contains a copy of all the particles in me, arranged just as they are in me. By the supervenience claim, wN contains, over and beyond the particles, a person.
    Now there are several ways of reading hylomorphism. On one view (defended by Patrick Toner and accepted by me), parts (e.g., particles) exist only virtually. On this view, wN contains only one enmattered entity. On a view on which the parts genuinely exist, wN contains at least N+1 enmattered entities–the person, plus the N particles, plus any other entities intermediate in size between the person and the particles (e.g., organs, molecules, etc.)
    Now “normally” adding a particle to the nth world only adds one more enmattered entity, so that normally the number of enmattered entities in the (n+1)st world is one plus the number of enmattered entities in the nth world. But at some points there are jumps–there have to be, or else wN would end up with N entities, not N+1 or 1. Some of these jumps might be easy to account for, because it could be that molecules are substances, so if you’ve got one hydrogen and one oxygen atom correctly arranged, and an extra proton sitting around, and then you add an electron, poof, you’ve got an H2O molecule. Never mind the jumps in the number of entities due to the coming into existence of a new molecule. But there will be other jumps, either from the coming into existence of the person or from the coming into existence of the person’s organs or other mid-size parts.
    But there seems to be something arbitrary about where these jumps happen. A heart with one less electron could still be a heart. It would be strange if there were a necessary truth that when you transition from the nth to the (n+1)st world, by adding one more particle, the number of enmattered entities jumps by 2 because, say, something vaguely like a heart or vaguely like a person comes into existence along with the particle. It seems plausible that it would have been possible for the jumps to happen at different points in the sequence.
    Another way to put the argument is to count the number of forms instead of the number of entities. Initially you add one particle, and the number of forms goes up by one. But eventually this stops being true. Why does it stop being true at that point rather than at another?

    April 29, 2008 — 15:51
  • By the way, in regard to the chess playing robot, it’s easy to anthropomorphize. When our Roomba finds itself low on power, it stops searching the room, and diligently and perhaps with a touch of desperation it searches for its charging station, and if it finds it, it adjusts its position so as to sit more comfortably, and emits a set of happy chirps. 🙂

    April 29, 2008 — 15:53
  • Alexander, it might help to think of this in reverse. When a butcher kills a pig, what was a living body becomes a carcass, because the form of a pig ceases to be present. But when does this happen?
    In principle, this seems to be the same problem as building me up piece by piece: some physical alteration is the difference between the presence or absence of the form of a living thing. I don’t see how we could have two pigs that go through exactly the same physical changes, but one is still a living pig and the other a dead pig, so supervenience seems to hold.
    Of course, deciding what constitutes the moment of death can be difficult, but it is not simply an arbitrary matter. We know what a healthy living pig is capable of, and the question is which capabilities are necessary/sufficient for the pig to count as being alive.
    Suppose that when the blood pressure falls below a certain point, the brain is no longer capable of functioning. Then there will be some particle of blood, such that it is the departure of this particle of blood from the pig’s body that brings about its death – but not because this particular particle has some special properties that other particles lack.
    If hylomorphism can’t account for the dying pig, then what use is hylomorphism? And why would the correct solution to the problem about the death of the pig imply the impossibility of a thinking robot? There isn’t an immediate, obvious connection that I can see.

    April 29, 2008 — 17:56
  • Well, my views about time are such that I am not, as far as I know, committed to there actually being a precise time of death. It need not be the case that every event has a precise time. The question whether the pig is already dead at t can already be indeterminate epistemically due to the need to specify a reference frame. But I do not know that even after one specifies a reference frame and a time (or, simply, specifies a spacelike hypersurface) the question whether the pig is already dead by that time needs to make sense, because I do not know that there needs to be that tight a correlation between the pig’s internal time and external time. OK, none of this makes much sense because it depends on a lot of very controversial and eccentric views about time.
    But suppose I put all that behind me and go the presentist route, since only then can I be sure that your question is an exact parallel. If I go the presentist route, then at any given time there had better be a fact about whether the pig (or anything pig-like) exists. And, yes, it does seem as if the time of transition is arbitrary. There seems to be no reason to suppose that one more particle of blood lost would correlate with the pig’s being dead.
    Note the word “correlate”. The pig’s being dead consists in the destruction or at least departure of its form. This correlates with the body’s falling apart, and the correlation is not merely coincidental.
    I think there are two good ways of accounting for the apparent arbitrariness of the moment of death.
    1. God decides, in each case or once and for all (say by promulgating a very precise, perhaps disjunctive, rule about it), exactly at which point the form is destroyed or departs.
    2. There is a law specifying precisely at which point the porcine form is destroyed or departs. This is a law about pigs, and presumably grounded in the essence of the pig. But, plausibly, there could be beings very much like pigs, made of the same kind of stuff, but whose essence is slightly different, so that they die one particle of blood later.
    It seems to me implausible that there should be a metaphysically necessary law that says that whenever you get particles arranged a certain way, then you get a pig there. What made option (2) above somewhat plausible is that the law was grounded in the essences of pigs. It seems implausible to me to suppose that electrons, quarks and the like have written in them a law specifying when porcineity arises. But without supposing such a law written in the essences of electrons, quarks and the like, it seems quite possible to have a bunch of electrons, quarks and the like arranged in the copy of a pig, but with God ensuring that the form of porcineity is not there. After all, the form is more than the arrangement of the parts–it is what explains the arrangement of the parts. Just getting the parts arranged right does not seem to necessitate the form magically popping into existence.
    All that said, it is possible on my view that if we build a robot of a certain sort, a form will pop into existence (e.g., by divine fiat) and inform the robot. It’s just that this seems an implausible supposition.

    May 2, 2008 — 12:51