I’ve just re-read Paul Griffiths and John Wilkins’s inspiring paper on evolutionary debunking arguments (EDAs) for religion (it is a very influential paper on cognitive science of religion and evolutionary debunking, despite not having appeared in print yet) for a chapter of a monograph I’m writing. Using Guy Kahane’s debunking genealogical framework, they argue that natural selection is an off-track process, i.e., one that does not track truth: it produces beliefs in a manner that is insensitive to the truth of those beliefs. From this, they conclude that the beliefs that are the outputs of evolved systems are unjustified.
Causal premise. S’s belief that p is explained by X
Epistemic premise. X is an off-track process
Therefore, S’s belief that p is unjustified
When we apply this argument in a generalized manner, where X stands for “natural selection”, it looks like a bad strategy for the naturalist – ultimately, it leads to self-defeat in a Plantingesque manner that most proponents of EDAs would like to avoid. G&W’s position is more subtle: they don’t want to treat truth-tracking and fitness-tracking as competing explanations (as Plantinga seems to do); instead, they argue that fitness-tracking and truth-tracking operate at different explanatory levels. In many cases, tracking truth *is* the best way of tracking fitness, especially given that (1) cognition is costly (brains consume a lot of energy), (2) your beliefs influence how you behave, and (3) your behavior influences your fitness. They propose “Milvian bridges”, which link truth-tracking and fitness-tracking, in order to salvage commonsense and scientific beliefs.
The term Milvian bridge is coined in reference to Constantine’s victory at the Milvian Bridge, which was traditionally interpreted as evidence for the truth of his Christian beliefs, as he triumphed over his pagan opponents: Christianity was argued to be true because it was pragmatically successful. In this vein, they construct a Milvian bridge for commonsense beliefs, which are argued to be true because they are pragmatically successful (the Milvian bridge has some affinity with Reid’s philosophy and with the no-miracles argument in philosophy of science):
“Organisms track truth optimally if they obtain as much relevant truth as they can afford, and tolerate no more error than is needed to obtain it.”
For science, they propose an indirect Milvian bridge, as follows:
“Given the Milvian bridge connecting commonsense to pragmatic success, we can justify the methods by which we arrive at our scientific beliefs. The reasons we have to think that our scientific conclusions are correct and that the methods we use to reach them are reliable are simply the data and arguments which scientists give for their conclusions, and for their methodological innovations. Ultimately, these have to stand up to the same commonsense scrutiny as any other addition to our beliefs. Thus, if evolution does not undermine our trust in our cognitive faculties, neither should it undermine our trust in our ability to use those faculties to debug themselves – to identify their own limitations, as in perceptual illusions or common errors in intuitive reasoning.”
For religion, by contrast, they think no Milvian bridge can be constructed, because the mechanisms that lie at the basis of religious belief are not truth-tracking. G&W consider some evolutionary accounts, for instance, that religion is the result of an overactive tendency to attribute agents to the environment. They write:
“The idea that religious belief is to a large extent the result of mental adaptations for agency detection has been endorsed by several leading evolutionary theorists of religion… Broadly, these theorists suggest that there are specialized mental mechanisms for the detection of agency behind significant events. These have evolved because the detection of agency – “who did that and why?” – has been a critical task facing human beings throughout their evolution. Religious belief has been jokingly described as “taking the universe personally”, and on this account, that is precisely correct. None of the contemporary evolutionary explanations of religious beliefs hypothesizes that those beliefs are produced by a mechanism that tracks truth. […] If the agency detection account is correct, then people believe in supernatural agents which do not exist for the same reason that birds sometimes mistake objects passing overhead for raptors. These beliefs are type one errors and they are the price of avoiding more costly type two errors.”
As Godfrey-Smith already wrote in his essay on signal detection, it would be pointless to have an agency detection system if it did not, at least sometimes, produce correct beliefs. So the system seems to be truth-sensitive (directed at detecting agents). G&W seem to acknowledge this, but argue that agency detection, while not truth-insensitive, is still off-track because it does not detect agents reliably. It generates an excess of false positives of the “better safe than sorry” kind: because the costs of failing to detect an agent are higher than the costs of detecting an agent that isn’t there, the system errs on the side of safety and detects a lot of agents that aren’t there. G&W use this as a debunking strategy to argue that religious beliefs are unwarranted.
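The “better safe than sorry” logic can be made concrete with a toy expected-cost calculation. (The numbers below are purely illustrative assumptions of mine, not figures from G&W or the signal detection literature.)

```python
# Toy signal-detection model of "better safe than sorry" agency detection.
# Assumed, illustrative numbers: an ambiguous cue (e.g., a rustle in the
# grass) is caused by a real agent such as a predator only 5% of the time.
p_agent = 0.05
cost_miss = 100.0        # cost of failing to detect a real agent (possibly fatal)
cost_false_alarm = 1.0   # cost of fleeing from a rustle that was just the wind

# Policy A: treat every ambiguous cue as an agent ("better safe than sorry").
# It never misses, but pays a false-alarm cost whenever no agent is present.
cost_always_detect = (1 - p_agent) * cost_false_alarm   # ~0.95 per encounter

# Policy B: ignore ambiguous cues. It pays no false-alarm costs,
# but pays the full miss cost whenever an agent really is there.
cost_never_detect = p_agent * cost_miss                 # ~5.0 per encounter

print(cost_always_detect < cost_never_detect)
```

Under this cost asymmetry, the hair-trigger policy is cheaper on average even though 95% of its detections are false positives – which is exactly G&W’s point: selection can favor the mechanism while leaving any single detection unreliable.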
I have been thinking that the generality problem remains a serious obstacle for EDAs against religion, and for EDAs against other complex cognitive capacities, such as our capacity to form moral evaluative judgments. For example, in the case of religion, if we grant for a moment that agency detection is responsible for the generation of religious concepts (ghosts, gods, etc.), at what level of generality should we consider the operation of this capacity? Under experimental conditions, people are pretty good at detecting and identifying agency-like movements in computer animations (a lot of Heider-and-Simmel-style studies show that people can make fine-grained distinctions about motions, e.g., playing, chasing, and even flirting). Remarkably, people are better at spotting living things in images than lifeless things: when they are presented with pictures of quasi-identical scenes in which one little thing has changed, they are more likely to spot the change if it is an animal (e.g., a small pigeon) than if it is a conspicuous red truck or a building. So while there might be good theoretical reasons for expecting an error-prone, unreliable capacity to detect agents, there is to my knowledge no direct empirical support for this prediction.
Now sometimes agency detection does go awry, as when we hear wooden planks creak in an old house and form the belief that there’s a burglar in the house. But far more often, we form the belief that there is an agent when there actually *is* an agent (e.g., when you see someone walking across the street from you on a clear day). When debunkers of religious belief appeal to hyperactive agency detection, they are already assuming that the agent being detected (e.g., God) is of the false-positive kind, like the creak in the old house. But I don’t see how they can assume this in a non-question-begging way.
And this problem becomes even more pressing if we consider that there are multiple evolutionary hypotheses on the origin of religious beliefs: some say it is an adaptive illusion to enhance cooperation (Wilson, Bering), others that it is the result of an intuitive distinction between mind and body (Bloom), or of our propensity to attend to intriguing, counterintuitive ideas (Boyer). So, in order to run an EDA against religious belief, we would have to specify of what type(s) of cognitive processes religion is a token. As long as this does not get cleared up – and given the imbalance of theory and empirical research in cognitive science of religion, it’s not going to be cleared up soon – we cannot do this in a rigorous, principled manner. Without this specification, we cannot say that religion is the result of unreliable (or more generally off-track) cognitive processes.