I just know I’m right – what to do when you feel this in religious and philosophical disagreement
October 15, 2013 — 5:28

Author: Helen De Cruz  Category: Religious Belief  Comments: 10

[note: this blog post collects some scattered thoughts that I hope to organize in article form sooner rather than later for my British Academy project on religious social epistemology; see here]

There is an ongoing debate about what we should do when we are confronted with disagreement from an epistemic peer: someone who is as knowledgeable and intellectually virtuous as we are in the domain in question. Should we revise our beliefs (conciliationism), or not engage in any doxastic revision (steadfastness)? Epistemologists aim to settle this question in a principled way, hoping that general principles like conciliationism and steadfastness can offer a solution not only for the toy examples typically invoked, but also for real-world cases that we care passionately about, such as scientific, religious, political and philosophical disagreements. However, such cases have proven to be a hard nut to crack. A referee once commented on a paper I submitted on epistemic peer disagreement in science that the notion of epistemic peer in scientific practice was useless. S/he said “It works for simple cases like two spectators who disagree on which horse finished first, but when it comes to two scientists who disagree whether a fossil is a Homo floresiensis or Homo sapiens, the notion is just utterly useless.”

That referee comment has always stuck in my mind as bad news for epistemology: if our principled answers cannot be applied to real-world cases of epistemic peerage, the debate is of marginal value. There seems to be an easy escape: one common response, by steadfasters and conciliationists alike, has been that we need not revise our beliefs in complex, messy cases if we have reason to believe that we have access to some sort of insight that our epistemic peer lacks. van Inwagen, for instance, muses about his disagreements over some philosophical matters with David Lewis, whom he greatly respects: they both know the arguments, and both have considered them equally carefully. But ultimately, van Inwagen thinks

I suppose my best guess is that I enjoy some sort of philosophical insight (I mean in relation to these three particular theses) that, for all his merits, is somehow denied to Lewis. And this would have to be an insight that is incommunicable – at least I don’t know how to communicate it – for I have done all I can to communicate it to Lewis, and he has understood perfectly everything I have said, and he has not come to share my conclusions.

As one can see, the notion of epistemic peer simply dissolves here, since van Inwagen has just asserted that he has insights in the domain in question that are denied to Lewis. To take another example, suppose you are a Christian faced with a seemingly equally intelligent atheist. According to Plantinga (in Warranted Christian Belief), this disagreement is not a defeater for your beliefs, as you can confidently assume your dissenting peer “has made a mistake, or has a blind spot, or hasn’t been wholly attentive, or hasn’t received some grace she has, or is blinded by ambition or pride or mother love or something else”. But how do we know when we are right? Is the “feeling of knowledge”, the conviction that we are right, any indication that we actually are right? I will argue here that it is not, and therefore that simply discounting the other as an epistemic peer on this basis is not warranted.


Many philosophers seem to take this feeling of confidence (that they are right) as guidance. For instance, according to Elga, one is not required to engage in doxastic revision if one does not think the other is an epistemic peer, and he acknowledges that this is often the case in real-world, messy examples. The Christian in Plantinga’s example has just downgraded his seemingly equally intelligent and thoughtful dissenter on account of some cognitive shortcomings or flaws in character that prevent her from being a true epistemic peer. Elga believes that doxastic revision is required if one considers the other an epistemic peer, whether or not that is really the case. So on that view, the Christian who honestly believes she has some basic belief, brought about by a reliably working sensus divinitatis, is rationally justified in discounting other views, since she thinks that those people are not her epistemic peers.

Elga provides this view to solve the problem of spinelessness that conciliationism faces: if you are a conciliationist, it seems your beliefs (philosophical and otherwise) are constantly in danger of being flooded by the opinions of others. Indeed, the conciliationist is charged with inconsistency, since their position would require them to revise their conciliationism when confronted with all these steadfasters! The problem with Elga’s view, according to Lackey (in a forthcoming paper), is that it fails to take into account that people can discount others as epistemic peers for irrational reasons. She provides a vivid example of a sexist and a woman who are in disagreement. The sexist thinks he is right, since he believes the woman, being a woman, is simply less intelligent than he is. The woman, by contrast, thinks the sexist is her epistemic peer. Following Elga’s recommendations, we come to the outrageous conclusion that the woman should revise her beliefs or else risk being irrational, whereas the man should stick to his guns. That seems like an unacceptable conclusion.

Can we discount the other as an epistemic peer simply by virtue of feeling that we know p to be the case? There has been research in metacognition on the feeling of knowledge (FOK), the feeling that you know something to be the case. It turns out that FOK is not a reliable measure of knowledge (research by Koriat and others). People systematically overrate how well they know a series of facts on which they will be tested, for example. People think they know in detail how a helicopter or a zipper works, until asked to demonstrate it, and are surprised at how shallow their knowledge actually is. One could object that such things aren’t basic knowledge, like 2+2=4; religious belief is properly basic, and therefore we need not be troubled by the unreliability of FOK. However, our personal memories are also properly basic in many instances, yet the correlation between the feeling that you remember something vividly and the actual occurrence of the event is quite weak, as research on eyewitness testimony indicates (it is relatively easy to implant a personal memory, Loftus and others have found). I’m not saying that the unreliability of FOK should lead to some widespread skepticism. I am saying that FOK is too weak to be a defeater defeater, i.e., a defeater of the negative evidence provided by the fact that an epistemic peer disagrees with one.

A Reformed epistemologist may have an easy way out: FOK doesn’t play a role, because the properly basic belief is, after all, instilled by a reliable belief-producing mechanism (the sensus divinitatis). Similarly, suppose one is a philosopher who does happen to have brilliant insights that his or her colleagues lack, insights that lead to genuine knowledge. Then (assuming a knowledge-centered epistemology) yielding or revising would not be a good response for that philosopher. Indeed, it would be better for that philosopher not to engage in too much debate with other (seemingly equally) brilliant colleagues, lest she revise her beliefs and come to an overall worse philosophical picture. But we know from experience that this is not the case: even a very good philosopher can improve by exposure to others, and papers can drastically improve through the probing, deep criticisms of referees and editors. Indeed, there is a large literature on group cognition indicating that focused groups of specialists outcompete their most brilliant members.

In a series of papers, Dan Sperber and colleagues have proposed an interesting hypothesis to explain why we have FOK and other cognitive biases that drive us to overestimate our knowledge (e.g., confirmation bias) if our reasoning is so fallible: reasoning evolved in the context of argumentation, not in the context of individual, cognizing minds. So systematically overrating the quality of our own evidence, or feeling some sense of insight, are mechanisms that protect us from becoming too gullible or too susceptible to influences from others, who may have their own, rather than our, interests in mind (indeed, research suggests that even young children are selective in their trust). Given the unreliability of FOK, but also the pragmatic problems involved in being too yielding, the proper response to peer disagreement, in my view, is to remain open to the evidence of others, and neither automatically to adopt their position nor to take one’s FOK as evidence that the other is not an epistemic peer after all. Some degree of steadfastness helps protect what one is passionate about (and feels is worth protecting), and it helps prevent one from being too easily won over. It even helps those faced with epistemic superiors: the junior philosopher confronted with a formidable counterobjection from a much more senior colleague should revisit her views, but it still seems rational for her not to give them up immediately.

In the case of the religious believer, there may be good reasons in the Jamesian sense to maintain faith even in the face of uncertainty and doubt (brought about, amongst others, by epistemic peers), because some goods only obtain if one has this faith. I am not defending a fideist attitude here; one can have several rational and pragmatic grounds for having faith. What I am saying is that the proper attitude to religious, philosophical, and other messy cases of disagreement should not be: “Well, that just goes to show they’re not our epistemic peers; they must be missing some grace or insight or something”.

Rather, this sort of disagreement does provide serious counterevidence that should lead one to adopt a critical attitude towards one’s beliefs (not necessarily a yielding attitude, as I described above). This is, by the way, not bad news for the religious believer: the fact that a large percentage of philosophers of religion (70 to 75%, depending on the survey, PhilPapers or mine) are theists is of high epistemic significance, I believe, especially given that they are well aware of formidable objections against theism. Similarly, the fact that conciliationists are winning over new converts (like me; I used to be a steadfaster) should spell good news for conciliationism. I think this open attitude of doubt in the face of epistemic disagreement, combined with a continued commitment, would work well for many messy cases.


Comments:
  • Matthew Mullins

    Plenty to comment on here, Helen!
    1. The Lackey example seems to trade on internalist and externalist notions of rationality. If acting rationally amounts to having consistent beliefs and acting according to your evidence, then the sexist and the woman both appear to be acting appropriately even though their beliefs aren’t tracking the truth.
    2. You seem to run conciliationism and equal-weight views together. The former might modify their credences without giving their peer full equal weight; e.g., equal weight amounts to splitting the difference, whereas conciliation needn’t.
    3. I don’t think the FOK can play the role of a defeater defeater, but not because it’s too weak: ostensibly, both peers have the FOK; e.g., it seems to me that P and it seems to you that ~P. That’s what led to the disagreement in the first place. So the FOK doesn’t give us any asymmetry.
    4. A number of conciliatory people don’t think that belief is the appropriate doxastic attitude for philosophical theses. They defend, adopt, accept, etc., but they don’t believe them, which is consistent with their conciliatory arguments. Faith may act in a similar fashion for the theist.

    October 15, 2013 — 10:24
  • Helen:
    It seems to me that the feeling that one knows about some subject or that one knows how something (say, a helicopter) works and the feeling that one knows that p are quite different.
    Matthew:
    Regarding 2, it’s worth noting that there is more than one kind of equal weight view you could have.
    a. Replace your probability with the average of your and your peer’s probability.
    b. Replace your log-odds (i.e., log(p/(1-p)), where p is the probability) with the average of your and your peer’s log-odds.
    c. Replace your log-odds with the sum of your and your peer’s log-odds.
    All of these give equal weight to your and your peer’s credence in the sense that they treat the two credences symmetrically.
    Both a and b are “splitting the difference”, but they give very different results in cases where one of the peers is close to certain and the other is uncertain: in the case of option b, near-certainty swamps uncertainty. And c is at least sometimes the right approach: that a peer agrees with me should sometimes raise my credence.
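    To make the contrast between a, b and c concrete, here is a minimal Python sketch of the three rules (the credences 0.99 and 0.5 are hypothetical, chosen only to illustrate the swamping effect):

    ```python
    import math

    def logit(p):
        # log-odds of a probability p (0 < p < 1)
        return math.log(p / (1 - p))

    def inv_logit(x):
        # map log-odds back to a probability
        return 1 / (1 + math.exp(-x))

    def pool_a(p1, p2):
        # (a) average the probabilities: "splitting the difference"
        return (p1 + p2) / 2

    def pool_b(p1, p2):
        # (b) average the log-odds
        return inv_logit((logit(p1) + logit(p2)) / 2)

    def pool_c(p1, p2):
        # (c) sum the log-odds
        return inv_logit(logit(p1) + logit(p2))

    p1, p2 = 0.99, 0.5  # one peer near-certain, the other undecided
    print(round(pool_a(p1, p2), 3))  # 0.745: halfway between the two credences
    print(round(pool_b(p1, p2), 3))  # 0.909: near-certainty swamps uncertainty
    print(round(pool_c(p1, p2), 3))  # 0.99: an undecided peer (log-odds 0) leaves me unmoved

    # On rule c, agreement raises credence: two peers each at 0.9 pool to ~0.988.
    print(round(pool_c(0.9, 0.9), 3))  # 0.988
    ```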

    October 15, 2013 — 11:46
  • Helen, you say “It turns out that FOK is not a reliable measure of knowledge”.
    Psychologists use “feeling of knowing” only to describe one’s feeling that one will be able to retrieve information from memory (a feeling had only prior to one’s attempt to retrieve the information). I don’t know of anyone who has discussed the FOK as perhaps a generally reliable guide to knowledge, or even to truth. While the FOK is a metacognitive feeling, it is strictly a metamemorial feeling–it only concerns memory, not simply anything we happen to feel right about.
    So your use of “feeling of knowing” appears not to track what psychologists are talking about. You seem to be describing what they might call the feeling of *confidence*. And while many studies suggest that the feeling of confidence that is particularly associated with *episodic memory* does not positively correlate with correctness, this is not evidence that the feeling of confidence is generally inaccurate. Indeed, other studies suggest that the feeling of confidence that is associated with *semantic memory* positively correlates with correctness.
    In short, I don’t think the empirical research on the FOK is what you want to be looking at here, and you haven’t captured some relevant nuances in the research on the feeling of confidence.

    October 15, 2013 — 16:00
  • Helen De Cruz

    Hi Matt: thanks. The present metacognitive literature looks at the feeling of knowledge rather narrowly, as the feeling right before retrieval, often researched in conjunction with the “tip of the tongue” feeling, where we think we can retrieve a memory if given enough time or opportunity. So I was using it in a wider sense, and if that is confusing, I ought perhaps to have used a different term to refer to metacognitive sentiments that we know something to be the case (whether semantically retrieved facts or episodic memories). I am considering your idea of using the term “feeling of confidence” instead.
    There does indeed seem to be a positive correlation between our confidence that we know facts drawn from semantic memory and our actual ability to retrieve those facts, whereas the episodic literature is less clear on this (false feelings of confidence in eyewitnesses). So the question is what kind of metacognitive states our feelings about philosophical, religious, and similar beliefs are tracking. Are they like facts drawn from semantic memory or more like episodic memories?

    October 15, 2013 — 16:34
  • I have a really hard time believing that there is no correlation between a feeling that we know the answer to a question and whether we know the answer to a question.
    Intuitively, there are infinitely many questions for which just about all of us would have no feeling that we know the answer. Questions like: What is the 1873rd digit of pi? (Note how this last one generates an infinite sequence of questions, one for each digit.) What is my weight in micrograms? How many electrons are there in this room? What is the fiftieth particle to come into existence in the universe?
    On the other hand, the number of questions to which a sane person would feel they have an answer will intuitively be quite limited.
    For one, past a certain level of complexity in a question, people will simply have no idea what is being asked! If this is right, then in fact there will be infinitely many questions a typical sane person will be quite sure she has no answer to (What’s the 1873rd digit of pi? What’s the 1874th? And so on) but only finitely many that she would feel she knows the answer to.

    October 15, 2013 — 20:29
  • Matthew Mullins

    Fair enough, Alex. I was jotting quickly this morning; I just meant to express that conciliation needn’t result in serious revision.
    Helen,
    You say that “many philosophers seem to take this feeling of confidence (that they are right) as guidance” and I’m trying to figure out whom you have in mind here. You seem to offer Elga as an instance of this, but if I recall correctly Elga thinks you can downgrade a peer if, upon learning of the disagreement, your peer’s assertion strikes you as insane. He also thinks that peers are rare in the world, which is why I took it that in real-world messy examples we don’t usually have peers. We don’t have peers in the real world not because of our feeling of confidence, but because of differences in our background beliefs. Pretty much everyone I can think of thinks that you break the peer bond by pointing to some personal private evidence (PvI’s insight), or by pointing to something wrong with the peer that wasn’t evident prior to the peer assessment: they’ve been drinking, they’ve got an ulterior motive, etc. I haven’t looked at this literature in like, a minute, but I can’t think of anyone defending the feeling of confidence. Maybe you mean this as just a general assessment of philosophers.
    In any case, I still have the worry that I expressed in 3 above. Suppose you think, as a number of people do, that you need to point to some asymmetry to break a case of peer disagreement. Could the ‘feeling of knowledge’ do this work? It can’t, because while I have the feeling that I know P, my peer has the feeling that they know ~P. So the symmetry is maintained, and whether FOK is reliable or not doesn’t seem to matter.

    October 15, 2013 — 23:35
  • Beyond Logic: Why Do We Disagree?

    October 16, 2013 — 1:47
  • Dianelos Georgoudis

    Helen,
    I think a significant issue here is the question of who an epistemic peer is. I don’t think it’s solely a question of epistemic virtue, by which I take it you mean intelligence, education, intellectual honesty, and such. I think that’s not sufficient, since as a matter of fact two people of similar epistemic virtue may have significantly different ways of experiencing and judging matters. A good example I suppose is the case you mention of van Inwagen and Lewis.
    It seems to me that this state of affairs is explained by the fact that, under the normal cognitive processes of the human condition, the path from experiences and judgments to belief formation turns out to be a two-way street. Everybody is aware that experiences and judgments (including one’s judgment about which epistemic principles to trust) lead to beliefs. But the effect works the other way too.
    So, first, beliefs affect one’s judgment. An example would be the Scholastics who were moved by their belief in God to judge that reality must be fundamentally intelligible, and therefore used intelligibility as a fundamental epistemic principle in metaphysics. An example in the other extreme that comes to mind is physicist Lawrence Krauss in his debate with William Lane Craig arguing that quantum physics (by which he means his metaphysical interpretation of quantum physics) shows that classical logic doesn’t hold in the real world.
    And, secondly, beliefs affect one’s experiences. Some cases are obvious: if one has learned Chinese, one experiences the sound of spoken Chinese quite differently than one who hasn’t. It is also very frequently the case that after one has formed true beliefs about some subject matter, one comes to experience this subject matter as more beautiful and meaningful. (This holds in math, in the physical sciences, in arts, in sports – I’d say in about every endeavor that entails belief formation.) But I have in mind a stronger effect yet. I submit that viable metaphysical belief systems (such as naturalism and theism) tend to transform one’s experience of life in a way that reflects or reinforces that belief system – both qualitatively, in the way one experiences life, and quantitatively, since theists and naturalists are apt to make different life choices.
    In conclusion, the epistemic chasm which often appears to separate theists and non-theists (but also sometimes theists among themselves) is, I think, grounded not so much in differences in epistemic capacity or virtue as in the fact that their different beliefs affect how they experience and how they judge matters, and therefore affect their capacity for communication.

    October 17, 2013 — 15:42
  • Helen De Cruz

    Hi Matthew: sorry for the delay in responding (there should be a way of getting Prosblogion alerts also when co-bloggers respond, which is currently not the case).
    I don’t seriously think that any expert on peer disagreement would defend the view that one can go by feelings of knowledge, confidence, etc. to assert epistemic superiority. I am just saying that this feeling is in effect what drives us to say the other person is not an epistemic peer.
    Huemer recently argued for an agent-centered epistemology combined with phenomenal conservatism. If I intuit that p, I am (absent defeaters) to some degree justified in believing that p. By contrast, if I know that you intuit that p (and I have no intuitions about p myself), I don’t have as much justification for forming the belief that p as in the first instance. He then takes the example of epistemic peer disagreement to argue that rational disagreement is possible, since both agents accord different weights to evidence depending on whose evidence it is. This position is more sophisticated than the ‘I know I’m right, therefore the other must be wrong’ position, since it allows that the other, also valuing his own intuitions more highly, can be equally justified in believing not-p as you are in your belief that p. The symmetry is not broken.
    However, in some cases people may differ in the confidence they have, in which case it might matter. For example, eyewitness testimony is a very tricky thing, subject to contamination through suggestion, but under some conditions (e.g., recognizing a person’s face) confidence does have a positive correlation with the reliability of the memory. In other cases it doesn’t. We have no clue, of course, to what extent the pull of particular intuitions is a guide to their truth, and this is where religious disagreement takes place.
    The problem is that if real cases of peer disagreement are so rare in real life (as you and many others think), the whole epistemic peer disagreement debate is only of marginal importance: it would only seem to matter under very specific and rarely realized conditions. Yet in scientific practice, religious belief, politics and lots of other things we care about, disagreement between people who seem equally well-versed in the subject occurs all the time. I would be inclined to call those cases instances of peer disagreement, even if the parties do not have identical access to all evidence. For religious disagreement, for instance, it suffices if parties have access to the same evidence in the narrow sense. Suppose they disagree about the soundness of a cosmological argument: they are both equally well versed in the objections to this argument, as well as in relevant scientific knowledge (e.g., the Big Bang theory), but they may still differ in background evidence in the broader sense; e.g., their assessment of the argument may be indirectly influenced by their upbringing (Christian and atheist respectively), the graduate school they went to, and so on.

    October 19, 2013 — 14:02
  • Jeremy Gwiazda

    Bit late to this thread, but found it interesting. In reply to Matthew’s: “You say that ‘many philosophers seem to take this feeling of confidence (that they are right) as guidance’ and I’m trying to figure out who you have in mind here?”

    And Helen’s “I don’t seriously think that any expert on peer disagreement would defend the view that one can go by feelings of knowledge, confidence, etc. to assert epistemic superiority.”

    When I read this exchange I thought of Kripke in Naming and Necessity writing that confidence is all anyone has to go by, at the end of the day. But I had forgotten the passage, Kripke’s wording, and whether he even said anything like this.

    I finally tracked down the passage, on page 42 of the paperback: “Of course, some philosophers think that something’s having intuitive content is very inconclusive evidence in favor of it. I think it is very heavy evidence in favor of anything, myself. I really don’t know, in a way, what more conclusive evidence one can have about anything, ultimately speaking.”

    A lot here hinges on what is meant by “intuitive content.” But if we use the definition of “intuitive” – “using or based on what one feels to be true even without conscious reasoning; instinctive” – then Kripke may be in the ballpark. (Not claiming he is an expert on peer disagreement, that my reading is correct, that I know what he thinks now, etc… but I do think it is potentially an interesting passage in the context of this discussion.)

    November 24, 2013 — 19:02