ISSA Proceedings 2014 – Cognitive Biases And Logical Fallacies
Abstract: Cognitive biases identified in psychology are indications of the imperfect reasonableness of human minds. A person affected by a cognitive bias will reason wrongly without realizing it. Argumentation theory should take the findings of cognitive psychology into consideration for two main reasons. First, the biases registered by psychologists will help create a more comprehensive inventory of fallacious reasoning patterns. Second, some cognitive biases may help explain why a person is reasoning fallaciously.
Keywords: cognitive biases, fallacious reasoning patterns, psychology, unreasonableness.
1. Introduction
We know that a speaker may use some of the reasoning patterns called fallacies in order to manipulate her opponent or to mislead the audience present at the discussion. For instance, an illegitimate appeal to an expert’s status or a straw man can be used as a purely sophistical device that may help the speaker win the debate. We also know that a person can reason fallaciously without realizing that she is doing so. For instance, she may be affirming the consequent or using an undistributed middle term in a syllogism while not realizing that she is, in fact, committing a logical fallacy. In such cases we usually put it down to the reasoner’s poor logic. However, with the help of a few examples I will show that some reasoning errors are committed not because the arguer’s mind is lacking in logic, but because it is abundant in psycho-logic. As the human mind is a multifaceted structure, our choice of argumentation patterns can be determined not only by logic – or the lack of it – but also by our psychology. In other words, I want to argue that if a speaker is reasoning wrongly, it may be not because of bad intent, and not because his logical machine has broken down, but because his psychological machine is in gear.
Cognitive psychology identifies several dozen cognitive biases, which are “replicable pattern(s) in perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality[i]”. (This and the other descriptions of cognitive biases quoted below are taken from the Wikipedia list of cognitive biases; see note ii.) I think argumentation scholars should incorporate these findings of cognitive psychology into their research for several reasons. First, some cognitive biases resonate well with the logical fallacies argumentation theorists know of, and no surprise: both are, in fact, improper reasoning patterns that occur systematically. Moreover, references to such reasoning patterns as wishful thinking, the gambler’s fallacy, the Texas sharpshooter fallacy, the bandwagon effect, and some others can be found both in lists of cognitive biases and in lists of fallacies[ii]. However, logic and psychology look at bad reasoning from different angles, and even though many cognitive biases are somehow related to the fallacies we are familiar with, the descriptions of the former can sometimes give a wider perspective on, and a deeper insight into, the reasoning patterns argumentation scholars already know.
A second reason why more should be learnt about cognitive biases is that a person may well be expressing a biased view without ever wanting it to be biased, without even knowing that it is biased. Walton, Reed and Macagno describe an argumentation scheme called ‘argument from bias’ (Walton et al., 2008, pp. 154-169), but when discussing this scheme the authors seem to use the word ‘bias’ – admittedly an ambiguous word – to mean a conscious, intentional bias, as in the phrase ‘institutional bias’, for example. However, if an arguer is affected by a cognitive bias, she will commit a fallacy unconsciously and unintentionally. I think it is important to distinguish between appeals to conscious and unconscious bias because a) such appeals will have different rhetorical functions, and b) the dialectical evaluation and methods of criticism of arguments from conscious and unconscious bias will also be different.
Apart from this, learning more about cognitive biases will have some pedagogical implications. Anyone who goes deep into the subject will probably see that teaching students logic, rhetoric, and dialectic is not enough to make good reasoners of them; she may want to teach them some cognitive psychology as well. In my opinion, argumentation theory must join forces with psychology in discovering how judgments are formed in the human mind.
2. Examples of cognitive biases consonant with logical fallacies
In this section I will provide several examples of cognitive biases that are consonant with logical fallacies. Logicians register the Fallacy fallacy when the conclusion of an argument is claimed to be false on the grounds that the argument in its support is fallacious. Psychologists, in turn, register the Belief bias – “an effect where someone’s evaluation of the logical strength of an argument is biased by the believability of the conclusion” (see, for example, Stupple et al., 2011). These two reasoning patterns are mirror reflections of one another. However, psychologists don’t know about the Fallacy fallacy, while logicians are unaware of the Belief bias. At the same time, it can’t be denied that we often encounter the latter pattern in everyday communication: ‘This is a good argument because it supports the conclusion I endorse’. Thus, a piece of knowledge generated in the field of cognitive psychology can evidently be of use to an argumentation theorist compiling a list of logical fallacies.
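The mirror symmetry is easier to see if the two patterns are written out side by side (my own paraphrase of the descriptions above, offered only as an illustration):
Fallacy fallacy: the argument for conclusion c is fallacious; therefore, c is false.
Belief bias: I find conclusion c believable, and argument A supports c; therefore, A is logically strong.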
Logicians also know of the cherry-picking (or suppressed evidence) fallacy, while psychologists point to the Confirmation bias – “the tendency to search for or interpret information … in a way that confirms one’s preconceptions” (see, for example, Lewicka, 1998). When an argument critic accuses someone of cherry-picking, he means that his opponent (or collocutor) intentionally selects the evidence that supports her conclusion while intentionally ignoring (suppressing) the evidence that contradicts it. (Cherry-picking is a common argumentative tactic among linguists, for example. They put forward a general claim about some aspect of the language and then give examples of language use that support this claim. In so doing, they often suppress counterexamples that would undermine the claim. This argumentative strategy is rightly regarded as a way of cheating.) However, with the help of experiments psychologists show that a person can indeed be selective in providing evidence for a claim without ever knowing she is being selective. In other words, this person can be ‘honestly in error’. It may be the imperfection of our cognitive apparatus, not bad intentions, that causes the errors in our reasoning.
A few more examples of consonance between cognitive biases and logical fallacies, in even less detail. There is a bias called the ‘Anchoring effect’ – “a common human tendency to rely too heavily, or “anchor,” on one trait or piece of information when making decisions” (see, for example, Strack & Mussweiler, 1997). There is also a ‘Halo effect’ – “a cognitive bias whereby the perception of one trait (i.e. a characteristic of a person or object) is influenced by the perception of another trait (or several traits) of that person or object” (see, for example, Nisbett & Wilson, 1977). Both of these biases are related to the part/whole fallacies, but they approach these patterns of reasoning from a different angle: rather than explaining their fallaciousness by the absence of logic, they explain it by the natural presence of psycho-logic in the human mind. Besides, the Anchoring effect can help account for the persuasiveness of the ‘Outstanding example’ fallacy. Thus, knowledge of cognitive psychology may sometimes allow the argumentation theorist to understand why certain fallacies can be effective persuasive devices.
The Hindsight bias, sometimes called the ‘I-knew-it-all-along’ effect – the tendency to see past events as having been predictable at the time they happened (see, for example, Mazzoni & Vannucci, 2007) – is consonant with the Historian’s fallacy in the sense that the reasoner relies on the knowledge she has in the present when judging events that took place in the past. Stereotyping – “expecting a member of a group to have certain characteristics without having actual information about that individual” (see, for example, Judd & Park, 1993) – has a lot in common with inductive fallacies such as hasty generalization. The Projection bias – “the tendency to unconsciously assume that others share one’s emotional states, thoughts and values” (see, for example, Sheppard) – and the False consensus effect – “the tendency for people to overestimate the degree to which others agree with them” (see, for example, Gilovich, 1990) – are both consonant with, though a bit different from, the Psychologist’s fallacy. There are other examples of this kind too, and if argumentation theorists read more about cognitive biases, they will be able to enlarge the existing lists of fallacious reasoning patterns and better understand why logically bad reasoning can at times be persuasive.
3. Conscious vs. Unconscious bias
In this section I will discuss the differences in the rhetorical functions and dialectical roles of appeals to a conscious or an unconscious bias that the speaker’s opponent in a critical discussion may turn out to have. In their monograph Argumentation Schemes, Walton, Reed, and Macagno describe a ‘bias ad hominem’ scheme that is formalized as follows:
Premise 1: Person a, the proponent of argument ∂ is biased.
Premise 2: Person a’s bias is a failure to honestly take part in a type of dialog D, that ∂ is a part of.
Premise 3: Therefore, a is a morally bad person.
Conclusion: Therefore, ∂ should not be given as much credibility as it would have without the bias (Walton et al., 2008, p. 338).
When discussing arguments from bias (ibid., pp. 154-169), the authors seem to have in mind appeals to conscious bias only. It is true that a person may have this kind of bias – her institutional position, social status, or association with a certain group can make her reason in a prejudiced way. The following anecdote will serve as an illustration of a communicative situation where an appeal to conscious bias would, in my opinion, be justified.
Once I was accompanying a Swedish ecologist, Lars, to a meeting with the administration of the Irkutsk aluminium plant. They spread out on the table the wind rose (wind direction map) for the area, and, quite expectedly, it showed that the major winds blew away from the city and, therefore, brought no pollution from the plant to it. When Lars and I discussed the meeting afterwards, we were both very skeptical about the trustworthiness of the wind rose we had seen: we knew the plant administrators couldn’t have shown us anything different. We doubted the honesty of our interlocutors. We knew they could have fabricated the data to deceive us.
Now let us imagine that the meeting was held in public. Suppose Lars said to the plant bosses: ‘Of course your wind rose shows what it shows: as company administrators you could never publicly admit that the plant is polluting the air in the city’. For the audience present, this utterance would probably constitute a cause to doubt the administrators’ sincerity. For the administrators themselves, it would probably constitute a cause for some irritation: they would be angry with the Swede because he had shaken the audience’s trust in them. Such an appeal to bias is of course an ad hominem argument, and without doubt it contains an attack on the opponent’s moral qualities. I must note here that Douglas Walton insists that any ad hominem argument must contain a premise (or a sub-conclusion) ‘arguer a is a morally bad person’, because in any of its disguises this argument is some kind of attack on the opponent’s personality. I disagree with this proposition for the reasons given below.
Let us consider a different situation. Imagine that I put forward a hypothesis and cite some data that confirm it. Suppose now that my collocutor brings forward some evidence that clearly contradicts my hypothesis. What should my reaction be? Should I feel angry with him? Not at all! I will realize that because I liked my hypothesis so much, because I so much wished it to be true, I was blinded by the confirmation bias. So instead of being angry, I will be grateful to my collocutor because he has rescued me from a potentially erroneous conclusion. And how would the audience react if they were present at this exchange? If in the course of a public discussion one of the participants manages to show that her opponent is unknowingly biased, this should not evoke suspicions about his moral qualities. Instead, the audience would probably pity the poor lad, and thus even develop some sympathy for him. Haven’t you ever felt pity for a colleague who is wrong but just can’t see it?
I’d like to stress that the basic logical structure of arguments from bias will always be the same, whether it is an appeal to a conscious or an unconscious bias. Such arguments will always remain ad hominem arguments showing that the opponent’s view (or his argumentation) is one-sided. However, appeals to conscious and unconscious bias differ in at least three other respects. First, an appeal to conscious bias is a hostile move in the sense that it is often used to raise suspicions about the opponent’s sincerity or honesty. Therefore, the formalization of this argument must have ‘arguer a is a morally bad person’ as one of the premises. An appeal to unconscious bias, on the other hand, is a friendly move, as it does not attack the opponent’s personality. Instead, it is an act of charity because it can save the opponent from an erroneous conclusion by showing that his psychology is playing a trick on him and making him reason wrongly. Even though such an argument will still be an ad hominem, its formalization should not have ‘arguer a is a morally bad person’ among the premises. The second difference is that the addressee of an appeal to conscious bias will most probably be annoyed by the argument, while the addressee of an appeal to unconscious bias should be grateful to his interlocutor. The reaction of the addressee is important to consider because the discussion may take two drastically different routes: it will probably become antagonistic in the first case and cooperative in the second. Finally, in a public discussion, appeals to conscious and unconscious bias will evoke different feelings in the hearts of the audience: it may grow distrustful of the argument’s addressee in the first case and sympathetic in the second. It goes without saying that one has to bear in mind these differences in the rhetorical functions of the two arguments. An appeal to a conscious bias may be instrumental in winning the discussion, while an appeal to an unconscious, cognitive bias can be helpful in arriving at the right conclusion as the result of the discussion. A sketch of how the two variants might be formalized is given below.
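Using the layout of the scheme quoted above, the two variants might be set out as follows. This is my own sketch, not a scheme found in Walton, Reed, and Macagno; the only change in the second variant is the removal of the moral premise.
Appeal to conscious bias:
Premise 1: Person a, the proponent of argument ∂, is biased.
Premise 2: Person a’s bias is a failure to honestly take part in a type of dialog D that ∂ is a part of.
Premise 3: Therefore, a is a morally bad person.
Conclusion: Therefore, ∂ should not be given as much credibility as it would have without the bias.
Appeal to unconscious bias:
Premise 1: Person a, the proponent of argument ∂, is affected by a cognitive bias.
Premise 2: The bias distorts a’s reasoning without a being aware of it.
Conclusion: Therefore, ∂ should not be given as much credibility as it would have without the bias.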
4. Conclusion
I think the most important lesson an argumentation scholar can learn by studying cognitive biases is that the human mind is only imperfectly reasonable. We can be logical, yes. But at times we can also be psychological. If my reasoning is poor, it may be not because I lack certain skills or abilities, but because my thinking process is distorted by some inherent, natural features of my mental organization. By pointing out numerous instances of unreasonable human behavior, cognitive psychology does us all a great service: we can now rid ourselves of our illusions about how much reasonableness humans can achieve.
Reading about cognitive biases will have some pedagogical implications, too. To make good reasoners of our students, it is apparently not enough to teach them logic, rhetoric, and dialectic. When we talk to them about fallacies, we do it for a clear purpose: we hope that they will try to avoid bad reasoning patterns when making up their own arguments, and that they will be able to spot such patterns in the reasoning of others. I believe we must talk to students about cognitive biases for exactly the same purpose. If a person is aware that her mental apparatus is liable to malfunctions of certain types, she will be better armed against falling into the traps of her own psychology. Of course, self-reflection is not an easy task, and neither is the psychological analysis of others. And of course, cognitive biases are not tangible or measurable things, not even clear-cut notions. But replicable experiments show that some such biases do exist, and knowing about them will certainly help account for the causes of poor reasoning in some instances. Moreover, it may help eliminate these causes, thus improving the overall quality of reasoning.
In conclusion I’d like to reemphasize why I believe argumentation scholars should take into consideration the relevant research in cognitive psychology. First of all, we must remember that the human mind is not only logical but psychological too. Some of the unreasonable actions people carry out result not from their poor logic but from their rich psychology. To be frank, I’ve always felt that some reasoning patterns described as informal fallacies are rooted in human psychology. Take wishful thinking, for example. In my opinion, it is wrong to say that the utterance ‘I want it to be true, therefore, it is true’ lacks logic. It is so transparently anti-logical that the apparatus of logic is simply inadequate for its interpretation. Instead, a reference to the psychological ‘side’ of the human mind can explain how a reasoning pattern so appallingly illogical can exist at all: it comforts me to think it’s true, therefore, I will think it’s true. Or take the Bandwagon fallacy: ‘Everybody believes it (or does it), therefore, I must believe it (or do it) too’. That’s psychology at work, or herd instinct maybe, but it’s not a logical breakdown. If argumentation theorists know more about cognitive biases, they will be better equipped to say why a person is reasoning – and behaving – wrongly.
Besides, learning about the biases will help compile a more comprehensive inventory of fallacious reasoning patterns. Some cognitive biases are formalizable in the same fashion in which Douglas Walton and some other authors formalize argument schemes. For instance, the reasoning pattern affected by the Belief bias can be formalized as follows: ‘I share proposition p that argument A supports; therefore, A is a good argument’. The Anchoring effect can have the following form: object O has property P; P has positive (negative) value; therefore, O has positive (negative) value.
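Set out in the premise-conclusion layout used for the bias ad hominem above – my own sketch, offered only as an illustration of such formalizability – the two patterns could look like this:
Belief bias pattern:
Premise 1: I endorse proposition p.
Premise 2: Argument A supports p.
Conclusion: Therefore, A is a good argument.
Anchoring pattern:
Premise 1: Object O has property P.
Premise 2: P has positive (negative) value.
Conclusion: Therefore, O has positive (negative) value.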
Other reasoning schemes affected by cognitive biases are apparently more difficult to formalize (the Confirmation or the Hindsight bias, for example), but I’m sure theorists can find ways to deal with such instances too. In any case, if they study cognitive biases and compare them to the fallacies they know, they will have a better chance of understanding how the healthy brain may malfunction.
Of course, one shouldn’t forget that psychology is a purely argumentative science in the sense that all the conclusions psychologists make are liable to refutation. Indeed, when reading about cognitive biases I failed to be convinced by some of the arguments I found. Well, after all, psychologists are liable to the Psychologist’s fallacy by definition. Besides, there are controversies among cognitive psychologists about the existence and classification of many biases, just as there are controversies among argumentation theorists about the fallacies. So caution must be exercised when analyzing what psychology has to say. At the same time, no one is in a better position to evaluate the quality of arguments than scholars of argumentation.
NOTES
i. Although I’d prefer the word “unreasonableness”, as I like to reserve the word “rationality” for mathematical, abstract thinking, which I’m not talking about here.
ii. For example, compare these two lists from Wikipedia: http://en.wikipedia.org/wiki/List_of_cognitive_biases and http://en.wikipedia.org/wiki/List_of_fallacies
References
Gilovich, T. (1990). Differential construal and the false-consensus effect. Journal of Personality and Social Psychology, 59 (4), 623–634.
Judd, C. M., & Park, B. (1993). Definition and assessment of accuracy in social stereotypes. Psychological Review, 100 (1), 109–128.
Lewicka, M. (1998). Confirmation bias: Cognitive error or adaptive strategy of action control? In M. Kofta, G. Weary & G. Sedek (Eds.), Personal Control in Action: Cognitive and Motivational Mechanisms (pp. 233–255). Springer.
Mazzoni, G., & Vannucci, M. (2007). Hindsight bias, the misinformation effect, and false autobiographical memories. Social Cognition, 25 (1), 203–220.
Nisbett, R. E., & Wilson, T. D. (1977). The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35 (4), 250–256.
Sheppard, S. (n.d.). Basic psychological mechanisms: Neurosis and projection. The Heretical Press. Retrieved on May 03, 2010.
Strack, F., & Mussweiler, T. (1997). Explaining the enigmatic anchoring effect: Mechanisms of selective accessibility. Journal of Personality and Social Psychology, 73 (3), 437–446.
Stupple, E. J. N., Ball, L. J., Evans, J. St. B. T., & Kamal-Smith, E. (2011). When logic and belief collide: Individual differences in reasoning times support a selective processing model. Journal of Cognitive Psychology, 23 (8), 931–941.
Walton, D., Reed, C., & Macagno, F. (2008). Argumentation schemes. Cambridge: Cambridge University Press.