ISSA Proceedings 2002 – Reversing Perceptions Of Probability Through Self-Referential Argument: Interpretation And Analysis Of Protagoras’ Stronger/Weaker Fragment

The ancient sophists were accused of teaching how to make the worse argument the better. A key historical text recording this accusation is Protagoras’ ‘weaker/stronger’ fragment, which occurs in chapter twenty-four of the second book of Aristotle’s Rhetoric in the context of a list of fallacious syllogisms used by sophists. Richard McKeon (1941), in his edition of Aristotle, translates it as ‘making the worse argument seem the better’. The original meaning of this fragment has been the subject of debate among scholars of the history of rhetoric. Traditionally it has been taken to mean that sophists made logically inferior arguments look logically superior, but a revisionary understanding offered by Edward Schiappa (1991) asserts that it meant that the sophists improved the ‘weak’ arguments of the Athenian underclasses. In this presentation I will offer a new interpretation that is better founded in the context in which Aristotle cites Protagoras: the stronger/weaker fragment actually refers to a particular kind of self-referential argument. I will explain how these arguments work, offer a critique of Aristotle’s critique of them, and explore the peculiar conditions of their validity as well as their relation to the everyday logic of prejudice and stereotype.

1. Making the worse argument better: history and interpretation
Schiappa (1991: 103-116) is one of the most recent interpreters of the weaker/stronger fragment. In his discussion of it he makes two points. First, the ‘seem’ is spurious: it is not in the original text but was added by McKeon in translation. Second, the words translated as ‘worse’ and ‘better’, hetto and kreitto, are more accurately translated as ‘weaker’ and ‘stronger’. More importantly, Schiappa ultimately interprets the fragment in the context of Aristophanes’ play, Clouds, where two logoi (arguments or discourses) are personified. One is characterized as kreitto and allied to traditional Homeric values of honor and noble birth. The other is characterized as hetto and allied to ‘rational argument’ and ‘agnosticism’. Schiappa, completely dismissing Aristotle’s interpretation as prejudiced, takes the Clouds’ dialogue as evidence that Protagoras was interested in helping the weak and downtrodden become strong and displace the old order. This may or may not be true, but I do not believe that Aristotle should be dismissed without an explanation of how and why he misinterpreted Protagoras’ argument.

To unravel the meaning of this fragment, then, we should begin by quoting it in its full context. Here is the McKeon translation of Rhetoric 2.24, 1402a3-27:
Again, a spurious syllogism may, as in ‘eristical’ discussions, be based on the confusion of the absolute with that which is not absolute but particular. As, in dialectic, for instance, it may be argued that the what-is-not is, on the grounds that the what-is-not is what-is-not; or that the unknown can be known, on the grounds that it can be known to be unknown: so also in rhetoric a spurious Enthymeme may be based on the confusion of some particular probability with absolute probability.
Now no particular probability is universally probable: as Agathon says,

One might perchance say that this was probable –
That things improbable oft will hap to men.

For what is improbable does happen, and therefore it is probable that improbable things will happen. Granted this, one might argue that ‘what is improbable is probable’. But this is not true absolutely. As, in eristic, the imposture comes from not adding any clause specifying relationship or reference of manner; so here it arises because the probability in question is not general but specific. It is of this line of argument that Corax’s Art of Rhetoric is composed. If the accused is not open to the charge – for instance if a weakling be tried for violent assault – the defense is that he was not likely to do such a thing. But if he is open to the charge – i.e. if he is a strong man – the defense is that he is still not likely to do such a thing, since he could be sure that people would think that he was likely to do it. And so with any other charge: the accused must either be open to it or not open to it: there is in either case an appearance of probable innocence, but whereas in the latter case the probability is genuine, in the former it can only be asserted in the special case mentioned. This sort of argument illustrates what is meant by making the worse argument seem the better. Hence people were right in objecting to the training Protagoras undertook to give them. It was a fraud; the probability it handled was not genuine but spurious, and has a place in no art except rhetoric and eristic. (1402a3-27)

It would seem that when Aristotle talks about Protagoras’ practice of making the weak argument strong he has in mind something far more specific than making a good argument bad or championing the cause of the downtrodden lower classes. In the quoted passage Aristotle is objecting specifically to the practice of making the probable seem improbable on the grounds that there is a difference between particular and absolute probability. Perhaps this kind of probability argument is specifically what Aristotle believed Protagoras to be doing in ‘making the weaker argument stronger’.

Let us take a closer look at the logic of the weaker/stronger argument that Aristotle is criticizing. There are two interpretive levels in the quotation from Aristotle.  On the first level Aristotle provides an example of an argument, what we might call the ‘strong man argument’:
If the accused is not open to the charge – for instance if a weakling be tried for violent assault – the defense is that he was not likely to do such a thing. But if he is open to the charge – i.e., if he is a strong man – the defense is that he is still not likely to do such a thing, since he could be sure that people would think that he was likely to do it.

On the second level, he offers criticism and interpretation of the first level argument. Let us leave to one side for the moment Aristotle’s second level commentary, and along with it the question of whether Aristotle is justified in making it, and focus on providing a fuller description of the first order argument.

Like the famous paradox of the Cretan Epimenides, who said ‘All Cretans are liars’, the strong man argument is self-referential. As the sentence ‘All Cretans are liars’, when spoken by a Cretan, produces a paradox by obliquely referring to itself, so the strong man argument attempts to alter an audience’s perception of what is probable by using the conclusion ‘It is probable that this strong man committed this crime that could only have been committed by a strong man’ as the most important premise in its own counter-argument. Important structural differences exist between the liar paradox and the strong man argument, differences that we will explore in a moment, but in both there are conditions of intelligibility that have consequences that contradict those conditions upon which they are contingent. ‘All Cretans are liars’ is only intelligible as a true sentence or as a false sentence, but the consequence of its being true is that it is false, and the consequence of its being false is that it is true. Similarly, a given set of circumstantial evidence is intelligible as making it probable or improbable that Smith killed Jones, but if the evidence is understood as indicating that Smith is probably guilty, then this itself counts as a reason that he is probably not. In both cases the ‘then’ of an ‘if…then…’ statement refers back to the ‘if’ and contradicts it. These are conditionals at war with themselves.
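The shared structure, and a difference that will matter later, can be put schematically (a simplified formalization of my own, not anything in Aristotle’s text; L stands for the liar sentence and G for the claim that the strong man is guilty):

\[ \mathrm{True}(L) \;\Rightarrow\; \neg\mathrm{True}(L) \qquad \text{and} \qquad \neg\mathrm{True}(L) \;\Rightarrow\; \mathrm{True}(L) \]

\[ \mathrm{Probable}(G) \;\Rightarrow\; \neg\mathrm{Probable}(G) \qquad \text{but not conversely} \]

The liar sentence loops in both directions and so is a genuine paradox; the strong man argument, as first presented, runs in only one direction, converting an unfavorable probability judgment into a favorable one. As we will see, further reversals are possible, but each requires a fresh act of reflection rather than following automatically.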

Self-referential argument is very much a part of the sophistic tradition, a fact which lends credence to my interpretation of the weaker/stronger fragment. Self-referential argument can be found in other fragments of the early rhetorical tradition associated with the figures of Protagoras, Corax, and Tisias. Diogenes Laertius (9.56) reports that Euathlus, a student of Protagoras, refused to pay the fee he had agreed to give Protagoras for teaching him how to argue in court, complaining that he had not yet won a courtroom victory. They went to court to settle the matter. There Protagoras argued that, win or lose, he should be paid by his student because, ‘If I win this dispute I must be paid because I’ve won, and if you win I must be paid because you’ve won your first case’. This story is probably a spurious reworking of the earlier story of Corax and Tisias, the legendary Sicilians who were supposed to have been the first to teach rhetoric, which is itself likely to be a fiction. In the Corax and Tisias story, Corax, the teacher, argues as Protagoras does here, but Tisias, the student, argues that if he wins he should not have to pay because he’s won, and if he loses he should not have to pay because he still has not yet won (Schiappa 1991: 215). The historical factuality of the incidents is not important here. What is obvious is that these are teaching stories with deep roots in the rhetorical tradition. These arguments have the same self-referential form that Aristotle cites in reference to Corax and Protagoras and exemplifies with the strong man argument.

To this evidence add a passage from Plato’s Phaedrus which criticizes Tisias’ use of an argument about a strong man that bears more than a passing resemblance to Aristotle’s example of a weaker/stronger argument.

Socrates: Very well, then, take Tisias himself; you have thumbed him carefully, so let Tisias tell us this. Does he maintain that the probable is anything other than that which commends itself to the multitude?
Phaedrus: How could it be anything else?
Socrates: Then in consequence, it would seem, of that profound scientific discovery he laid down that if a weak but brave man is arrested for assaulting a strong but cowardly one, whom he has robbed of his cloak or some other garment, neither of them ought to state the true facts; the coward should say that the brave man did not assault him singlehanded, and the brave man should contend that there were only two of them, and then have recourse to the famous plea, ‘How could a little fellow like me have attacked a big fellow like him?’ (273 a-c)

In all cases, the argument cites the contingency of its own failure as a ground for its success. This is truly turning the weak argument into a strong one – one that is paradoxically strong because it is weak. What could better affirm Protagoras’ assertion that for every argument there is a counter-argument? Given all this, it seems probable that the weaker/stronger fragment does refer to a kind of self-referential argument. If this is accepted, the next question is whether Aristotle is justified in his criticism of the strong man argument. To answer this question we will need to venture into the still largely uncharted territory that lies between logic and psychology.

2. Is the strong man argument valid?
Self-referential paradoxes have been the source of some of the biggest and most enduring headaches of analytic philosophy in the twentieth century. Are they ever valid, and if so, under what circumstances? In this context, one must mention Bertrand Russell’s paradox and the theory of logical types, found in the Principia Mathematica (Whitehead & Russell 1910), which is designed to solve it. Russell’s paradox, as simplified and explained by Ernest Nagel and James Newman (1960), runs as follows:
Classes seem to be of two kinds: those which do not contain themselves as members, and those which do. A class will be called ‘normal’ if, and only if, it does not contain itself as a member; otherwise it will be called ‘non-normal’. An example of a normal class is the class of mathematicians, for patently this class itself is not a mathematician. An example of a non-normal class is the class of all thinkable things; for the class of all thinkable things is itself thinkable and is therefore a member of itself.
Let ‘N’ by definition stand for the class of all normal classes. We ask whether N itself is a normal class. If N is normal, it contains itself (for by definition N contains all normal classes); but, in that case, N is non-normal, because by definition a class that contains itself as a member is non-normal (24).
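In modern set-theoretic notation (a standard rendering of the paradox, not part of the Nagel and Newman quotation):

\[ N = \{\, x \mid x \notin x \,\} \qquad \Longrightarrow \qquad N \in N \iff N \notin N \]

Whichever answer one gives to the question of whether N is a member of itself, the definition of N forces the opposite answer.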

Russell, as a logician, declares that this apparent paradox occurs because of a confusion of logical types: one can never include a class within a class of individuals. For example, the class of dogs can never be included in a set that also includes individual dogs like Spot, Rex, and Ginger. ‘Dogs’ is of a different logical type than ‘Spot’. There are no non-normal classes. It is illegitimate for the class of all thinkable things to include individuals and classes together without hierarchical distinction. Even if there were non-normal classes, the same logic dictates that one cannot include a class of classes like ‘N’ in a class of classes that are classes of individuals. That’s like putting ‘dogs’ in with Spot and Ginger, but raised by one power.

The theory of logical types places certain limits on self-reference. An individual can refer to itself, but a class cannot, through self-reference, include itself as an individual within itself. By the same token, a class of classes cannot by self-reference include itself as one of the classes within itself, which is more to the point in unraveling Russell’s paradox.

One might be tempted to think that the strong man argument unravels in a way that is similar to Russell’s paradox and that Aristotle’s claim that it confuses absolute probability with particular probability is valid and in fact a very early articulation of the theory of logical types. And this is most likely true for the part of the argument that refutes the assertion that the probable is the improbable. But we should be more careful with the strong man argument itself. After attempting to use the theory of logical types as a basis for his own theory of framing, the anthropologist Gregory Bateson (1972: 177-193) came to the conclusion that logical types are not in fact a very good model of human communication. ‘It would be bad natural history to expect the mental processes and communicative habits of mammals to conform to the logicians’ ideal’ (180).
We violate the theory of logical types every time a discussion of the rules of a game becomes part of the game itself – a predicament which is the essence of a certain kind of politics, for example, when politicians debate how to redraw the districts which they represent.

In order to more accurately describe play, politics, schizophrenia and other complex mammalian behavior and misbehavior, Bateson formulated a theory of psychological frames, a theory which has proved to be influential in American communication studies, inspiring Erving Goffman’s Frame Analysis (1974), the concept of metacommunication formulated by Watzlawick et al. (1967), the recognition of the argumentative tactic that Herb Simons called ‘going meta’ (1994), and the widely disseminated general concept of framing. Although Bateson’s theory of psychological frames was inspired by Russell’s theory of logical types, Bateson pointed out that there are some important differences between the logical and the psychological. Logical types are transitive: if A is greater than B, and B is greater than C, then A is greater than C. It is because of their transitivity that logical types cannot be haphazardly transposed. But psychological frames are intransitive: just because A frames B and B frames C doesn’t mean that C can’t then frame A. This kind of thinking is sometimes nothing more than an empty logical circularity, as in the textbook example of circular reasoning: ‘Everything that the Bible says is true because God wrote it. It is true that God wrote the Bible because it says so in the Bible’. But there are certain times when such thinking is valid – if not logically valid, then psychologically valid.
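Bateson’s contrast can be stated schematically (my notation, not his): writing x ≻ y for ‘x is of higher logical type than y’ and x ▷ y for ‘x frames y’,

\[ A \succ B \;\wedge\; B \succ C \;\Longrightarrow\; A \succ C \qquad \text{(logical types: transitive, cycles impossible)} \]

\[ A \triangleright B \;\wedge\; B \triangleright C \;\not\Longrightarrow\; \neg (C \triangleright A) \qquad \text{(psychological frames: cycles permitted)} \]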

These patterns of circular logic reflect the inherent circularity of reflective thought. ‘I am thinking that…’ is an act of self-reference which generates valid circularity, the kind of circularity that is at the heart of the strong man problem. If one carefully considers the situation of the strong man, one must concede that it is at least possible that, if the strong man engages in self-reflection, the fact of his great strength might actually figure as a reason for him to be extra careful about abusing his strength. And a plausible defense is that this reflective strong man would not be stupid enough to do something of which others would so readily suspect him. Thus our quick suspicion of him can count as a reason that we should be less suspicious of him. The reason that this is in fact a valid type of argument is that human beings are reflective and reactive creatures in a way that Russell’s classes of classes are not[i]. Self-reference is built into thought, and the realization that a certain course of action is probable can change the probability of that course of action. This defense is not possible for the liar paradox, which is a paradox of self-reference but does not turn on the probability of a course of action. But because of the inherent self-referentiality of human thought, reframings of probable courses of action have a special intransitive logic: one of the pieces of evidence that can count for or against the probability that a reflective human will do something can in fact be a conclusion about the probability of her doing it.

Obviously, the self-referential logic of reflexive psychological frames can become circular, but it is a circularity in which we do in fact often live. More than thirty years ago, the Scottish psychiatrist R.D. Laing charted, in free verse form, the baroquely pathological twistings of human logic loops in his rich but scary little book, Knots (1970). It is full of little nuggets like the following:

They are playing a game. They are playing at not
playing a game. If I show them I see they are, I
shall break the rules and they will punish me.
I must play their game, of not seeing I see the game. (1)

Or then again,

Jack feels Jill is devouring him.

He is devoured
by his devouring fear of
being devoured by
her devouring desire
for him to devour her.

He feels she is eating him
by her demand to be eaten by him. (16-17)

And lest we forget that we are also in the land of self-fulfilling prophecies:
Jack frightens Jill he will leave her
because he is frightened she will leave him. (14)

Laing reminds us of how often convoluted cycles of self-reference are at play beneath the surface of intimate relations, not to mention trading on Wall Street, global power politics, and the edicts of bureaucrats. Joseph Heller’s famous Catch-22, after all, also has the form of a self-contradictory self-reference. Furthermore, if some unscrupulous editor surreptitiously spirited the following rendering of the strong man argument into a 2005 edition of Knots, it’s unlikely that anyone would catch on:

A is either likely or unlikely to have committed crime X.
If A is unlikely to have done it, then A is likely to be innocent.
If A is likely to have done it, then A would realize he would be suspected of X.
If A knew he would be suspected of X, A is unlikely to have committed X.
Therefore, A is unlikely to have committed X whether he was likely to have committed it or not.

Of course, one could add that A, knowing that because everyone would see that it would be unlikely for someone so likely to commit crime X to actually commit it, would be likely to take advantage of the situation and commit the crime that he was thought to be so unlikely to commit because he was so likely to commit it. Once you begin one of these cycles of reflexive reframing, no outcome is final. Another level is always theoretically possible, although as a practical matter the human mind has trouble functioning beyond level four or so.
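The mechanics of these reversals can be made concrete in a toy simulation (an illustrative sketch of my own, not anything in the ancient sources; the alternation rule and the level-four cutoff come from the paragraph above):

def strong_man_verdict(base_likely, reflection_levels):
    # base_likely: the first-order judgment (a strong man is likely
    # to have committed a crime requiring strength).
    # Each level of reflection reverses the previous conclusion: if A
    # seems likely, A would anticipate suspicion and refrain (unlikely);
    # if A seems unlikely for that very reason, A could exploit that
    # appearance (likely again), and so on.
    verdict = base_likely
    for _ in range(reflection_levels):
        verdict = not verdict  # each reflexive reframing flips the verdict
    return verdict

for level in range(5):  # the mind falters beyond level four or so
    print(level, 'likely' if strong_man_verdict(True, level) else 'unlikely')

Each added level flips the conclusion, so no level is final; the only terminus is practical, the point at which the chain of reflections exceeds what an arguer or audience can follow.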

The upshot of all this is that Aristotle is not justified in his criticism of the strong man argument. In fact, the critical section of the quoted passage does not hold up well at all. The argument ‘What-is-not is what-is-not so not being is being’ fails for a different reason than the argument about something improbable probably happening. As is well known, the argument about being fails because it does not distinguish between the existential ‘is’ and ‘is’ as a logical copula (Ackrill 1971). Aristotle gives an adequate account of the failure of the probability argument: it does not follow from the observation that something improbable will probably happen that the improbable is probable, and his distinction between particular and absolute probability might well presage the theory of logical types. But the strong man argument cannot be tarred with the same brush, being protected by the special consideration due to human reflection, circular though it may be.
This is not to say that there are no criticisms which Aristotle might have leveled at the strong man argument. It is a trick argument with specific limits to its validity, and these limits are to be explored in the next section.

3. Limits to the validity of strong man arguments
Firstly, it must be admitted that I have been using the term ‘probable’ in a very suspicious way. Consider the following strong man modification of the Epimenides paradox: Epimenides the Cretan liar says, ‘Being a Cretan, it is not probable that I would lie, for I know that, because everyone suspects me of lying, I would be found out’. But if all Cretans thought like Epimenides then it would be improbable that any particular Cretan would lie, and this would invalidate the foundational premise that Cretans are liars. The only way around this problem is to recognize that we are not dealing with a statistical kind of probability that has predictive value. Probability must be understood in its ancient sense for the strong man argument to be valid. The ancient sense of probability belongs to the logic of reputation, stereotype, and prejudice.

To make clear the proper way of reading probability in ancient Greek texts, let me digress briefly to say a few words about the Greek word that is translated as probability, eikos. Eikos should not be understood as probability in the modern statistical sense. A better translation is ‘likely’, for the meaning of the root eoika is ‘to be like or similar’. We take our word ‘icon’ from it. Both eikos and the English word ‘likely’, in fact, work in the same way, using similitude to indicate ‘probability’[ii]. How does ‘like’ come to stand for ‘likely’? Actually, the relationship between the English words ‘like’ and ‘likely’ offers a hint. In judging the truth of a picture, one might look closely at how much it resembled the objects which it represented. If the picture was like what one remembered of the objects pictured, then one would find the events portrayed likely to have occurred. In an analogous way, an argument that Smith killed Jones would seem likely if Smith was depicted in a way that was like a stereotypical image of a murderer held by members of the audience. The likely was a fit between two sets of appearances, those presented in the argument and those in the experience of the audience. In this way, in argument as in painting, the like became the likely.

It is this sense of the probable as a likeness between an individual and a stereotype, a sense that is still much more operational in today’s world than teachers of critical reasoning would like, that we must bring to our understanding of the strong man argument. To say that a Cretan will probably lie does not mean that 9 out of 10 Cretans will lie when asked a particular question; it means that Cretans are like the stereotypical Cretan, who is a liar, and so therefore it is likely that they will lie. The stereotype provides an unassailable foundation upon which loops of self-referential logic can grow, preventing fatal contradiction in much the same way that Russell’s logical types do. When we read probability in this sense, Epimenides’ thought process would run as follows: Epimenides knows that Cretans are and always will be thought to be liars and that no action of his can change that. He knows that people will expect him to act like the stereotypical Cretan and lie. Therefore, he takes extra care to deal honestly with people, as he knows that everything he says will be checked up on. The probability we are dealing with here belongs not to the logic of the weather report, but rather to the logic of prejudice.
The second limitation on the validity of the strong man argument follows from the first. Because we are dealing in the logic of stereotype, strong man arguments can only validly occur in situations where an individual is aware of and cares about what others think of him or her. If Epimenides didn’t care about what others thought about his veracity, there would be no reason for him to take extra care to tell the truth. If he lied and was caught, it simply wouldn’t matter to him. Strong man loops only begin to occur in situations where an individual is contemplating how he or she appears to others.
The third limitation on the validity of the strong man argument is this: strong man arguments only apply to conditions that can be willfully brought about or avoided. If all Cretans were pathologically compulsive liars who couldn’t tell the truth even if they wanted to, then a strong man type argument would be irrelevant. Epimenides wouldn’t be able to tell the truth, even if he knew that everyone knew that he was lying.

The final limitation is that, although valid under certain circumstances, strong man arguments have no predictive value whatsoever. They function more as rationalization than reason, potentially valid but never entirely sound. This is true not only because of the nature of the ancient sense of probability, but also because the logic can reverse itself ad infinitum. Epimenides might tell a lie and try to convince those he told it to that he would never lie because he knew that, as a Cretan, he could never get away with it. Even though this is about the practical limit of reversal (any further reversal would strain both understanding and credibility), it is enough to ensure that for every self-reference there is a counter self-reference. To this uncertainty must be added uncertainty about the exact way in which a situation is made to refer back to itself. Consider that Epimenides might, despite being a Cretan, be an honest man at heart. But he might decide that because he is a Cretan and no one will believe him anyway, he might as well tell lies. The argument that Epimenides, despite being honest, cannot help but tell lies because he is a Cretan is every bit as ‘probable’ as the argument that Epimenides, although basically dishonest, would not dare tell lies because he is a Cretan. And of course each of these arguments can be reversed by moving to a higher level of reflexivity.

The fact that the conclusions of strong man type arguments can be reversed so many times in so many ways means that, in the final analysis, a wise individual must make decisions about what to do based on considerations beyond the mirror-play of appearance. An evening of reflection would demonstrate to Epimenides that he can generate good reflexive arguments both for and against lying. To decide whether truth or falsehood would come out of his mouth the next day, he would need to settle his mind that the truth is intrinsically valuable, whether or not anyone else believes he is telling the truth, thus removing the condition of caring about appearances necessary to generate the strong man loop. If self-referential arguments were indeed prevalent in the courtrooms of Plato’s day, the necessity of finding argumentative considerations independent of appearance might have been one of the pressures that caused Plato to gravitate towards his system based on forms beyond appearance.

4. Conclusion
In this essay I have made the case that when it was said that Protagoras could make the weaker argument stronger, what was being referred to was a method of self-referential argument used by Protagoras and other sophists. Because of the self-referential nature of human thought, these self-referential arguments can have a certain kind of validity. Schiappa’s ultimate assessment that weaker/stronger arguments worked on behalf of the Athenian underclasses might not have been that far off, in the sense that these arguments were useful in turning the prejudicial logic of eikos against itself. All one has to do is go back to some of the working examples and substitute ‘gypsy’ for ‘strong’ and ‘black’ for ‘Cretan’ to get the idea: ‘Because I am a gypsy, they will think that I have stolen the money. But because I knew that they would suspect this, I would not have done it’. But unfortunately the nature of strong man arguments makes possible infinite reversals, which can just as easily serve prejudice as oppose it. Still, exploration of these arguments gives us more than a new insight into a fairly obscure fragment from an ancient sophist; it gives us a Laingian insight into the tortured logic that prejudice imposes on its objects.

NOTES
[i] Even in the fields of philosophy and pure mathematics the theory of logical types has not escaped modification and challenge. See Quine (1970).
[ii] Eikos took on its sense of ‘probability’ fairly late. Eikos does not occur in Homer. A similar word, eikuia, is used, but always to designate resemblance. The probability meaning of eikos is also absent from Hesiod and Pindar. In Pindar’s For Melissus of Thebes 4.45 eikos does occur, but means ‘like’. Eikos and closely related words occur eight times in Aeschylus, and only once, in Seven Against Thebes, can it be taken to mean ‘likely’ (Agamemnon 575, 586, 760; Seven Against Thebes 519; Eumenides 194; The Libation Bearers 560, 590; Suppliant Women 283). Eikos is not problematized as a probability term until Plato, and not defined as the probable – that which generally happens – until Aristotle’s Rhetoric 1375a3.

REFERENCES
Ackrill, J.L. (1971). Plato and the Copula: Sophist 251-59. In: Gregory Vlastos, comp. Plato: a Collection of Critical Essays. Garden City, NY: Anchor Books, 210-222.
Aristotle. (1941). The Basic Works of Aristotle. R. McKeon, trans. New York: Random House.
Bateson, G. (1972). Steps to an Ecology of Mind. New York: Ballantine Books.
Goffman, E. (1974). Frame Analysis: An Essay on the Organization of Experience.  Cambridge, Mass.: Harvard University Press.
Laing, R.D. (1970). Knots. New York: Vintage Books.
Nagel, E., and Newman, J. (1960). Gödel’s Proof. New York: New York University Press.
Plato. (1961). Phaedrus. In: E. Hamilton and H. Cairns (Eds.) The Collected Dialogues of Plato (pp. 475-525), Princeton: Princeton University Press.
Quine, W.V. (1970). Russell’s theory of types. In: E.D. Klemke (Ed.) Essays on Bertrand Russell (pp. 372-387, Ch. 23), Urbana/Chicago: University of Illinois Press.
Schiappa, E. (1991). Protagoras and Logos. Columbia: University of South Carolina Press.
Simons, H.W. (1994). Going meta: Definition and political applications. Quarterly Journal of Speech 80.4, 468-481.
Watzlawick, P., Beavin, J.B., and Jackson, D.D. (1967). Pragmatics of Human Communication.  New York: W.W. Norton.
Whitehead, A.N. and Russell, B. (1910-13). Principia Mathematica. Cambridge: Cambridge University Press.




ISSA Proceedings 2002 – Reasonableness Before Rationality: The Case Of Unreasonable Searches And Seizures

I find Perelman’s (1979) claim that the rational and the reasonable are distinct, freestanding ideals – that they are not interchangeable terms but, in certain cases, in precise opposition – to be his most important political insight. For instance, Perelman argues that, as applied to the law, the “rational corresponds to adherence to an immutable divine standard or to the spirit of the system, to logic and coherence, to conformity with precedents, and to purposefulness; whereas the reasonable, on the other hand, characterizes the decision itself, the fact that it is acceptable or not by public opinion, that its consequences are socially useful or harmful, that it is felt to be equitable or biased” (p. 121). The rational corresponds to mathematical reason; the reasonable corresponds to common sense. The rational purports to transcend all particular situations and apply equally to all persons regardless of circumstance; the reasonable is defined in relation to, and bound by, time, place and situation. However, both the rational and the reasonable strive for universality: the rational through an approximation of divine reason or immutable principle, the reasonable through the construction of a working consensus achieved through open and searching dialogue over the dictates of common sense and the standards of fair cooperation. It is because both the rational and the reasonable strive for universality – and, more precisely, because each standard routinely fails to achieve universality due to the structural indeterminacies of communication as well as the contingencies that mark social life – that they stand in a productive dialectical tension. Neither the rational nor the reasonable is sufficient by itself to ensure either a true or a just social order. The rational, if left unchecked by the dictates of common sense and fairness, would devolve into an inhuman instrumentality. The reasonable, if unchecked by the systematicity of the rational, would devolve into ethnocentrism. Hence, for Perelman, it is “the dialectic of the rational and reasonable, the confrontation of logical coherence with the unreasonable character of conclusions, which is the basis for the progress of thought” (p. 120).

Much of the reception of Perelman’s work, it seems, abandons this dialectical stance – where each of these distinct standards would be entertained simultaneously, not to reject one in favor of the other, but to have them constantly modify each other – in favor of the claim that practical reason is exemplary for theoretical reason. Perelman, and even more so his most articulate defenders such as Crosswhite (1996), Maneli (1994) and McKerrow (1982), hold that logical criteria, epistemic principles, and methods of inquiry are the result of a socialized, embodied, practical constellation of reasoning practices and norms of justification. These criteria, principles, and methods (which combine to form a community’s understanding of rationality) do not exist as antecedent conditions for discovery and justification, but have emerged over time as the consequences of dominant processes of inquiry. Hence, the criteria of theoretical reason do not govern practical reason: practical rationality is the grounds for, and therefore determines, the cogency of technical rationality and sets the limits for it. The relationship between the rational and the reasonable set out in the classic epistemic account is thereby inverted: the reasonable, understood as common sense, is the condition of possibility for the rational. It is this reversal – the elevation of the classical ideal of phronesis over the modern norm of instrumental rationality, allowing public judgment to serve as a normative standard for critiquing scientific knowledge – that is the hallmark of contemporary rhetorical theory.

In what follows I support the counter-intuitive claim that it is possible to agree wholeheartedly with Perelman and his interlocutors about the nature of rationality and its social-communicative basis and yet hold that this treatment of the reasonable as the grounds of rationality may have grave political consequences. That is, I contend that the move from conceptualizing the rational and reasonable as distinct, freestanding ideals to an understanding of reasonableness as a socialized, communicative, and embodied correction to modern forms of instrumental reasoning sacrifices too much. The reasonable, as I hope to show, is better thought of in purely political, that is non-epistemic, terms: as the standard of justification concerned with the legitimacy of the social application of power, a legitimacy which is cashed out in terms of both democratic participation and the preservation of the minimum psychic and material conditions of freedom, equality and dignity. It is this full-bodied sense of the reasonable – one that, as Perelman argues, is embodied in our common sense, understood not as a shared body of beliefs but as the common ethical sensibility called into being through the articulation of a political sense of justice – that is robust enough to serve as the dialectical counterpart to the rational.

But rather than continue this argument in theoretical terms, as I have done on several occasions in the past (1999; 2001), here I pursue this line of argument by working through a case. Specifically, I will look at the career of the reasonable in the U.S. justice system; a career that in one important aspect turns on the question of what constitutes a reasonable search and seizure. For the purposes of this essay I will focus on an important case, Terry v. Ohio (1968) and its progeny, a case that deals with the constitutionality of the police practice of “stop and frisks,” investigative detentions and searches for weapons on the bodies of criminal suspects on the street.

1.
It was (and in some areas still is) common practice for police officers to patrol minority neighborhoods for anyone they thought looked “dirty,” stop their car, jump out, throw their suspect up against the wall and give him or her a “toss” – a thorough search through his or her clothing and belongings, often accompanied by physical and verbal abuse (Harris, 1998). Prior to 1961 any contraband discovered in a toss could be used to arrest and convict, no questions asked. Mapp v. Ohio (1961), however, changed the rules for any case in which the Fourth Amendment arose. After Mapp, which held that the exclusionary rule applied to the states, police were instructed that all searches had to be based on probable cause. Probable cause, Mapp declared, exists “where the facts and circumstances within the officer’s knowledge and of which they had reasonably trustworthy information are sufficient in themselves to warrant a man of reasonable caution in the belief that an offense has been or is being committed” (p. 655). Thus gut instincts or bare hunches were no longer sufficient: the police had to “know based upon the facts present before them” that suspects were committing a crime if any contraband turned up in a search was to be used as the basis for conviction.

For the seven years between Mapp and Terry, probable cause constituted a reasonable search and seizure. Civil rights activists applauded the court’s application of the probable cause standard, arguing that its rigorous epistemic norms helped deter malicious police conduct. Because the probable cause standard, the NAACP argued in its amicus brief in Terry, “seeks precisely to objectify, to regularize, the reasoning process by which the judgment of allowability of police intrusions is made,” it is the only effective means of diminishing “as much as is institutionally possible the impact of subjective factors” underwriting the conduct of racist police officers (Greenberg, et al., 1968, p. 603). Three epistemic requirements inherent in the probable cause standard were said to help deter discrimination: the use of a reasonable man standard to depersonalize the judgment of the officer’s conduct; the removal of all questions of value and any normative evaluation of the desirability of police conduct, thereby reducing the question to one of objective facts; and the directive that judges remove all traces of professionally motivated intuition in favor of an “independent and autonomous judgment.” “In short,” the NAACP concluded, “probable cause is a common denominator for police, judicial and citizen judgment. It permits the judge, after hearing the officer’s account of his [or her] observations and his [or her] inferences from them, to pass a detached, independent and objective judgment on the rationality of those inferences” (p. 605).

By 1968, there was considerable backlash against the Warren Court’s criminal procedure decisions. Nixon was making campaign promises to reverse the Warren Court’s liberal jurisprudence and restore respect for law and order. In the wake of race riots in New York, Memphis, Nashville, Minneapolis, Chicago, and Washington, D.C., police advocates, such as Americans for Effective Law Enforcement and the National District Attorneys Association, were arguing that the stringent requirements of probable cause should not be held applicable to entire facets of policing such as investigative stops and/or frisks for concealed weapons. It was against this backdrop that the Supreme Court agreed to hear Terry v. Ohio, a case challenging the constitutionality of stop and frisks without probable cause.

On October 31, 1963, Detective Martin McFadden, a 39-year veteran of the Cleveland police force, observed John Terry and Richard Chilton walk down the street and peer into a store window (he was unsure if it was a jewelry store or a United Airlines ticket office) approximately twenty times between the two of them over the course of twenty minutes. During this time Detective McFadden saw a third man approach, speak briefly with Terry and Chilton and then depart. Terry and Chilton soon left the corner and went down the street where they met the third man, and the three of them began conversing. Suspecting that the three men were casing the store for a robbery, McFadden approached them, identified himself and asked for their names. After receiving a “mumbled response” he spun the three men around and patted down the outside of their clothing. He found a gun in both Terry’s and Chilton’s coat pockets and proceeded to charge each of them with carrying a concealed weapon. Terry and Chilton petitioned the Supreme Court to resolve whether the officer had, by frisking them, arrested them without probable cause in violation of the Fourth and Fourteenth Amendments.

The court held that the stop and frisk was reasonable although Detective McFadden lacked both a warrant and probable cause. Chief Justice Warren argued that because judges could not approve street stops in advance – a stop-and-frisk warrant being impossible – the reasonableness of a stop and frisk is not judged by the presence or absence of probable cause. To determine reasonableness, the Court first had to assess whether the governmental interest served by the search is sufficient to justify the level of intrusion on the individual’s privacy and, second, whether the officer had a good enough reason to justify the stop. The Court held that the intrusion presented by a short stop and frisk, while significant, was outweighed by the need to combat crime, and that Detective McFadden’s field experience and the nature of the facts (though themselves a collection of innocent behaviors) were sufficient to warrant reasonable suspicion. Furthermore, the Court ruled that a frisk is allowed if a reasonably prudent officer would suspect that the person is armed and dangerous. Given Detective McFadden’s suspicion that a violent crime was imminent, it would be unreasonable to ask him to attempt to question potentially dangerous criminals without having the power to check them for weapons.

In Terry v. Ohio the Supreme Court radically changed the law. Police could now perform investigative stops and bodily searches on citizens as long as they had an “articulable suspicion founded upon reason.” And it is this standard of “reasonable suspicion” – more than an inchoate hunch but less than probable cause – that has been the centerpiece of Fourth Amendment jurisprudence since. A stop and frisk is reasonable, the Court claimed, when the officer can articulate the reasons for his or her suspicion that criminal activity was imminent. Reasonable suspicion differs from probable cause by recognizing that facts alone do not make for the officer’s suspicion; these facts are always interpreted within the total context and in light of the officer’s past experience. In the hurried and often volatile context of police work, suspicion is often founded on “common-sense” and “prudence” rather than rational, dispassionate, and autonomous induction. Like professionals in other fields, a trained officer with experience in a community can often sense something wrong even if he or she does not have the tools of legal and social scientific justification to back her or his claims. To satisfy the court, the officer simply must be able to produce a coherent and plausible narrative account of why he or she suspected wrongdoing. In essence, Terry worked by mapping a continuum of police-citizen interaction (arrest – stop and frisk – noninterference) onto a continuum of epistemic seriousness (probable cause – reasonable suspicion – mere hunch). Each interval down the scale of police interference was met with a corresponding decrease in the degree of epistemic seriousness needed to justify the intrusion. Rhetorically, by placing reasonable suspicion in the middle of the continuum – between a probable cause standard founded upon a sterile and impossible account of rationality and the arbitrary relativism of the inchoate hunch – and by mapping those epistemic norms onto an account of police conduct poised between the maximally intrusive threat of an arrest and the equally fear-inducing image of police so bound by legal technicalities that they are unable to do anything, Terry works to render reasonable suspicion an inherently attractive compromise between the need to fight crime and respect for individual freedom. Reasonable suspicion, in short, sounds eminently reasonable.

2.
The “Terry stop” and “Terry frisk” have become routine law enforcement practices. And the reasonableness test set out in Terry no longer holds just for stops and frisks but has become the basis on which most Fourth Amendment claims are decided. Instead of reasonable suspicion carving out a narrow exception to the probable cause standard, as Warren intended, reasonable suspicion is now the norm and probable cause the exception – a particular type of inference, one founded on especially strict standards of proof, applicable only in those cases when a warrant is necessary. Thus, in this case, the reasonable has become the grounds of the rational.

This expansion of Terry has been a gradual one. While it is prevalent in the rhetoric of the Rehnquist Court – and one need look no further than Brennan and Marshall’s consistent stream of dissents (in fact both Justices recanted their decision in Terry) to see how pliable the idea of reasonable suspicion has become in the hands of Rehnquist – it is the lower courts that have done most of the dirty work in eviscerating the probable cause standard, requiring less and less evidence for a search and seizure (Harris, 1998). Two doctrinal devices have been key: the acceptance of categorical judgments as the basis for reasonable suspicion, and a practice of post-hoc review that rationally reconstructs police accounts so they almost always meet the standard of reasonableness.

The last twenty-five years have seen the lower courts steadily move away from Terry’s insistence on individualized suspicion. “Instead,” as David Harris points out, “lower courts have begun to rely on a categorical jurisprudence – that is, an ascertainment of whether the suspect fits into one or more overly broad categories, instead of an examination of facts that would tell both the officer on the street and a court deciding a suppression motion whether or not there was reasonable suspicion to believe that a person was involved in crime and armed” (1998: 987). Thus police can stop based on factors such as being in a high crime area, acting evasive, or looking like you do not belong in a certain part of town, regardless of the actual circumstances. Moreover, police officers can perform bodily searches if they believe that the suspect is involved in a “highly dangerous” activity such as narcotics trafficking (the courts have also allowed searches for possession, even though such cases are much less likely to involve weapons) and burglary. Police also are free to search any and all persons accompanying suspects, including all passengers involved in a traffic stop or, in a decision that came down just this week, all persons who are on the same bus as a suspect. The problem with such categorical judgments, in addition to the obvious fact that they are often merely pretexts for harassing minorities, is that they are very inaccurate indicators of criminal behavior and will inevitably affect many innocent citizens.

If nothing comes of the search, which is often the case, the officer will never have to articulate the reasons for the stop and there will be no basis on which to challenge any indiscretion or abuse. The stops that an officer has to justify are those resulting in prosecution, and in those cases the officer will get considerable help from the prosecutor (and because the facts warranting suspicion are considered objective and their interpretation merely a recollection – which may have to be drawn out, like a doctor making a diagnosis on the basis of reported symptoms – the court condones this practice). Further, since courts prefer to rely on the “common-sense” of the police officer in the field, virtually all stops are affirmed. In determining reasonableness the Court eschews the analysis of probabilities as an example of an unreasonably “Procrustean” application of legal formalism in favor of a “practical, non-technical, common-sense” standard of proof. According to Chief Justice Burger in United States v. Cortez (1981), “the process does not deal with hard certainties, but with probabilities. Long before the law of probabilities was articulated as such, practical people formulated certain common-sense conclusions about human behavior; jurors as fact finders are permitted to do the same – and so are law enforcement officers. Finally, the evidence thus collected must be seen and weighed not in terms of library analysis by scholars, but as understood by those versed in the field of law enforcement” (418). Even a cursory reading of recent cases provides many examples of the Court performing the most charitable of rational reconstructions, filling in the missing premises and supplying the appropriate backing so that officers’ common-sense inferences take the shape of rational argument.

If living in a high crime area and acting evasive are reasonable grounds for a stop and frisk, it is obvious that minorities will find themselves subject to a disproportionate number of searches and seizures. African Americans and Hispanic Americans are likely to find themselves in high crime areas simply because they live and work there. Moreover, they may have very good reasons for wanting to avoid contact with the police, given the history of baseless searches being used as a pretense for public humiliation and physical abuse. This results in, as David Harris points out, a vicious cycle. “Police use Terry stops aggressively in high crime neighborhoods; as a result, African Americans and Hispanic Americans are subjected to a high number of stop and frisks. Feeling understandably harassed, they wish to avoid the police and act accordingly. This evasive behavior in (their own) high crime neighborhoods gives the police that much more power to stop and frisk … [Hence] those communities most in need of police protection may come to regard the police as a racist, occupying force; … an American form of apartheid, in which racially segregated areas are patrolled by police agents … imbued with special powers because of the dangerous nature of the areas they control” (1994: 681). The erosion of Terry in the name of reasonableness makes abundantly clear, as Gregory Williams concludes, “that the recent line of Terry cases … is this Court’s version of Plessy v. Ferguson.” These decisions “clearly permit the establishment of separate and unequal societies … If society has to live with results of these decisions, then the Supreme Court must face the fact that instead of contributing to the development of an equal and just society, it is contributing to racial polarization by its refusal to explicitly discuss the racial implications of its decisions” (1991: 586).

The insistence that the constitution is “color-blind” – the famous phrase of Harlan’s Plessy dissent – was incorporated into the Court’s understanding of the Fourth Amendment in Whren v. United States (1996). Writing for a unanimous court, Justice Scalia unequivocally stated that the officer’s subjective intentions, even if racist, are irrelevant to the determination of the unreasonableness of a search or seizure (though the court in Brignoni-Ponce (1975), a case concerning border searches, claimed that the race of the suspect can be a positive factor in assessing whether an officer has reasonable grounds for a search). Scalia argued that if the Constitution prohibits discriminatory law enforcement practices, the remedy should be found in the Equal Protection Clause rather than the Fourth Amendment. This is an empty promise. An equal protection violation is almost impossible to prove: a defendant must show, by a preponderance of evidence, that the officer who stops him or her treats African Americans (for instance) differently, as a whole and with conscious intent, than Whites. But since police reports and judicial opinions often leave out the suspect’s race, and the police force and justice department are under no obligation to provide statistics on stops, this claim cannot be substantiated (and here it should be pointed out that Warren wrote Terry in race-neutral language, even though Terry and Chilton were African American and Detective McFadden could not articulate any reason other than that he “did not like the way they looked” for watching Terry and Chilton for twenty minutes).

I think there is a clear and convincing case to be made that the Rehnquist Court has reversed the promise of the Fourth Amendment. Rather than securing the citizenry from “unreasonable” governmental intrusions and protecting the conditions of possibility for personal dignity for all, the current interpretation of the Fourth Amendment is now part of a strategy, with reasonableness as its primary analytical weapon, for expanding police power.

Many constitutional commentators echo the NAACP’s sentiment that the only way to halt this evisceration is to return to a pre-Terry formulation of probable cause: “Indeed, the mission of stop and frisk theory to establish some third state of police powers, midway between those that can be exercised wholly arbitrarily and those available only upon probable cause, has the allure of sweet reasonableness and compromise. The rub is simply that, in the real world, there is no third state; the reasonableness of theory is paper-thin; there can be no compromise. Probable cause is the objective, solid and efficacious method of reasoning – itself highly approximate and adaptable, but withal tenacious in its insistence that common judgment and detached, autonomous scrutiny fix the limits of police power … Police power exercised without probable cause is arbitrary. To say that the police may accost citizens at their whim and may detain them upon reasonable suspicion is to say, in reality, that the police may both accost and detain citizens at their whim” (Greenberg, et al., 1968: 56-57).

As comforting as the ideal of a dispassionate and objective norm of rationality sounds when confronted with the alternative of arbitrary or, worse, malicious power – and what the NAACP really fears is that malicious cops will be able to harass minorities unfettered by the law (but how can a standard of proof really deter violent police conduct if police typically do not arrest those they harass?) – a return to a pre-Terry probable cause standard is neither feasible nor desirable. First, the doctrinal framework simply does not exist to overturn Terry without the Court simply admitting it was wrong (think of all the convictions that would be challenged), and the political climate certainly weighs against such a return. Second, if the court were to return to the probable cause standard it most certainly would react by substantially narrowing the range of police conduct accountable to the Fourth Amendment (remember that the most common argument of police advocates at the time of Terry was that a stop and frisk did not constitute a search or seizure at all; furthermore, police misconduct was rampant before Terry). Finally, as Akhil Amar (1998) argues, the court may very well continue down its current path by simply watering down the probable cause requirement altogether (which in many ways it already has). “If that happens we will have betrayed the textual command of the second clause of the Fourth Amendment: We would be allowing warrants on something other than true probable cause. In other words we would be authorizing general warrants [warrants that give unlimited power to search, survey and seize anything that is held to contravene the State’s objectives, something like section 218 of the USA PATRIOT Act] – precisely the evil the Framers meant to reject in the second clause” (1116).

But, most importantly, simply calling for a return to the probable cause standard neither answers the initial questions posed to the Terry Court – what degree of certainty can realistically be expected of police officers in the midst of investigating a crime, and what sorts of preventive measures can they take to protect themselves – and thus cannot explain the shift to reasonable suspicion, nor does it alleviate the real fears of malicious police conduct.

To address the first issue is to ask what sorts of reasons should entitle police to stop and frisk a suspect. Warren’s answer, which I think a good one, was that the officer must have good reasons to think that a crime was about to take place. Warren understood that having a good reason did not mean certitude or even a mathematically precise sense of probability. He also understood that in the midst of the situation police have to rely on less evidence and react in less time than most of us would ever be willing to; that is, he understood that there was such a thing as a valid hunch. Police work is inherently subjective. But, because it is so, officers have a special responsibility to constantly review their conduct and assess the reasons for their acts. And the purpose of the magistrate is to review those especially problematic cases, cases where mistakes may have been made. The contextually sensitive, temporally responsive, and biographically informed practice of reasoning that Warren tried to describe in Terry does not fit the bill of probable cause, if by that term we mean an impartial, universal, and objective standard of proof. But it certainly is not arbitrary either; our inferences and justifications do not have to be formally valid to be rational and thus worthy of assent. I take the central teaching of (to borrow a phrase from Warren) argumentation theory to be just that. I am sure that Warren would be appalled at the direction the Rehnquist Court has taken Terry, as were Brennan and Marshall. I am sure that he would find the authorization of mindless categorical judgments (really no more than stereotypes) and the practice of de novo review (really just a rubber stamp for police conduct) to have so cheapened his account of reasonable suspicion that he too might have wanted to recant Terry. The important question is why his account of reasonable suspicion turned out so badly. As I have been suggesting throughout this essay, I think the problem began by positing probable cause as an unattainable – and, I would argue, never really followed – ideal of rationality that was always juxtaposed to the relativism of the baseless hunch, so that the middle ground of reasonableness was left wide open. As long as the Court held to the requirement that the officer have a reason justifying the search and seizure – and for Rehnquist it seems this can be almost any reason at all – its decisions fell into the middle ground of “sweet reasonableness” and efficacious compromise promised by Terry. The ambiguity offered by the ideal of reasonableness, defined as a sort of epistemic middle ground, has thus masked the real questions posed in Terry: What are the standards of rational inference that we, as a political community founded on the ideals of freedom, equality, and truth, believe to be necessary to justify police conduct in particular situations, and how should we evaluate those inferences? That is, Terry should have been taken as the first step in formulating a justification hierarchy, a hierarchy that could be used as a guide to determine the sorts of reasons, the types of evidence, and the relevance of particular inferences necessary to justify some application of police power (Slobogin, 1998). These are questions of rationality.

To address the second issue – the problem of malicious police conduct – we have to move beyond rationality and turn to a political conception of reasonableness. No matter how good the reasons police have for conducting a search or seizure, some forms of police conduct are simply unreasonable: for instance, the humiliation of being forced to take off your shoes and pull down your pants in public under the pretense of being searched for narcotics; being held for twenty-seven hours without being charged with a crime and being told that your detention will continue until you defecate into a bucket in a room full of police officers and other airport personnel; or being forced out of your vehicle and thrown against the hood of your car, hands down, legs spread, for sitting too long at a stoplight – to use some recent examples of court-approved stops and frisks (Saleem, 1997). What the proponents of a return to the probable cause standard miss is that the quality of the reasons driving the officer’s investigation, that is, the question of why he or she is conducting the search, should not determine the level of dignity, security, or liberty afforded to the suspect. For once we let the answer to the question of why to search – the rational justification underwriting the officer’s conduct – determine the question of how he or she should search, we lose much of our ability to secure persons from malicious police conduct. The question of whether a stop or frisk is reasonable does not, then, turn on epistemic grounds (unfortunately, proponents of racial profiling and coercive police tactics are not always inarticulate and irrational). Rather, to determine reasonableness the court must answer two questions. Does the police conduct involved in the search and seizure contravene the psychic and material conditions necessary for freedom, equality, and dignity – in short, the requirements of full citizenship? And, secondly, is the officer’s conduct proportionate to the gravity of the offense? In other words, has the officer used the least intrusive means available, or at a minimum the least intrusive means reasonably available, to conduct the stop and frisk in a manner needed both to effectively investigate the possibility of criminal activity and to protect his or her safety? In Terry, Warren treated both of these questions carefully: recognizing that the bodily integrity and dignity of Terry and Chilton had indeed been compromised, he argued that the brief detention and surface-level search of their outer clothing for weapons was indeed the least intrusive response reasonably available to Detective McFadden. Unfortunately, Warren used the term reasonable to refer both to the test for proportionality and to the test of the validity of McFadden’s suspicion in deciding to stop Chilton and Terry. In doing this he sacrificed the opportunity to flesh out a standard of reasonableness robust enough to be used as the basis for challenging malicious police conduct, perhaps through administrative actions, injunctions, and civil suits for discrimination. I hope that the arguments I have given here are sufficient to convince you that this mistake is much more than an issue of semantics, or at the very least that this semantic and conceptual confusion has resulted in some rather grave consequences.

While many commentators have cursed the framers of the Constitution for being too vague in their formulation of the Fourth Amendment, I think their solution was brilliant. By writing two grammatically freestanding clauses – a reasonableness clause that defines the parameters for the coercive power of the state from within the right of all persons to be secure from violations of the conditions necessary for personal freedom and dignity (which I understand as the essential meaning of being secure), and a warrant clause that requires that all searches and seizures be justified by reasons that are “manifestly rational” (Johnson, 2000) – and conjoining them rather than separating them by a period, the framers set out in beautiful detail the proper relationship between the rational and the reasonable. They are freestanding ideals, distinct in nature and each demanding its own unique form of justification, yet complementary, each providing an essential check for the other.

REFERENCES
Amar, A.R. (1998). Terry and Fourth Amendment first principles. St. John’s University Law Review, 72, 1097-1131.
Crosswhite, J. (1996). The Rhetoric of Reason: Writing and the Attractions of Argument. Madison: University of Wisconsin Press.
Greenberg, J., Nabrit, J., Meltsner, M., Zarr, M., & Amsterdam, A. (1968). Brief for the N.A.A.C.P. Legal Defense and Educational Fund, Inc., as amicus curiae. In: Kurland, P. & Casper, G. (Eds.), Landmark Briefs and Arguments of the Supreme Court of the United States: Constitutional Law, Volume 66 (pp. 565-645). Washington: University Publications of America.
Harris, D.A. (1994). Factors for reasonable suspicion: When black and poor means stopped and frisked. Indiana Law Journal, 69, 659-688.
Harris, D.A. (1998). Particularized suspicion, categorical judgments: Supreme Court rhetoric versus lower court reality under Terry v. Ohio. St. John’s University Law Review, 72, 975-1023.
Hicks, D. (1999). Public reason and the political character of reasonableness. In van Eemeren, F.H., Grootendorst, R., Blair, J.A., & Willard, C.A. (Eds.), Proceedings of the Fourth International Conference of the International Society for the Study of Argumentation (pp. 340-343). Amsterdam: Sic Sat.
Hicks, D. (in press). Reasonableness: Political not epistemic. In Goodnight, G.T. (Ed.), Proceedings of the Twelfth NCA/AFA Conference on Argumentation, August 2001. Washington: National Communication Association.
Johnson, R. (2000). Manifest Rationality. Mahwah, NJ: Lawrence Erlbaum Associates.
Mapp v. Ohio, 367 U.S. 643 (1961).
Maneli, M. (1994). Perelman’s New Rhetoric as Philosophy and Methodology for the Next Century. Amsterdam: Kluwer Academic Publishers.
McKerrow, R.E. (1982). Rationality and reasonableness in a theory of argument. In Cox, J.R. & Willard, C.A. (Eds.), Advances in Argumentation Theory and Research (pp. 105-122), Carbondale: Southern Illinois University Press.
Perelman, C. (1979). The New Rhetoric and the Humanities. Dordrecht: Reidel.
Saleem, O. (1997). The age of unreason: The impact of reasonableness, increased police force, and colorblindness on Terry stop and frisk. Oklahoma Law Review, 50, 451-493.
Slobogin, C. (1998). Let’s not bury Terry: A call for rejuvenation of the proportionality principle. St. John’s University Law Review, 72, 1053-1095.
Terry v. Ohio, 392 U.S. 1 (1968).
United States v. Brignoni-Ponce, 422 U.S. 873 (1975).
United States v. Cortez, 449 U.S. 411 (1981).
Whren v. United States, 517 U.S. 806 (1996).
Williams, G. H. (1991). The Supreme Court and broken promises: The gradual but continual erosion of Terry v. Ohio. Howard Law Journal, 34, 357-388.




ISSA Proceedings 2002 – Fundamentalism Versus Cosmopolitanism: Argument, Cultural Identity, And Political Violence In The Global Age

In the series of essays to which we add the current paper (Hollihan, Riley, & Klumpp, 1993; Klumpp, Riley, & Hollihan, 1995; Riley, Hollihan, & Klumpp, 1998; Hollihan, Klumpp, & Riley, 1999; Klumpp, Hollihan, & Riley, 2001), we have considered a number of threats to democratic community at the turn of the 21st century, including the erosion of state power, the demise of the mass media, and the development of extremist groups that grow out of the openness of a democracy. None of these, however, represents a threat quite like the attacks of September 11, 2001. Most obviously, the 9-11 attacks involved the use of violence against the United States and the death of three thousand citizens of the world, predominantly Americans. In addition, the 9-11 attacks presented an external threat; our work has highlighted internal problems that threaten democratic communication.

But, in addition to their violent destructiveness, the 9-11 attacks certainly had profound implications for democratic communication. Some of the effects have come in reaction to the threat to life and property. The reaction of the democracies has been, at least in part, to limit democratic rights such as freedom of speech and of the press. All democratic nations are tempted to forfeit democratic rights in the face of threats to security. The United States has been no exception. The White House quickly moved to silence news coverage of the videotape produced by Osama bin Laden’s organization soon after it was released, with a rather transparent warning that it might contain some hidden coded message. The flames of patriotism stoked by President Bush’s polemic declaration of an evil enemy quickly closed debate over the motivations for the intensity of Islamic radicalism. Susan Sontag’s rather mild curiosity about the roots of support for the radicals was met, not with disagreement, but with a barrage of ad hominem accusations, including a questioning of her patriotism[i].

The attacks on democratic discussion are all the stronger because when President Bush declared this an act of “war,” it became the first war of the information age. The attacks were clandestine, a failure of our intelligence gathering, exploitive of information in the public domain. These story lines turned democratic freedom-to-know into the enemy of our security. With no sense of irony, the amount of information available to our citizens was systematically diminished, governmental information withheld from depository libraries, campaigns of disinformation promoted in the military, and a drumbeat of unsubstantiated, frightening threats substituted for a texture of inquiry and proof.

All of these diminutions of our freedom, cultural and statutory, were the reactions of a society under attack. Although they are real threats to democratic communication, they should not blind us to the threats to democratic community posed by those who perpetrated the attacks of 9-11. The movement supporting the attacks represents a new reality in the 21st century world and, we believe, a real threat to democratic values. In this essay, we propose to examine the challenges that the movement supporting the 9-11 attacks poses to democratic communication. We will begin by arguing that the movement is a fundamentalist identity movement. Then we will locate the specific challenge to democratic values represented by this new breed of opponent. And finally, we will identify the alternative to our military initiative: an initiative to foster the cosmopolitan values of a viable democratic politics.

1. A Fundamentalist Identity Movement
The movement known as al Qaeda is at its heart a fundamentalist identity movement. Perhaps its closest counterpart is the Christian Identity Movement, led by Richard Butler, and strongest in the western United States in the 1980s. Both movements employ violence and terror to achieve their ends. Both are religious in basic ways, employing the resources of their religion to hold members and motivate violence. Both reject national governments as corrupt traitors to religious ideals. The size, support, and power of al Qaeda, however, dwarf those of the Christian Identity Movement. Al Qaeda poses an enormous threat because it is a movement tied to the character of our time.

Three characteristics are crucial to understanding the nature of current Islamic fundamentalism. First, the movement is trans-national. Its historical roots may be in the pan-Arabism of the 1950s and 1960s, although pan-Arabism was more closely tied to the attempt to convert Arab unity into a nation-state. Al Qaeda operates largely outside the structure of nation-states. Like modern business organizations, al Qaeda has many sophisticated trans-national characteristics that offer certain operational advantages. It has developed sophisticated information-gathering ability. It has developed advanced methods of obtaining operating capital and is capable of moving its operating funds rapidly through the financial world. It values training and thorough preparation for operations. It recognizes the differing characteristics of various nation-states and is capable of locating training and operational facilities to its advantage (Held, 2002). Although its violent methods set it apart from business corporations, it also finds ways of outsourcing its needs. After all, the planners of the September 11 attacks trained personnel in American flight schools, adapted methods to the security structures of American airlines, and acquired the powerful instruments of American mobility to use as missiles to destroy the financial and military symbols of American global hegemony.

There is, of course, an irony in this trans-nationalism rooted in al Qaeda’s adoption of modern organizational techniques: it exemplifies the problem of “policing” trans-national organizations that operate largely beyond the reach of the modern nation-state. At the same time, when we read the rhetoric of Osama bin Laden, his enemy is also trans-national: the hegemonic secularization that co-opts Islamic states and does the bidding of the Infidel. Of course, the United States’ hegemonic relationship to the world globalization movement identifies it as a primary target of the movement, but the targets of September 11 were in a real sense the financial and military power bases of the globalized world.

It is also ironic, of course, that the Bush administration has chosen to counter this trans-nationalism with a renewed American nationalism. Bush’s rhetorical appeals are to American exceptionalism and patriotism. Although he speaks of a multi-national alliance, his European allies recognize the nationalistic center of his policy. The American response is to attempt to rigidly enforce its security by reimposing tight borders – which runs contrary to the cross-border ethic of multiple alliances and globalization.

The second characteristic that marks the Islamic fundamentalist movement is the use of religious rhetoric and motivation in establishing its identity. It speaks the language of the power of Islam, the duty to Allah, the doing of his bidding, and the promise of religious martyrdom. It reads the Koran as the instructional word of an active God directing human affairs. In identifying its enemy, however, the emphasis is not on alternative religions but on the secular attack on Islam. The contrast is drawn to secularization. It condemns the secular governments of Moslem countries along with the irreligious culture of the West (Hill, 2001). This movement recognizes the same power in religion that is at the rhetorical roots of the Christian Identity movement. Religion is an established rationale of authority. So voluminous are religious texts that when combined with their authority they offer an irresistible source of dogma. Characteristics of particular religious beliefs, such as the existence and conditions of an afterlife, provide solutions to problems of motivation unavailable to secular strategies. Religion lies at the core of the identity of believers. The force of that authority makes a potent rhetorical identity.

But the third characteristic raises the religious identity to a fundamentalism. Fundamentalism is marked by its single-minded commitment to a single source of truth and action, and this movement has that commitment. The movement is monist not pluralist. There is one truth, the truth of Allah, of the Koran, of Islam. There is no tolerance of other opinions or of non-Muslims. It is also totalist: there is a centripetal force that pulls all of life into the perspective. It is not simply religious, but political, cultural, social, and military. It dictates the patterns of personal life as well as life in the society. And finally, these characteristics come together in a fundamentalist identity. It demands single-minded dedication to its commitments. It is incommensurate with other ideas and movements. It demands allegiance.

Fundamentalist identity movements are by their nature anti-democratic. The genius of Madisonian democracy was to recognize the importance of pluralism to democracy (Madison, 1788). A democracy would be composed of many interests, and its citizens would identify with various interests in different combinations. Importantly, the citizen’s central identification would be with the democracy and the democratic process, not with any of the particular interests. Fundamentalism is in tension with democracy because it rejects the notion of a plurality of interests as the driving force of human interaction. Instead it relies on a monism of belief. When this fundamentalism is combined with an identity movement, the result will inevitably displace the basic values of democracy. Thus, the challenge to democracy of a fundamentalist identity movement is profound.

There is a complicating factor in this particular movement, however. Fundamentalist identity movements exploit the possibilities of specific ideologies to turn their adherents into fanatics. Although they share characteristics with other like movements, they are differentiated by their particular ideologies. To understand the appeal of the Islamic fundamentalist movement, we must explore the ideology from which it draws.

2. The Ideological Base of the Islamic Fundamentalist Identity Movement
Available to the Islamic fundamentalist movement is an ideology that has developed over many decades and that has a large Islamic following. The ideology explains historic political and economic conditions in ways that appeal to many non-fundamentalist Muslims at the threshold of the 21st century. Serious economic grievances spark outrage. Global inequality has increased, not lessened, in this latest era of globalization. In 1960, the richest fifth of the world’s population had a total income thirty times greater than the poorest fifth. By 1998, however, this ratio had grown to seventy-four to one (Ferguson, 2001). The economic disparities are keenly felt in the Middle East, where regimes are deeply dependent upon oil revenues. Oil revenues have dropped from their peak of about $225 billion in 1980 to approximately $55 billion today. These decreases have had profound effects in the Middle East. These oil revenues are the most important source of governmental income supporting the social welfare system. Just as a rising tide of oil revenue lifted all boats, an ebbing tide left economic distress. There are few opportunities for employment in much of the region. Indeed, were it not for oil, the Middle East would rank lower than Africa in economic development (Hill, 2001). At the same time that oil revenues and government incomes are shrinking, birthrates in the region are soaring. The population is becoming younger, more literate, and, as a result of exposure to the mass media, better informed about the conditions and lifestyles beyond their borders. This in turn has left them feeling more frustrated because they have been denied many of the pleasures that they see around them. Dramatic population migrations have brought people from small villages to urban centers, where they often find themselves living in teeming slums, nagged by the problems of unemployment, widespread graft and corruption, inefficient bureaucracies, and severe environmental and health problems (Amanat, 2001). Still others have joined the exodus from the Middle East and Asia to the cities of Europe and North America in hopes of better opportunity, but in many cases they have found themselves instead exiled to overcrowded ghettoes, consigned to menial jobs.

Despite the severity of these economic conditions in much of the Muslim world, economic deprivation alone cannot account for the development of the terrorist networks. Most of the terrorists who hijacked and steered those airplanes into occupied buildings were not uneducated, uninformed, impoverished rural people who were completely ignorant about the West or who knew the outside world only through the descriptions of their Mullahs. Most were instead well-educated, middle- or upper-class Arabs. Many had lived for a time in the West and thus were familiar with the values, culture, and political systems that they were attacking. They were said, for example, to have consumed alcohol, watched a lot of American television, played video games, and even frequented topless bars (Amanat, 2001). The terrorists were thus not all unemployable victims of the new global economy. Most of them held university degrees and had demonstrated that they could find and hold highly skilled jobs. For example, Mohammad Atta, who flew the airplane into the North Tower of the World Trade Center, was the well-educated and well-traveled son of an affluent Cairo attorney (Hill, 2001). What motivated them was not economic deprivation but their all-consuming ideology (see Kuran, 2002). So who were these terrorists, and what motivated their hatred?

In Muslim nations in the Middle East and in Asia, the daily prayers, Friday sermons, and Koran study groups are all places to ritualize and express identity. But increasingly, this identity is also expressed through street demonstrations and the circulation of pamphlets bearing anti-establishment, anti-secular, anti-American, and anti-Zionist messages (Amanat, 2001). For Muslims living in Europe and America, the connections between the religious community and the political ideological community may be even more significant. In many European cities, for example, the members of this diaspora are very much treated as outsiders – they are cast as “the other” and exiled to neighborhoods where they are encouraged to live among their own. Although adherence to radicalized Islamist beliefs is certainly the exception rather than the norm in these communities, evidence suggests that these communities may be a fruitful breeding ground for the development of such sentiments.

The terrorists on the planes, and those who make up the network that is at war with America, are not so much unified by their Muslim faith as by their Islamist political philosophy. As such they are committed to a radical global transformation. Kuran (2002) notes:
Islamists believe that to be a good Muslim is to lead an “Islamic way of life.” In principle, every facet of one’s existence must be governed by Islamic rules and regulations – marriage, family, dress, politics, economics and much more. In every domain of life, they believe a clear demarcation exists between “Islamic” and un-Islamic behaviors. Never mind that in all but a few ritualistic matters the Islamists themselves disagree on what Islam prescribes. They have been educated to dismiss their disagreements as minor and to expect a bit more study of God’s commandments to produce a consensus about the properly Islamic way to live. (pp. 1-2).

Adherents of this philosophy also believe that the march of history supports their views. They believe that communism and capitalism are destined to fail because they breed injustice, inequity, and inefficiency. The fall of the Soviet Union is viewed as evidence to support their claim, for they believe that just as communism collapsed once people discovered that the tyrants could not hold onto their power through force alone, so too capitalism will ultimately fail because it “breeds emptiness, dissatisfaction, and despair even among the materially successful” (Kuran, 2002, p. 2).

The Islamists propose an Islamic economic system, the key elements of which would entail a banking system that avoids charging interest, an Islamic redistribution system based on the principles of the Koran, and a set of norms to insure fairness and honesty in the marketplace. Kuran (2002) observed that government-supported “economic Islamization” projects undertaken in Sudan, Pakistan, and Iran have all been failures. The Islamists’ argument is that these projects failed only due to the corruption caused by “Westernization, which masquerades as globalization and whose chief instruments are the military, cultural, and economic powers of the United States” (p. 2).

The conviction that Islam might offer the world an economic system that can outperform alternatives emerged in the 1930s in India when some Muslim leaders proposed that to be a Muslim was to live differently from both Hindus and Westerners. They then undertook to show that Islam offered prescriptions for conduct in all domains of life. Concepts such as Islamic economic theory and Islamic banking were developed and supported by clerics seeking to enhance their authority. Muslim governments supported these efforts in order to demonstrate their own religious commitment and conviction and to stay in power. The Saudis, for example, have given financial support to Islamic universities in many nations and have sponsored conferences on the Islamization of knowledge. The Saudis also created institutes to train Islamic bankers (Kuran, 2002). In addition, the Saudis funded the development of conservative religious schools throughout the Muslim world, which helped to spread the Islamist political ideology along with the religious lessons. Most of the terrorists on the four hijacked airplanes were the product of this Islamic educational system.

A profoundly important element of this Islamist philosophy is that it has served as a means to unite a diverse and dispersed Muslim community by creating a powerful source of identity and belonging. The membership of this Islamist community transcends nation-states and cultures. It is composed of Saudis, Egyptians, Libyans, Iranians, Lebanese, Pakistanis, Bangladeshis, Iraqis, Moroccans, Algerians, Indonesians, Malaysians, and, yes, Americans, who may live in the Middle East, Asia, North America, or Europe. Indeed, the sources of identity have been de-territorialized, and “the rhetoric of mobilization recentralizes, in a non-territorial way, identities that have become fragmented within the nation-state context” (Kastoryano, 2002, p. 1). The participants in this network are often highly assimilated both socially and economically in their new places of residence, while simultaneously keeping close contact and maintaining a strong sense of identity with their home country, and with a network of ideological compatriots with whom they identify and on whose behalf they may act (Kastoryano, 2002).

That mosques, community organizations, and language schools have become central nodes in this network of Islamist ideology should be expected since these are the natural places where these people come together to discover fellowship and to form social contacts. Many of the followers of this more radicalized form of Islamist philosophy are thus followers of a very conservative view of Islam. They are deeply opposed to an active role for women in terms of educational and professional life. In many cases, even in European cities they urge women to wear veils and to attend single-sex schools if they are to be educated at all. They are also strongly opposed to Western music, the arts, and entertainment. They are obsessed with a fear that the purity of Islam will be undermined by contact and influence from other religions. They are increasingly anti-Jewish and anti-Christian because they fear that these faiths are united and seek to destroy “true Islam” (Roy, 2002, p. 3).

It is within this Islamist philosophy that the fundamentalist identity movement that supported the 9-11 attacks has grown. Characteristic of such movements is “the ingrained human habit of identifying oneself in terms of the group; of viewing one’s own in-group as somehow ‘special’ and superior to others; and of discouraging social intercourse (or any other type of intercourse) with members of the ‘out-group’” (Hutcheon, 2001, p. 1). Identity may exploit a common oral tradition, ethnic identification, or a set of sacred beliefs that identify the group’s members as uniquely gifted or chosen by history or by gods. The key to this identity lies in the sense of security that is provided by belonging. Unfortunately, history has demonstrated that the more intensely people may come to feel that they belong to their own group the more hostile they may become to outsiders. The feelings of identity among those who adhere to radical Islamist viewpoints may express their dissatisfaction both with the direction and with the rapid pace of social change in the era of globalization.

The complexity and closeness of the contemporary age make tolerance for fundamentalism, and particularly for fundamentalist identity movements, difficult, to say nothing of the problems posed by belief systems that emphasize the importance of excluding infidels. We must therefore seek strategies that focus on argumentative premises and shared values that will penetrate the Islamist philosophy. This will be difficult given the understandable appeal of identity politics, and the rich broth of economic and political despair within which it grows. But identities are not handed to us intact at the moment of birth. They are constructed through education and socialization, through exposure to the mass media, and through participation in social and communal rituals. Thus there are possibilities for counteraction. We believe in the inherent strength of a democratic cosmopolitanism, combined with an active political sphere, to undermine the broad support of the Islamic fundamentalist identity movement.

3. Encouraging Global Cosmopolitanism
The core underlying principle of the cosmopolitan view is the conviction that “human well-being is not defined by geographical and cultural locations, that national or ethnic or gendered boundaries should not determine the limits of rights or responsibilities for the satisfaction of basic human needs, and that all human beings require equal moral respect and concern” (Held, 2002, pp. 11-12). These views represent the triumph of a humanist philosophy that emphasizes the values of individuals across the entire lifespan, combined with concern for an integrated society in harmony with its environment (Hutcheon, 2001). In politics these principles are neither new nor arbitrary. They are instead the fruit of human progress from the time of the Enlightenment forward. They have been applied to relationships between nations and cultures since at least the end of World War II, and were affirmed as key principles in the 1948 Universal Declaration of Human Rights (Held, 2002). What has been lacking is not the expression of principle to guide us, but the institutions – political, legal, financial, and moral – to move us forward from the promise to the material reality of a true cosmopolitan vision.

Scholars of argumentation and human communication need to establish a major role at this point in our historical development, for ours is a discipline that recognizes that the institutional foundations of a cosmopolitan civil society cannot merely be declared or imported from one society to another. Instead they must emerge through deliberation and open dissent. This entails the commitment to facilitate “an open discourse in which substantive conclusions are not predetermined, but are uncovered in the process of argumentation itself” (Hanson, cited in Ivie, 1996, p. 4). Unfortunately, a climate of public deliberation and dissent is lacking in the Middle East. Most of the governments of the region are not democratic and are profoundly closed to the possibility of the formation of a dissenting public. What should alarm us even more than the democratic deficit in the Islamic nations, however, is the damage that the current war rhetoric may pose for the health of democracy in the West.

In an attempt to allay public fears and to provide a sense of security, policy makers have emphasized the importance of protecting their national borders, securing all airports, profiling potential terrorists, expanding the rights to eavesdrop on electronic conversations, and adopting new forms of scientific surveillance technology that will permit them to recognize wanted terrorists. In the United States this has also led to an “M & M” color-coded risk analysis homeland security system, the appointment of a homeland security czar, public acceptance of long lines of weary travelers in airports, and arbitrary security searches of eighty-year-old grandmothers waiting to board airplanes with their grandchildren. Aside from the quite obvious risk that such a security apparatus may indeed undermine the democratic freedoms which underlie our political system, these measures serve as a mystification because they likely give people the impression that things are somehow safer now than they were before, even though any new terrorist attacks will likely take some other form. The experience of Israel with the recent suicide attacks has shown that security measures and revenge-motivated violence are largely ineffective against terrorist attacks.

A cosmopolitan view of argument would suggest that rather than focusing on an imaginary Maginot Line against terrorist aggression we should instead focus on activities that will enable individuals and groups with different cultural and value systems to learn how to coexist despite their diversity (Bigo, 2002). Again, we are not advocating that governments should ignore security or policing concerns, only arguing that a focus on these policies alone will never break the cycle of terrorist violence. Such a focus on national security may also diminish the likelihood that we can continue to progress toward a truly global rule of law and cosmopolitan democratic governance. As Jayasuriya (2002) observed: “The most serious danger these events pose is their potential to usher in under the appealing cloak of ‘security’ a debilitating form of ‘anti-politics’ that marginalizes the constructive conflicts – the debate and discussion – that animate the public sphere in liberal polities” (p. 1). We have, of course, already seen evidence of this in the United States, where even members of Congress have been deemed somehow “unpatriotic” because they were so bold as to question the Bush administration’s handling of the war on terror (Bush dismisses, 2002).

4. Politics is Communication
We propose instead a focused effort to increase cosmopolitanism, an initiative to enrich democratic possibility. Politics is formed through conversation. A political rather than a military response to the terrorist crisis will depend on our ability to create deliberative activities that engage global audiences and that expose the dangers and the limitations of fundamentalist identity movements and ideologies of exclusion.

Such conversations must recognize as a starting point that we may never succeed in persuading the terrorists. Fanatically committed to a fundamentalist identity movement, they hold views that are incommensurate with democracy. This is why we are not pacifist about the movement. The terrorists may have to be treated as a criminal class, although we would argue that they should be accorded the full rights of a democratic political system and not exiled to an illegitimate corner of Cuba without proper trials. They are, however, a movement, and just as important as the military actions to undermine their power is the rhetorical confrontation for the hearts and minds of those susceptible to their message. The audience for political arguments should, however, be the world’s citizenry at large, for the terrorist networks will find it much more difficult to prosper if they are denied the support of ordinary citizens – including those who are often referred to in the press as the “Arab street.”

Our search for conversational politics should involve attempts to identify a set of common problems and premises as starting points for argumentative engagement. Differences might be overcome as people discover their common concerns. The first and most obvious are arguments that address human welfare – concerns about health, safety, and individual sustainability. The second involves the material reality of the global financial system and the role of trans-national governments and institutions in the creation of sustainable macro-economic conditions in the Islamic regions of the world. The third are arguments surrounding the issues of Western global hegemony, and particularly American cultural exports. The fourth area recognizes contested spaces of legitimacy – of policies, territories and military engagement, and the protection of the environment. At least one issue that will have to be overcome is the understandable skepticism that people in developing nations have about our concern for their well-being. Certainly this process will be time consuming and difficult. We will uncover points of difference that seem beyond accommodation or agreement, but it is in the very process of discovery through engaged civic arguments that deliberative democratic institutions are both institutionalized within political systems and internalized within citizens.

The arguments should also attempt to confront the assumptions of the Islamists’ viewpoints about the unique character and contributions of Islam to economic theory. If Islamic economics has something to contribute to economic conditions and to the welfare of the region, it should be evaluated and revealed in open deliberative conversations. Most Western economists are convinced that although there are elements of Islamist economic theory that are important to today’s complex global economy – for example, concern for honesty, fairness, and trust in the marketplace – the theory does not and cannot provide a viable alternative to contemporary banking and commerce (Kuran, 2002). These issues are open to deliberation and debate, and the claims are subject to falsification and refutation. These are the kinds of arguments in which even people of alternative religious commitments and passions may find premises upon which they can agree, opening up an avenue of deliberation[ii]. Furthermore, such conversations are significant, for they may finally open up to global discussion in a serious way the fundamental economic inequalities that are unfortunately the product of globalization. Therefore, arguing about the failure of Islamic economic theories to “deliver the goods” to the citizenry in those nations that have experimented with such an approach should also entail a similar challenge to the proponents of Western capitalism to demonstrate that their free markets can do a better job of addressing economic inequalities in the developing world. What is best understood in argumentation theory is that the willingness to engage in arguments implies the possibility that you will be proven wrong in your own beliefs and assumptions[iii].

The democratic regimes of the West – especially the United States and the nations of the European Union – must also use their influence to actively create the possibility for democratic participation in the developing world. Certainly millions of dollars have been devoted to the promotion of civil society projects by governments and by private foundations. Unfortunately, little seems to have been achieved with most of these programs, primarily because realpolitik has been permitted to triumph over meaningful social and political change[iv]. A secure oil pipeline and a stable tyrant have more often seemed to serve the interests of Western powers than has the uncertainty and risk entailed by the formation of a genuinely vibrant political democracy. It is worth noting, for example, that Iran may currently be closer to a democratic state – it at least has a democratically elected parliament – than is our close ally Egypt. The absence of forums for political conversation and the restrictions on the press have no doubt helped the mullahs to control the setting, shape, and tenor of what passes for oral argument in much of the region. The dominance of the Al Jazeera broadcast system is similarly limiting (Richey, 2001).

We must also recognize that the development of democratic institutions cannot merely be provided to others as a “gift” from the more enlightened and advanced nations of the west. “It [democracy] must be seized by them because they refuse to live without liberty and they insist on justice for all” (Barber, 1995, p. 279). The United States and other democratic nations can, however, help to prepare the citizens of these nations for democracy by working to establish the foundations for both civil society and a civic culture of deliberative discourse. At least one essential first step is that we pressure our governments to no longer climb into bed with tyrants and dictators only because they promise stability and/or access to raw materials or markets that we seek.

There is an obvious circularity to the arguments that we have advanced. For as Barber (1995) observed:
Strong democracy needs citizens; citizens need civil society; civil society requires a form of association not bound by identity politics; that form of association is democracy. Or: global democracy needs confederalism, a noncompulsory form of association rooted in friendship and mutual interests; confederalism depends on member states that are well rooted in civil society, and on citizens for whom the other is not synonymous with the enemy; civil society and citizenship are products of a democratic way of life. (p. 291)

Barber also noted that “until democracy becomes the aim and the end of those wrestling with the terrors of Jihad and the insufficiencies of McWorld, there is little chance that we can even embark on the long journey of imagination that takes women and men from elementary animal being (the thinness of economics) to cooperative human living (the robustness of strong democracy)” (p. 291).

The civic conversations that may lead to the democratization of the developing world should not, of course, be confined to those regions. Democracy is not necessarily prospering in the United States either – witness the declining rates of political participation, the emphasis on negative “attack-style” politics, and the domination of campaigns by the interests of big contributors and lobbyists (Hollihan, 2001). Europeans are similarly beset as their European Union – which offers the promise of multinational government – is hampered by a profound dearth of opportunities for direct civic engagement[v]. Yet, in an era of globalization it is vital that all people are engaged in such deliberations. Classrooms, churches, academic conferences, and other public halls need to become places where people come together to engage in conversations that lead to cosmopolitan worldviews. In this era of globalization we have a heightened awareness of the political power that can be leveraged by the networks of transnational elite and professional cultures through the development of transnational political lobbies and alliances (Held, McGrew, Goldblatt & Perraton, 1999). We now understand the power of new social networks to contribute to identity formation and political participation in ways that permit people to influence the policies of their own nation and others and of corporations and NGOs as well. Through the formation of such new networks people can come to identify their shared interests and commitments and can challenge their traditional ways of knowing. This is not a blindly optimistic declaration of how the Internet can save democracy. It is, however, recognition that these new public spaces can reinvigorate democratic connections and motivate citizens to individually and collectively act to enrich their own democratic spaces (Hollihan & Riley, 2000; Hollihan & Riley, 2001).

Academics and policy makers alike need to rethink the principles of multiculturalism. The benefits of “interculturalism” – the recognition that all cultures have attributes to be appreciated and valued – should be embraced. But the notion that every culture is of equal worth – with equal rights to be protected and preserved intact within a global society – should be rejected (an argument also advanced by Kuran, 2002). There are certain cultural practices that fail to live up to the cosmopolitan ideals of protecting individuals and societies. Such cultural forms need to be intellectually rejected and their consequences revealed and condemned in public forums (Hutcheon, 2001).

As scholars and critics of public argumentation our voices need to be heard as we use our classrooms, our publications, and our social and political influences to expand the reach of cosmopolitan arguments. Ours is a discipline that emphasizes the promise and possibilities of human reason and dialogue. Over time, these principles of reason will be more effective weapons against the tyranny of terror than will military actions or the new isolation of security. We must use the opportunities that are afforded to us to speak, and create platforms, both material and virtual, for conjoined discourse that explicitly calls for social, political, and economic justice. The plight of the displaced Palestinians, the ravages of world poverty, the lack of access to educational opportunities and health care, and the culture of fear, violence, desperation, hatred, and suicide that dominates in the Middle East should capture our attention and be a part of our own civic conversations. Finding the courage and the will to exercise our voices is the first step in our own commitment to a cosmopolitan politics.

NOTES
[i] Sontag’s (“Talk,” 2001) statement first appeared in the New Yorker and was met immediately with vitriolic condemnation. She later wrote: “These rather banal observations won me responses that, in a lifetime of taking public positions, I’ve never experienced. They included death threats, calls for my being stripped of my citizenship and deported, indignation that I was not ‘censored.’ In newspapers and magazines I was labeled a ‘traitor’” (Open Society Institute, 2001). Representative of the milder responses was Miller and Ponnuru (2001) in National Review’s on-line edition.
[ii] In an earlier paper we explicitly discussed the problems inherent in arguing across cultures in a global age. In that work we suggest the need for a new “economy of argument” – a vocabulary that helps locate the shared and divergent qualities of material facts and conditions. See: Klumpp, Hollihan, & Riley (2001).
[iii] Since it is by now known that Saudi Arabia and other nations have actively funded universities and institutes designed to teach and research Islamic economics, the West should respond with generous educational grants to Middle Eastern and other universities for research and comparative study into a wide range of economic models. Such research might also lead to better understanding of why income disparities have grown at exponential rates in the United States as a product of globalization, and why they are now growing as well in nations such as the People’s Republic of China as it embraces capitalism (Smith, 2002).
[iv] For a very interesting analysis of the challenges facing civil society development projects and the implications for argumentation scholars, see Cheshier (2001).
[v] Held, McGrew, Goldblatt and Perraton (1999, p. 375) report, for example, that 97 percent of Europeans claim never to have had any direct contact with the EU or any of its various institutions or events.

REFERENCES
Amanat, A. (2001). Empowered through violence: The reinventing of Islamic extremism. In S. Talbott & N. Chanda (Eds.), The age of terror: America and the world after September 11 (pp. 23-52). New York: Basic Books.
Barber, B.R. (1995; rpt. 1996). Jihad vs. McWorld: How globalism and tribalism are reshaping the world. New York: Ballantine Books.
Bigo, D. (2002). To reassure, and protect, after September 11. Social Science Research Council/After September 11. Retrieved June 17, 2002, from http://www.ssrc.org/sept11/essays/bigo_text_only.htm.
Bush dismisses Daschle remarks on war. (2002, February 20). Washington Times. Retrieved June 2002 from http://www.washtimes.com/national/20020302.84156646.htm.
Cheshier, D. (2001). Virtual democracies: American democracy promotion and its implications for legitimated public argument. In G.T. Goodnight (Ed.), Arguing communication and culture (Vol. 2, pp. 561-570). Washington: National Communication Association.
Ferguson, N. (2001). Clashing civilizations or mad mullahs: The United States between informal and formal empire. In S. Talbott & N. Chanda (Eds.), The age of terror: America and the world after September 11 (pp. 113-141). New York: Basic Books.
Held, D. (2002). Violence, law and justice in a global age. Social Science Research Council. Retrieved May 2002 from http://www.ssrc.org/sep11/essays/held_text_only.htm.
Held, D., McGrew, A., Goldblatt, D., & Perraton, J. (1999). Global transformations: Politics, economics and culture. Stanford: Stanford University Press.
Hill, C. (2001). A Herculean task: The myth and reality of Arab terrorism. In S. Talbott & N. Chanda (Eds.), The age of terror: America and the world after September 11 (pp. 80-111). New York: Basic Books.
Hollihan, T.A. (2001). Uncivil wars: Political campaigns in a media age. New York: St. Martin’s Press.
Hollihan, T.A., Klumpp, J.F., & Riley, P. (1999). Public argument in the post-mass media age. In F.H. van Eemeren, R. Grootendorst, J.A. Blair, & C.A. Willard (Eds.), Proceedings of the fourth international conference of the international society for the study of argumentation (pp. 365-371). Amsterdam: Sic-Sat.
Hollihan, T.A. & Riley, P. (2000). Argument and the implications of new communication technology. Paper presented at the Wake Forest University International Conference on Argument, Venice, Italy.
Hollihan, T.A. & Riley, P. (2001). Virtual networking to save the environment. Paper presented at the Internet Political Economy Forum, National University of Singapore.
Hollihan, T.A., Riley, P., & Klumpp, J.F. (1993). Greed versus hope, self-interest versus community: Reinventing argumentative praxis in post-free marketplace America. In R.E. McKerrow (Ed.), Argument and the postmodern challenge: Proceedings of the eighth SCA/AFA conference on argumentation (pp. 332-339). Annandale VA: Speech Communication Association.
Huntington, S. (1996). The clash of civilizations and the remaking of the world order. New York: Simon and Schuster.
Hutcheon, P.D. (2001). Can humanism stem the rising tide of tribalism? Paper presented at the Humanist Association of Canada Convention, Winnipeg, Manitoba. Retrieved June 17, 2002, from http://www.humanists.net/pdhutcheon/tribalism.htm.
Ivie, R. L. (1974). Presidential motives for war. Quarterly Journal of Speech, 60, 337-345.
Ivie, R. L. (1980). Images of savagery in American justifications for war. Communication Monographs, 47, 279-294.
Ivie, R.L. (1996). The democratic imagination in a republic of fear: A response to Stephen E. Lucas on America’s rhetorical imagination. Paper presented at the Fifth Biennial Public Address Conference, University of Illinois. Retrieved June 17, 2002, from http://www.indiana.edu/~ivieweb/lucas.htm.
Ivie, R.L. (2002, April 17). Terrorism at democracy’s frontier. Paper presented at Manchester College. Retrieved May 2002, from http://www.indiana.edu/~ivieweb.frontier.htm.
Jayasuriya, K. (2002). 9/11 and the new ‘anti-politics’ of ‘security’. Social Science Research Council/After Sept. 11. Retrieved June 17, 2002, from http://www.ssrc.org/sept11/essays/jayasuriya_text_only.htm.
Kastoryano, R. (2002). The reach of transnationalism. Social Science Research Council/After September 11. Retrieved June 17, 2002, from http://www.ssrc.org/sept11/essays/kastoryano_text_only.htm.
Klumpp, J.F., Hollihan, T.A., & Riley, P. (2001). Globalizing argument theory. In G.T. Goodnight (Ed.), Arguing communication and culture (Vol. 2, pp. 578-586). Washington: National Communication Association.
Klumpp, J.F., Riley, P., & Hollihan, T.A. (1995). Argument in the post-political age: Emerging sites for a democratic lifeworld. In F.H. van Eemeren, R. Grootendorst, J.A. Blair, & C.A. Willard (Eds.), Special fields and cases (Vol. 4 of the Proceedings of the Third ISSA Conference on Argumentation, pp. 318-328). Amsterdam: Sic Sat.
Kuran, T. (2002). The religious undercurrents of Muslim economic grievances. Social Science Research Council/After September 11. Retrieved June 17, 2002, from http://www.ssrc.org/sept11/essays/kuran_text_only.htm.
Madison, J. (1788). The Federalist No. 10.
Miller, J.J., & Ponnuru, R. (2001, September 20). The hateful American. National Review Online. Retrieved June 16, 2002, from http://www.nationalreview.com/daily/nprint092001.html.
Open Society Institute. (2001). New Challenges to Open Society. Retrieved June 15, 2002, from http://www.soros.org/911/911_forum_transcript.htm.
Richey, W. (2001, October 15). Arab TV network plays key, disputed role in Afghan war. Christian Science Monitor. Retrieved June 17, 2002, from http://www.csmonitor.com/2001/1015/p1s3-wosc.html.
Riley, P., Hollihan, T.A., & Klumpp, J.F. (1998). The dark side of community and democracy: Militias, patriots, and angry white guys. In J.F. Klumpp (Ed.), Argument in a time of change: Definitions, theories, and critiques (pp. 202-207). Annandale VA: National Communication Association.
Roy, O. (2002). Neo-fundamentalism. Social Science Research Council/After September 11. Retrieved June 17, 2002, from http://www.ssrc.org/sept11/essays/roy_text_only.htm.
Smith, C.S. (2002, May 15). For China’s wealthy, all but fruited plain. New York Times, p. A1.
Talbott, S. & Chanda, N. (2001). The age of terror: America and the world after September 11. New York: Basic Books.
The Talk of the Town. (2001, 24 September). New Yorker. p. 32.




ISSA Proceedings 2002 – Arguing For A Cause: President Bush And The Comic Frame

1. Introduction
On the morning of September 11, 2001, a drama unfolded. It began in the air and ended in flames. Over the course of the day, planes would crash into buildings, individuals would be emotionally and physically injured, thousands would die, and a national symbol would collapse. This drama would become the single worst act of terrorism to occur on American soil and one of the worst episodes of violence in history.
On September 20, 2001, President George W. Bush responded to the terrorist attacks of September 11. In a speech delivered to a joint session of Congress, Bush argued a position and spelled out a plan that would begin a new social movement involving not only the United States but also an international assembly.

The following analysis will first explore the rhetorical situation through the lens of Burke in an attempt to discover why and how this text was dramatized. Additionally, Bush’s motivational apparatus will be analyzed through a Dramatistic perspective by utilizing the constructs of the comic frame and examining the associational/dissociational clusters used by Bush. Exploration of this text through the lens of the comic frame reveals that Bush reaffirmed the social hierarchy and ultimately gained support for a “War on Terror” through civil disobedience and public liability. Recognition of the associational/dissociational clusters shows how Bush used symbols to create identification among a national and international audience. Furthermore, the clusters illustrate how Bush named a vague enemy and christened this enemy a clown in order to maintain, rather than eliminate, this enemy’s role in society.

2. A President Challenged
In the days between the attacks and Bush’s address to Congress, millions watched and listened as Bush’s rhetorical technique began to change. Having previously shied away from venues that called for an impromptu response, Bush not only began offering personal opinions, but also seemed comfortable in doing so. His rhetoric shifted from guarded to colorful and full of Wild West colloquialisms as he pronounced that he wanted Osama bin Laden “dead or alive” and that he would “smoke them out” (Bumiller & Bruni, 2001).
Rather than curb Bush’s word choice, speechwriters and White House officials decided to utilize this “down home” image to reconstruct the fractured American mythos of invincibility. It is this same rhetorical structure that was applied to the discourse presented to the world on September 20. In addition to being conscious of word choice, Bush was also mindful of his choice of venue (Max, 2001). Choosing to speak in front of a joint session of Congress would provide an air of authority and stability.

3. A Response
Understanding of this text is important for four main reasons. First, nine months have passed and the impact of the terrorist attacks is still not completely known. Thousands of people witnessed these events firsthand and millions of people watched the drama unfold over the mass media. With a death toll that surpassed the number of people killed at Pearl Harbor, millions of people have been forced to question the American myth of invincibility.
Second, the audience of this text was vast. Along with the majority of the United States, millions of people worldwide witnessed the delivery of this text. Heads of State either attended, witnessed, or specifically addressed this text immediately after its delivery. More importantly, since President Bush argued the need for unwavering global support and the possibility of an international military response, it was imperative that this text be persuasive on a multinational scale.
Third, the rhetor was under pressure to deliver an effective and multi-layered response. After all, “in a time of crisis, words are key to the presidency” (Max, 2001, 33). Bush needed to console the friends and family members of those lost in the attacks. He needed to comfort fearful Americans while also warning them that future attacks were not unlikely. He needed to rally an international audience and publicly name supporters. He also needed to label an enemy. In addition to these exigencies, Bush needed to prove himself not only a capable rhetorician but also an effective leader in a time of crisis.
Not only should this text be examined, it should be investigated from a methodological standpoint that evaluates the effectiveness of the arguer while simultaneously exploring the shape of the social movement. A dramatistic perspective recognizes these aspects as it views the social movement as a drama. Consequently, because of the timeliness of the text’s topic, the impact of the social movement and the effectiveness of a sitting leader can be studied as they unfold.

4. A Dramatistic Perspective
When Kenneth Burke introduced his concept of Dramatism, he theorized that all life is a drama and that the need for drama is so innate that it is comparable to the need for food and shelter (Burke, 1969). Burke explains that drama is so fundamental that withholding its magic and mysticism is ultimately a denial of resources that a person needs in order to cope with intense moments. Furthermore, it is the examination of rhetoric that truly exploits the dramatic elements of a situation. It is this exploitation, this unearthing, that reveals the true motivation behind a text.

Burke explains that moments of intense drama often motivate people to “unhinge.” As a result, a person’s motivation for behavior can be found through examination of text. More specifically, the description of associational/dissociational clusters examines how Bush used symbols to communicate his message. This aspect of Dramatism addresses the patterned relationships in a text. It is the arrangement of these terms that allows a text’s plot to unfold and defines the players. Examination of these clusters defines who is good, who is not, and what the future holds for each (Burke, 1969). In order to understand the choice and impact of these terms, this methodology also allows for examination of the rhetor’s frame of reference. Utilizing the comic frame as a perspective provides insight into Bush’s treatment of the social system. Additionally, discovery of Bush’s motives in using these terms reveals how he used identification in an attempt to gain adherence from an international audience.
While there are still many unanswered questions surrounding the events of September 11, a critical analysis of the major argumentative response to these events is not only warranted, but imperative if scholars are to continue to understand the far reaching impacts of public discourse.

5. The Direction of Movement
Over the last several years, scholars have studied the impact of public discourse on the effectiveness of social movements. Critical analysis has not only shed light on the techniques used to motivate groups of people, but has also helped to evaluate the effectiveness of leaders. Furthermore, many scholars have found application of the comic frame useful when attempting to understand the nature of a movement.
Carlson explains that “frames are the symbolic structures by which human beings impose order upon their personal and social experiences” (Carlson, 1986, 447). In other words, a frame of reference will help to unearth a rhetor’s understanding of an event and how she or he has decided upon a specific course of action. Furthermore, frames are decisive. They take sides. For Burke, understanding a rhetor’s frame of reference is essential because it is from such frames that “we derive our vocabularies for the charting of human motive” (Burke, 1937, 92).
The comic frame specifically addresses the formation of social movements by illuminating the contradiction between the public and the private. Burke explains that “a social organization is also public property, and can be privately appropriated” (Burke, 1937, 168). As a result, what is good for the whole is not always good for the parts.
Griffin (2000) takes this a step further to explore the influence of autobiographical elements of the rhetor on text. Not only does the rhetor use the text to define him or herself, but uses these traits to gain adherence with the audience. This is exemplified by the structure of the text as Bush utilizes a series of questions: Who attacked our country? Why do they hate us? What is expected of us? These questions not only voice the concern of Americans, but of their leader as well. In the text immediately preceding these questions, Bush clearly defines himself as an American and begins to use “I” and “we” as synonyms for “Americans.” Consequently, Bush begins the process of identification with the American people.

In addition to the process of identification, Bush also uses this series of questions as a means of presenting direction. The Burkean concept of directional substance illustrates how these questions begin to make a distinction between what a person wants to do and what a person thinks that she or he should do. Burke explains that while “one may freely answer a call, yet the call could be so imperious that one could not ignore it without disaster” (Burke, 1969, 32). When individuals begin to act based on this concept, Burke explains that “we get movement as motive” (1969, 32).
When discussing the concept of directional substance, it is important to point out that it deals strictly with the future, with guiding the actions of a movement. Directional substance is clearly defined as Bush asks, “What is expected of us?” As individuals choose whether or not to follow Bush’s call to “uphold the values of America and remember why so many have come here,” they are forced to identify with Bush’s movement for fear that they may go against the values upon which they base their lives. Furthermore, use of directional substance in conjunction with associational/dissociational clusters enables Bush not only to define a movement, but also to present and reaffirm a social hierarchy.

6. Villain or Clown?
As previously mentioned, Bush was in need of defining an enemy. However, it is important to first examine the differences between villain and clown, and between comedy and humor. Carlson points out that, while the two are often associated with one another, “not all humor is comedy” (Carlson, 1988, 310). The Burkean sense of comedy is that which “reduces social tension and adds balance to our world view.” Within the comic frame, Burke communicates a sense of hope, a renewal of the social structure. Moreover, it takes on a “charitable attitude toward people that is required for purposes of persuasion and co-operation” (Burke, 1937, 166). It is this “charitable attitude,” in combination with the need to reaffirm the social order, that illustrates Bush’s labeling of the enemy as a clown rather than as a villain.

Within the confines of the comic frame, Burke distinguishes between the villain and the clown. On one level, the villain is evil. At first glance, it may appear that Bush is clearly defining the enemy as evil. Closer investigation reveals that Bush is implying that those who are labeled as Terrorists in the Americans/Terrorists cluster are merely mistaken. They have been “debunked” into their choices. Bush explains that the “terrorists practice a fringe form of Islamic extremism” and that the “terrorists’ directive commands them to kill.” Phrases such as these imply that these “terrorists” are not truly evil; a “directive” that forces them to make evil choices has misguided them. What they do may be considered evil, but the people themselves are merely led astray.
It is important for Bush to make this distinction for two key reasons. First, it allows the renewal of faith in mankind. If Bush can find a way to rid the world of this “directive force,” then perhaps he can put an end to the terrorists’ behaviors. Second, this distinction promotes the myth that Americans are in the moral right and subsequently inherently possess the ability to show savages the error of their ways. “Whether we bring our enemies to justice, or justice to our enemies,” promises Bush, “justice will be done.”
The comic frame of reference also enables Bush to define the enemy ambiguously. Placing the fundamental blame on this “directive” allows Bush to refer to an all-encompassing enemy in different ways. For example, Bush first referred to the “enemies of freedom.” He then referenced “a loosely affiliated terrorist organization.” While these descriptions are interspersed with more specific terms such as Al-Qaida and bin Laden, the multiplicity of terms lends itself to an ambiguous definition.

7. Conclusion
The motive of the rhetor is revealed through the application of the comic frame in addition to other Dramatistic elements. It is important to understand that the rhetor’s motives revolve around maintaining social order. Additionally, it is important to understand the rhetor’s definition of social order. For Bush, the social order is upheld when individuals follow the social movement that he has defined. In this specific situation, the social order is reaffirmed as Americans and their allies support the “War on Terrorism.” Naming the enemy in ambiguous terms enables Bush to continue to redefine the terms of this war and consequently control its longevity.
Social movements that appear to present a choice but in actuality do not should continue to be examined. In this specific text, Bush repeatedly offers a choice verbally. However, a choice does not truly exist. In his use of the Americans/Terrorists cluster, Bush chooses sides for the individuals who have fallen into each category. Furthermore, he decides what characteristics allow certain individuals membership in the categories. Since membership is predetermined within the text, Bush does not need to argue for Americans or those who agree with the “American morality” to join the movement.

The need for Bush’s statement and the magnitude of suffering because of the events of September 11 are not questioned in this essay. In moments of crisis, it is often rhetoric that answers the call for guidance and assurance. It is the power of rhetoric that enables a leader who had once been labeled as a poor speaker to rise and deliver what some are calling the most powerful speech of modern times.
But it is also the power of rhetoric to move and motivate people. It is the power of rhetoric that often reminds us of who we want to claim as ours and whom we want to cast into the fire. Just as words reflect our reality, they can also shape and reshape our understanding of the world. In times of crisis, it becomes imperative to understand how rhetors are using symbols and the impact of these messages.
In answering the need for a response, President George W. Bush defined who was to blame, defined a course of action, and labeled sides so as to rally support for a cause. The rhetoric scrutinized to reveal such tactics has historically been rhetoric with a decidedly unethical basis. Numerous scholars have examined Hitler’s rhetoric to reveal its unethical constructs. And rightly so. But the question must be asked as to how we as scholars should approach a text in which the ethical basis is vague. How do we approach a text when the wounds are still visible and festering? How do we maintain our objectivity when the strands of our moral fiber are inherently woven within the rhetor’s message?
It is these questions that should guide us in the years to come.

REFERENCES
Bumiller, E. & Bruni, F. (2001, September 19). A nation challenged: The President; In this crisis, Bush is writing his own script. New York Times.
Burke, K. (1937). Attitudes toward history. Berkeley: University of California Press.
Burke, K. (1969). A grammar of motives. Berkeley: University of California Press.
Carlson, A.C. (1986). Gandhi and the comic frame: Ad bellum purificandum. Quarterly Journal of Speech, 72, 446-455.
Carlson, A. C. (1988). Limitations on the comic frame: Some witty American women of the nineteenth century. Quarterly Journal of Speech, 74, 310-322.
Griffin, C.J. (2000). Movement as motive: Self-definition and social advocacy in social movement autobiographies. Western Journal of Communication, 64 (2), 148-164.
Max, D.T. (2001, October 7). The making of the speech. The New York Times Magazine, 33-37.




ISSA Proceedings 2002 – Assessing The Problem Validity Of Argumentation Templates: Statistical Rules Of Thumb

Burden of proof, a central concept in argumentation theory, situates the requirements for good argument within bodies of substantive knowledge and practical action (Gaskins, 1992). To respond to the burden of proof associated with any claim means providing grounds for acceptance that are adapted to a constellation of related beliefs and prior experience. Burden of proof should not be assumed to be a set of logical requirements, but instead should be understood as an outline of what is known so far that might constitute grounds for challenging claims of some particular sort within some particular substantive domain. The burden of proof that structures scientific argument in any field should be expected to change over time, as disagreement over particular claims reveals general grounds for disagreement with whole classes of claims. For this reason, scientific arguments contain myriad allusions to argumentative failures of the past, answering objections no one may actually have, simply because someone could have that objection or has had that objection to some other claim in the past.
Within expert fields of all kinds, and especially scientific fields, the burden of proof to be discharged may evolve over time as new issues emerge from research and theorizing. Among the discoveries of scientific fields are discoveries of things that can go wrong in drawing conclusions about the subject matter. Such discoveries are likely to stimulate the invention of new methods for guarding against the things that can go wrong, including routinized safeguards applied in research procedures (like “double-blind” administration of experiments or use of drug placebos). These routinized safeguards and boilerplate arguments associated with them often come to be understood by scientists themselves as their methods (McCloskey, 1985).
Disciplinary research practices may be seen as a kind of technology of reasoning and argumentation, embodied in new devices (such as statistics) that have been designed to serve an argumentative purpose and that may become interactionally stabilized in scientific discourse. As distinct from natural, commonsense reasoning, disciplined argumentation has a “designed” quality that comes from the tuning of argumentation to the requirements of the subject matter. As pointed out by Walton (1997), the more specialized these become, the more impenetrable they become for anyone other than a specialist. In this paper we illustrate how relatively impenetrable expert practices such as statistical testing can be opened to theoretical analysis, blending concepts and methods from pragma-dialectics with systematic computer simulation of certain designs for arguing.

1. Pragma-dialectics
Pragma-dialectics is a theoretical, critical, and empirical research program built on a view of argumentative discourse as an exchange of speech acts directed to the resolution of doubt and disagreement. Dialogue, the interaction between a protagonist of a viewpoint and an antagonist who questions or disputes this viewpoint, is a central theoretical construct, applied not only to discussion and debate, but also to individual texts occurring within broad controversies. Argumentation is assumed to be a set of methods for isolation and repair of disagreements emerging from virtually any form of practical action, shaped by norms of reasonableness embodied in an ideal model for critical discussion (van Eemeren, Grootendorst, Jackson & Jacobs, 1993).
The underlying critical ideal applies to argumentation occurring in all fields of endeavor, from ordinary conversation (where it serves to regulate misalignment among interactants in belief and action) to technical and scientific discourses (where it serves to regulate change in disciplinary understandings of phenomena). Pragma-dialectical theory asserts a fundamental set of field-independent rules for the conduct of argumentation, and it also acknowledges the existence of specialized rules within individual fields such as law and policy. In particular, any field may have its own associated procedures for evaluating new assertions as they are introduced into a discussion. These are known as intersubjective testing procedures, or ITPs (van Eemeren & Grootendorst, 1984, p. 167).
Intersubjective testing procedures are methods agreed to by discussants in advance of any particular local disagreement, and in canonical pragma-dialectical theory the ITP is part of the bundle of mutually accepted starting points identified in the opening stage of an argument. Both protagonist and antagonist must agree on the sufficiency of the ITP, though if this agreement is not already established, the discussants may make the ITP itself a matter of meta-discussion. When the meta-discussion over an ITP must be conducted by experts external to the primary-level discussion, the ITP ends up having the same strengths and weaknesses as other forms of authority-dependent argumentation.

For the most part, ITPs in expert fields must operate as Walton (1996) describes for other forms of “presumptive argument.” The ITP, once established within the field’s practice, can be applied wherever relevant to produce conclusions that enjoy a very strong presumption. An assertion that might be doubted or contradicted within a discourse, once passed to the ITP, acquires a presumptive status, either as verified or as falsified by the ITP. The acceptability of the ITP does not have to be defended in each occasion of use; what has to be defended is a refusal to accept the results of the ITP as an adequate defense of the tested assertion.
Much depends on the reliability of the ITP, since in many ways it functions as an argumentative “black box” that generates presumptions for or against particular assertions. In pragma-dialectics, the reliability of an ITP or any other argumentative move is known as its problem validity (van Eemeren & Grootendorst, 1994). Problem validity (or problem-solving validity) refers to a procedure’s capacity to contribute to the idealized goals of argumentation – that is, to the resolution of disagreement on the merits of the competing positions. In commonsense terms, a procedure lacks problem validity if it leads arguers into false conclusions, false consensus, paradox, impasse, or other argumentative failure. Problem-valid procedures contribute to the quality of argumentation, either providing new ways to resolve doubt or offering new protections against missteps.

The idea of problem validity is a bridge between pragma-dialectics as a critical and empirical enterprise and a pragma-dialectical program of design. ITPs and the argumentative forms that develop around them are design solutions to recurring argumentative problems. Any newly proposed ITP or associated argument form might be an advance for argumentative practice within its field, but until its problem validity is known, it should be regarded as a potential design failure.

2. Argumentation Templates
Within expert fields of all kinds, and especially scientific fields, argumentative practice tends to stabilize around ITPs to produce stereotyped forms of demonstration and defense of claims. We will use the term “argumentation template” to refer to these stereotyped forms. These templates function as outlines for the development of an argument, including not only formal and functional qualities captured in the notion of an argumentation scheme, but also procedural and presentational guidance for the arguer attempting to develop a case for a scientific claim, starting from scratch. Clear contemporary examples of argumentation templates are formats for writing research reports or for writing environmental impact statements.
Argumentation templates of this kind are not simply outlines for writing, however. These templates amount to a synopsis of the burden of proof to be met by empirical claims, often defining specific assurances an expert must provide in order to produce an argument that will be convincing to other experts. In scientific fields, the assurances invoked by standard templates generally involve observational and analytic steps, including laboratory procedure and statistical analysis. While the connections between specific concrete research procedures and any particular empirical claim may be quite obscure, these procedures, once widely accepted, allow individual scientists to hand off portions of the burden of proof associated with the claim and to have that burden discharged by the accepted procedures themselves.
Among the most common of scientific handoffs are those involving statistical analysis of observational data. This handoff may occur very literally, as when the researcher delegates analysis to a statistician or to a statistically sophisticated assistant. But even when the researcher conducts his or her own analysis, an argumentative handoff often occurs through the importation of a complex but unarticulated substructure into the empirical argument. In Toulmin’s (1958) terms, we would want to regard statistical tests as warrants for drawing empirical conclusions from data; but if a test is treated as a warrant, its backing is an open-ended and possibly not-fully-coherent body of statistical theory that becomes increasingly obscure as the warranting move becomes increasingly common (Gigerenzer et al., 1989, esp. pp. 106-109). Whether the handoff is literal or figurative, then, conventional statistical procedures introduce deep dependencies on authority into argumentation templates. There is efficiency in this if the procedures are good ones, but there is also the risk that the procedure will come to be treated as a black box whose workings are mysterious but whose results are accepted without question. It is quite convenient, in fact, to think of some argumentation templates as actually including black boxes that turn data into conclusions.

There is little doubt that on the whole the growth of statistics has improved our ability to reason about both the natural world and social phenomena, and these improvements have stabilized into highly successful argumentation templates (such as the stylistic and substantive requirements of the APA Publication Manual). However, any particular proposal for statistical analysis may either improve our ability to reason or set it back in some unexpected way. In the rise of statistical thinking over the past several centuries we can see the invention of new safeguards against error, but we can also see that new fallacies get invented right along with nonfallacious moves, and that these two sometimes stabilize into widely applied templates. The emphasis within pragma-dialectics on procedure and procedural rules provides some unusual and powerful tools for examination of these argumentation templates as abstract designs for the management of doubt.

3. Evaluating Problem Validity
Central to establishing the problem validity of any argumentative structure or strategy is examination of how that structure or strategy advances or impedes the abstract goals of critical discussion. In foundational statements of pragma-dialectics, problem validity is a matter of testing a set of rules for their ability to contribute to resolution. Argumentation templates are not exactly rules in the pragma-dialectician’s sense, but rather standard ways of attempting to conform with rules such as those defining the idealized practice of critical discussion. Many argumentation templates come about as ways of invoking or reporting the outcome of intersubjective testing procedures established within an expert field, and the intersubjective testing procedures, in turn, come about as ways of regulating the introduction of new assertions. We can extend the examination of problem validity to any component of argumentation that becomes part of a field’s standard practice.

A general methodology for evaluation of problem validity would include several steps:
(1) reconstruction of the argumentative move to be evaluated, including both formal design features and informal accommodations worked out in practice;
(2) comparison of the generalized output from this move with a critical standard to identify any vulnerabilities; and
(3) investigation of how these vulnerabilities look in actual instances of argumentation.

A noteworthy feature of this methodology, and one that is particularly characteristic of pragma-dialectics, is the emphasis on examination of what results from the practice to be evaluated. Problem validity is about the suitability of an argumentative move for advancing arguments within some practical setting. Problem validity has to do not with the qualities of individual bits of argumentation, but with pragmatic properties of rules or other agreements about how to conduct discussion.

Problem validity has some general affinities with the concept of “ecological rationality” as interpreted within the work of Gigerenzer and “the ABC Research Group” on adaptive thinking (Gigerenzer, 2000; Gigerenzer, Todd, & the ABC Research Group, 1999). Ecological rationality is reasoning that is well-adapted to the environment in which it occurs, taking advantage of the structure of that environment to gain efficiency or reliability. In the ABC research program, shortcut reasoning heuristics and rules of thumb are examined in terms of their success in supporting good decisions. A heuristic may have little or no logical defensibility but still be very successful in its actual use.

Heuristics and rules of thumb are common in all human reasoning, and are often treated analytically as fallacies and biases. But some of these heuristics can be given a convincing defense as “fast and frugal” methods for making decisions. Gigerenzer and associates (1999) have shown, using computer simulation of judgments, that supposedly biased judgmental strategies are often beautifully adaptive to information environments with predictable structure. The gist of the ABC group’s argument is that heuristic reasoning is not a poor substitute for either ‘unbounded rationality’ or ‘optimization under constraints,’ but an adaptive response to contexts of choice that are already structured to prefer certain kinds of strategies. Very simple and apparently unreasonable heuristics for decisions under uncertainty can be shown to be ecologically rational, by showing that these heuristics, applied in certain environments, produce good decisions with minimum cost. A minimal simulation of this kind is sketched below.
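The following sketch is in the spirit of the ABC group’s demonstrations but is ours, not theirs: the three-cue environment, its weights, and all numbers are invented for illustration. It compares a frugal one-cue “take the best” rule with a unit-weight “tallying” rule on a paired-comparison task whose criterion is dominated by a single valid cue; in an environment with this structure, consulting only the best cue gives up little accuracy.

import numpy as np

rng = np.random.default_rng(0)
n_pairs = 10_000

# Invented environment: the criterion is driven mostly by cue 1.
cues_a = rng.normal(size=(n_pairs, 3))
cues_b = rng.normal(size=(n_pairs, 3))
weights = np.array([1.0, 0.2, 0.1])
truth = cues_a @ weights > cues_b @ weights  # which object really scores higher

# "Take the best": consult only the single most valid cue.
take_best = cues_a[:, 0] > cues_b[:, 0]
# "Tallying": count how many cues favor each object (unit weights).
tallying = (cues_a > cues_b).sum(axis=1) >= 2

print("take-the-best accuracy:", (take_best == truth).mean())
print("tallying accuracy:     ", (tallying == truth).mean())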

The general idea that we may adopt a broad rule for decisions based on its overall productivity has direct relevance to statistical testing, which is broadly understood by scientists themselves as adoption of a decision rule for interpretation of experimental outcomes. The idea that a rough heuristic may prove to be defensible on the same grounds has direct relevance to our specific topic, which is rules of thumb for application of statistical tests. Especially relevant, though, is the idea that we might test any decision-making strategy, including an ITP, by simulating its use in conditions controlled through explicit modeling.

4. Rules of Thumb for Application of Statistical Tests
Much empirical work in the social sciences involves statistical tests of the differences among groups of observations. A significant result is taken as evidence of a difference, a relationship, or an effect, allowing for a very simple argumentative structure to apply in many cases:
Effect E is indicated by test T.
T rarely produces false indications when properly applied.
T has been properly applied.
Therefore (presumptively), E.

For example, an experiment on alternative teaching strategies might involve testing differences in exam scores for several groups of students, or an experiment on alternative persuasive strategies might involve testing differences in responses for several audiences. Statistical tests suitable for these purposes are well known and include t-tests for differences between two group means and F-tests for differences among three or more means.
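A minimal sketch of these two tests, using Python’s scipy library; the exam scores, group means, and sample sizes below are hypothetical.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical exam scores for three teaching strategies, 30 students each.
group_a = rng.normal(70, 10, size=30)
group_b = rng.normal(75, 10, size=30)
group_c = rng.normal(72, 10, size=30)

# t-test for a difference between two group means.
t_result = stats.ttest_ind(group_a, group_b)
# F-test (one-way analysis of variance) for differences among three means.
f_result = stats.f_oneway(group_a, group_b, group_c)
print(f"t = {t_result.statistic:.2f}, p = {t_result.pvalue:.3f}")
print(f"F = {f_result.statistic:.2f}, p = {f_result.pvalue:.3f}")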

The idea that “T rarely produces false indications when properly applied” could open a disagreement space of its own, but it rarely does within social science practice. For purposes of empirical argument within research contexts like these, a researcher who has collected observations of a certain kind may defend a claim about an effect such as a group-to-group difference simply by presenting results of a standard test such as a t-test or an F-test. The justification for the test itself is typically external to the empirical field in which the test is applied, having been delegated sometime in the mid-1900s to statistics as a subfield within mathematics (Gigerenzer et al., 1989, esp. pp. 115-118). That T rarely produces false indications when properly applied is generally taken for granted, though the researcher is then under obligation to provide assurances that the test has in fact been properly applied. If these assurances can be given, letting the test function as an unquestioned black box is as reasonable as the theory backing the test.

Among the assurances a researcher must provide are assurances of the quality of measurement, the quality of the observational sample, and the fairness of the comparative design. These assurances, while interesting, have no further extension in our case study. The assurances that will concern us most are those that condition the interpretation of the results of the statistical test: those commonly known as statistical assumptions (for an overview of these assumptions, see any good textbook treatment of the analysis of variance, such as Keppel, 1991, esp. ch. 5). The common F-test for differences among group means assumes that observations taken within the groups are drawn independently of one another from one or more populations whose elements have normally distributed values on the variable measured as an outcome of the experiment. These are commonly known as the independence assumption and the normality assumption, respectively. The test also assumes that if several populations are sampled, their members are equally heterogeneous. This is commonly known as the homogeneity of variance assumption. If any of the assumptions are violated, the acceptability of the statistical test itself may be called into question.

The statistical assumptions are very difficult to verify in any actual research situation, and for this reason researchers cannot usually provide these assurances directly. Assurances that the assumptions are met for the actual occasion of use must be obtained through examination of the same data as used in the test itself. Hence, the argumentation templates that have evolved around significance tests for group differences include specialized procedures for evaluating the reasonableness of each assumption, by testing for “violations” of various assumptions. Since the assumptions are in fact often violated, the actual use of significance tests is adjusted over time in response to decontextualized studies of the behavior of the statistical tests, known as “robustness studies.” The purpose of a robustness study is to determine how badly a test behaves under varied deviations from the ideal observational situation. A test that works well despite violations of assumptions is said to be robust to those violations.
The behavior of a statistical test is normally assessed in terms of its ability to control the rate at which errors of inference are made from data. “Type I error” is concluding that a difference exists when it does not, while “Type II error” is failing to find authentic differences. All sample data show differences of some kind, and the function of a statistical test of observed differences is to differentiate between differences that reflect real effects and differences that reflect only chance variation within a sample. Type I error can be set to any desired rate through designable features of tests; by broad and stable convention, Type I error is controlled at 5%. In other words, tests for all kinds of differences are structured so that, if there are no true differences to be found, the test will (falsely) find differences in no more than 5% of the cases.
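The 5% convention can be checked directly by simulation. The sketch below is our illustration, separate from the simulations reported later in this paper: it generates many experiments in which no true difference exists and counts how often a t-test falsely declares one. With all assumptions met, the empirical rate should sit near .05.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, alpha = 10_000, 0.05
false_positives = 0
for _ in range(n_sims):
    # No true effect: both groups drawn from the same population.
    a = rng.normal(0, 1, size=25)
    b = rng.normal(0, 1, size=25)
    if stats.ttest_ind(a, b).pvalue < alpha:
        false_positives += 1
# With assumptions met, the empirical rate should be close to 0.05.
print(f"empirical Type I error rate: {false_positives / n_sims:.3f}")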

Type I error (and also Type II error) may vary dramatically from what the scientist expects if the assumptions required by the test are violated – but then again, they may not. What happens to Type I error rates if the observations come from something other than a normal distribution? That is the kind of question answered by robustness studies. A test that has been shown to be robust to a certain kind of violation offers the individual researcher a boilerplate rebuttal for criticisms related to the violated assumption, which can also be woven preemptively into an argumentation template to implement a structure like the following:
Effect E is indicated by test T.
T rarely produces false indications when properly applied or in other situations S1, …, Si, …, SN.
Si obtains.
Therefore (presumptively), E.

Often, these boilerplate rebuttals get appropriated into routine scientific practice as rules of thumb. Rules of thumb are common enough in statistical reasoning that van Belle (2002) recently summarized 99 such statistical and methodological rules (e.g., “make a sharp distinction between experimental and observational studies;” “randomization [of experimental subjects into groups] puts systematic sources of variability into the error term;” “consider the size of the population affected by small effects;” and “beware of pseudoreplication”). van Belle provided a basis for each rule, an illustration of how it works in statistical reasoning, and extensions of the rule. Some rules of thumb were formed based on statistical and methodological theory (e.g., the principles of randomization can be traced to Fisher’s (1935) work on experimental design) and others arise from practical circumstances when statistics are applied (e.g., epidemiological work shows that small effects are important when researchers are dealing with large populations – a small effect of a disease in a large number of people may still mean that many will die).

Rules of thumb related to assumptions enter social science practice through textbooks, through summaries of robustness research appearing in textbooks and research handbooks, and through explicitly argued proposals for handling specific kinds of problems. For example, various texts point out that “heterogeneity of variance” is a benign violation so long as the variance of the most heterogeneous group is no more than three times the variance of the least heterogeneous group (see, e.g., Keppel, 1991). The basis for this rule of thumb is a body of robustness studies, one showing little harm from heterogeneity on the order of 3:1, and others showing considerable harm from much larger differentials. Although the empirical analysis provided by robustness studies gives good grounds for confidence in F-tests performed on mildly heterogeneous groups and equally good grounds for concern about F-tests performed on horrendously heterogeneous groups, the 3:1 rule of thumb is itself a product of happenstance in robustness researchers’ choices of conditions to examine.
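A small robustness study of this rule of thumb can be sketched as follows; the sample sizes, variance ratios, and simulation counts are arbitrary choices of ours. The sketch estimates the empirical Type I error of an equal-variance t-test when the true group variances differ by 1:1, 3:1, and 9:1. With equal group sizes the test is fairly forgiving, which is part of what the robustness literature found; combining unequal variances with unequal group sizes behaves much worse.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def type1_rate(var_ratio, n=20, n_sims=5_000, alpha=0.05):
    # Empirical Type I error of an equal-variance t-test when the true
    # variances of the two groups differ by var_ratio (means are equal).
    hits = 0
    for _ in range(n_sims):
        a = rng.normal(0, 1.0, size=n)
        b = rng.normal(0, np.sqrt(var_ratio), size=n)
        if stats.ttest_ind(a, b, equal_var=True).pvalue < alpha:
            hits += 1
    return hits / n_sims

for ratio in (1, 3, 9):  # variance ratios 1:1, 3:1, 9:1
    print(f"variance ratio {ratio}:1 -> Type I rate {type1_rate(ratio):.3f}")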

Notice that just as we can examine the behavior of a specific statistical test as it is applied in any desired conditions, we can also examine the behavior of the associated rules of thumb. So long as the rule of thumb can be stated as a decision rule applied systematically, it can be modeled using the same kinds of computer simulation methods used in robustness studies (and in studies of the ecological rationality of heuristics).

5. Evaluating a Rule of Thumb for Non-independent Data
Independence of observations, as noted above, is one condition or rule stipulated for many statistical tests (e.g., independent samples t-tests, chi-square tests, F-tests for independent group means, and so on). When observations are collected in pairs or groups, it is generally acknowledged that it is inappropriate to treat them as independent. As Kenny and Judd (1986) demonstrated, treating scores for individuals within dyads or groups as independent risks bias in statistical significance tests, with the amount and direction of bias varying with the amount of dependency – that is, the size of the intraclass correlation among the participants within groups – and the experimental design. Non-independence occurs when scores are correlated and may result from natural associations between participants in a study, such as when intact dyads (e.g., parent/child, partners in a relationship, or coworkers) are used as participants. Kenny and Kashy (1991) noted that these forms of non-independence are common in research on interpersonal relationships.
Non-independence also can result from the particular circumstances of the data collection, such as when groups of participants within a study respond to the same stimuli (see Jackson & Brashers, 1994). For example, in research on social influence, it is necessary to manipulate variables by embodying the contrast of interest in concrete materials: for example, by writing a message and varying it in some respect to produce two or more versions that represent a treatment contrast. In an experiment on the effects of authority on persuasion, a variety of messages (e.g., on AIDS, crime prevention, voting, and immigration policy) might be altered to have two versions that vary in uses of authority – for example, putting forward assertions attributed either to authentic authorities or to non-authoritative sources. In a completely randomized design, participants in the experiment read one or the other version of a message, and then complete an attitude or behavioral intention measure to determine if there are different responses to messages differing in their use of authority. Multiple replications of the treatment contrast are used to allow inference from individual messages (e.g., on AIDS or immigration) to broad, categorical differences in message strategy (e.g., to the benefit of citing authorities). But these replications are a potential source of non-independence, because subgroups of participants are responding to common stimuli. In a replicated design, where observations fall into subgroups defined by replication levels, the observations within one subgroup are more related to one another than to observations taken within other subgroups, and this relatedness can extend across the treatment levels as well (e.g., relating the individuals who got the authoritative version of the AIDS message to the individuals who got the non-authoritative version of the same message). If the replication factor is ignored and all observations are classified only with respect to other factors (e.g., the authority treatment factor), then the assumption that observations are independent may be violated, because observations correlated due to common stimuli would be treated analytically as though uncorrelated. Replications, in other words, may become a “hidden factor” in a design, with all subjects receiving one treatment being considered one large group rather than a number of subgroups characterized by the particular experimental materials they received.
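Non-independence of this kind can be modeled directly with a variance-components construction: each score is the sum of a shared subgroup effect and individual error, so scores within a subgroup correlate at a specified intraclass correlation. The helper below is a hypothetical construction of ours; the function name and parameters are invented for illustration.

import numpy as np

rng = np.random.default_rng(4)

def grouped_scores(n_groups, n_per_group, icc, rng):
    # Each score = shared group effect + individual error; with total
    # variance 1, the within-subgroup correlation equals icc.
    group_effects = rng.normal(0, np.sqrt(icc), size=n_groups)
    errors = rng.normal(0, np.sqrt(1 - icc), size=(n_groups, n_per_group))
    return group_effects[:, None] + errors  # shape (n_groups, n_per_group)

# e.g., 6 replication messages, 10 readers each, intraclass correlation .3
data = grouped_scores(6, 10, 0.3, rng)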

When non-independent observations are treated as though they were independent, the Type I error rate for the test is no longer known; it is no longer assured, that is, that “test T rarely produces false indications.” The rate of Type I errors may be much higher than expected, a problem known as “alpha inflation” (since the rate set for Type I error is known as “alpha”). Barcikowski (1981) demonstrated through statistical simulation that treating observations from groups nested under treatments as though the observations within treatments were independent leads to substantial alpha-inflation (more Type I errors than we should expect with a set alpha-level), with the size of the alpha-inflation increasing with the size of the intraclass correlation and the number of observations per group. Kenny and Judd (1986) examined both within-group and between-group dependencies and found that both forms of non-independence could bias a test, though the direction of bias (alpha-inflation or alpha-deflation) differs by type of non-independence.
Regardless of how observations are collected, however, an absence of correlation among observations allows the test to perform just as expected. If, despite dependent sampling, the intraclass correlation is zero, or if there are no within-group or between-group correlations, the test of differences among means will have the nominal Type I error rate. Noticing this fact, some experts have proposed rules of thumb for the handling of potentially non-independent data that allow direct application of a test when there is no evidence of non-independence but require adjustments or alternate tests when evidence of non-independence appears. In general, non-independence can be handled by taking the “hidden factors” responsible for the non-independence explicitly into account. For example, when experimental observations can be subdivided not only by treatments but also by replications, taking replications into account as a partitioning factor eliminates the non-independence among the individual observations within groups.
Kenny and Kashy (1991) described a rule of thumb for dealing with possible non-independence and for deciding what test to use to analyze data collected in pairs, structured as a two-step testing procedure. At step 1, a test for non-independence is conducted, using a very liberal criterion to avoid Type II error. At step 2, the test that is conducted depends on the outcome of the preliminary test: if the preliminary test shows no evidence of non-independence, the main analysis can be conducted as though the observations were fully independent, while if evidence of non-independence appears, some alternative form of analysis is required. Others (e.g., Forster & Dickinson, 1976) have proposed similar rules of thumb for other possible sources of non-independence.
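Stated as an explicit decision rule, the two-step strategy might be formalized as in the sketch below. This is a generic reconstruction of ours, not Kenny and Kashy’s exact procedure (theirs is framed in terms of dyads and intraclass correlations): step 1 runs a liberal F-test for subgroup effects within each condition, and step 2 either pools individual observations or retreats to subgroup means, depending on the step-1 outcome.

import numpy as np
from scipy import stats

def two_step_test(cond1, cond2, alpha_pre=0.25, alpha=0.05):
    # cond1, cond2: arrays of shape (n_groups, n_per_group), one per condition.
    # Step 1: liberal preliminary test for subgroup (non-independence)
    # effects within each condition, to guard against Type II error.
    pre_p = min(stats.f_oneway(*cond1).pvalue, stats.f_oneway(*cond2).pvalue)
    if pre_p >= alpha_pre:
        # No evidence of non-independence: treat all scores as independent.
        p = stats.ttest_ind(cond1.ravel(), cond2.ravel()).pvalue
    else:
        # Evidence of non-independence: use subgroup means as the units.
        p = stats.ttest_ind(cond1.mean(axis=1), cond2.mean(axis=1)).pvalue
    return p < alpha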
Evaluating this rule of thumb is not quite as straightforward as evaluating a statistical test, since the rule of thumb depends on modeling a judgment and not just a distribution of outcomes. An annoying feature of rules of thumb is that they tend not to be applied with complete consistency, but with a certain amount of opportunism varying according to the individual taste of the researcher. Nevertheless, if we want to evaluate the rule of thumb itself, and not the behavior of the individual researcher, we may make some progress by formalizing the rule and modeling what would happen if it were applied with complete consistency within a community of researchers.

Adapting methods common in robustness studies, we developed a simulation of two kinds of situations producing non-independent data:
(1) situations in which all of the members of a subgroup are assigned together to one treatment condition in an experiment, and
(2) situations in which the members of a subgroup are divided between two treatment conditions. A complete technical report of the simulations is available elsewhere (Jackson & Brashers, 1993).

Very briefly, though, the simulation involved random generation of data with specific features, and application of testing strategies to these data to produce empirical Type I error rates. Varying the size of the simulated experiments (number of groups and number of observations per group) and the magnitude of the intraclass correlations, we built into the simulation three contrasting analytic strategies: an unconditional test treating all observations as independent, a conditional testing strategy that models the consistent application of the rule of thumb described above, and an unconditional test in which the source of non-independence (e.g., subgroups) is included as an explicit factor. Using computer algorithms based on SAS functions, we ran thousands of simulated experiments of each type and size, tabulating the frequency with which each testing strategy produced a statistically significant result.
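Our simulations were written with SAS functions; the sketch below is a simplified Python analogue, not the original code, and its group counts, intraclass correlation, and simulation counts are placeholders. It generates null-hypothesis data with built-in within-subgroup correlation and tabulates empirical Type I error for the three strategies; with only two conditions, t-tests stand in for F-tests. Under settings like these, the unconditional rate should be well above .05, the conditional rate reduced but still inflated, and the subgroup-means rate near .05, the pattern reported below.

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def make_condition(n_groups, n_per_group, icc):
    # Variance-components data: scores within a subgroup share a group effect.
    g = rng.normal(0, np.sqrt(icc), size=n_groups)
    e = rng.normal(0, np.sqrt(1 - icc), size=(n_groups, n_per_group))
    return g[:, None] + e

def simulate(n_sims=2_000, n_groups=5, n_per_group=10, icc=0.3, alpha=0.05):
    hits = {"unconditional": 0, "conditional": 0, "group_means": 0}
    for _ in range(n_sims):
        # The null hypothesis is true: both conditions generated identically.
        c1 = make_condition(n_groups, n_per_group, icc)
        c2 = make_condition(n_groups, n_per_group, icc)
        # Strategy 1: ignore subgroups, treat all observations as independent.
        if stats.ttest_ind(c1.ravel(), c2.ravel()).pvalue < alpha:
            hits["unconditional"] += 1
        # Strategy 2: the two-step rule of thumb (liberal preliminary test).
        pre = min(stats.f_oneway(*c1).pvalue, stats.f_oneway(*c2).pvalue)
        if pre >= 0.25:
            p = stats.ttest_ind(c1.ravel(), c2.ravel()).pvalue
        else:
            p = stats.ttest_ind(c1.mean(axis=1), c2.mean(axis=1)).pvalue
        if p < alpha:
            hits["conditional"] += 1
        # Strategy 3: always respect the subgroup structure (subgroup means).
        if stats.ttest_ind(c1.mean(axis=1), c2.mean(axis=1)).pvalue < alpha:
            hits["group_means"] += 1
    return {k: v / n_sims for k, v in hits.items()}

print(simulate())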

Consistent with earlier findings, the unconditional test was biased, with the magnitude (and direction) of bias determined by the magnitude and form of non-independence and by study size. Type I error was enormously inflated under some conditions that are actually fairly common in social science research. Using the conditional testing strategy, this bias was substantially reduced, but not eliminated. The reason for this is that the test for non-independence may fail to detect the non-independence, even when it is built into the composition of the observations to be analyzed (a problem of Type II error). The “presumption” is misplaced in any such testing strategy, since the data are presumed to be independent unless it is shown that they are dependent. An unconditional test built on a presumption of non-independence among observations within subgroups behaves exactly as it should, producing significant results in 5% of all experiments.

Jackson and Brashers (1993) noted that any procedure constructed in this way will be vulnerable to the same “fallacy of misplaced presumption.” If group effects are present in the population, any test conducted ignoring the group effect will be biased, so we should treat related observations as dependent whenever we are not confident that group effects are absent. But the testing strategy above selects the individual as the unit of analysis whenever there is no positive evidence that group effects are present. The presumption should favor treating grouped data as dependent (since this results in an unbiased test regardless of the size of the group effect), but the policy outlined awards the presumption to treating grouped data as independent by requiring positive evidence of group effects to generate the choice of group as the unit of analysis. While preferable to an incorrect test applied unconditionally, the conditional testing strategy is inferior to a consistent policy of conducting a test that allows interdependence among observations.

We could describe this fallacy of misplaced presumption in more familiar terms as a version of the argument ad ignorantiam, since independence is considered to have been established through the absence of clear evidence of non-independence. Structurally, the argument form looks something like the following:
E is indicated by T.
T rarely produces false indications when properly applied.
T is properly applied if no assumptions are violated.
No assumptions are (known to be) violated.
Therefore, E.

But the fallacy of misplaced presumption differs from an ad ignorantiam form arising from simply ignoring the possibility of non-independence. Its defining difference is in the practical decision to treat data as independent whenever a test for dependence fails to show “indications” of dependence.

The simulation methods used to evaluate the research policy suggested by the rule of thumb can be adapted to evaluation of individual empirical arguments. The observational and analytic choices can be modeled by creating a simulated experiment of the same size and design and randomly generating many repetitions of the experiment with varied assumptions about the underlying process. Brashers (1994) demonstrated this method in his critical examination of research practices in communication and psychology, modeling dozens of studies making varied analytic decisions about experimental replication factors. For example, Brashers simulated the procedures of Fein and Hilton’s (1992) study of consistency between attitudes toward groups and attitudes toward individual members of those groups. Fein and Hilton used the two-step testing strategy to decide whether to include experimental replications as an explicit factor or to “hide” the factor and treat all observations within groups as independent. The initial test showed no significant effects involving the replications factor, so following the policy suggested by the rule of thumb would mean going forward with analysis ignoring the potential non-independence among observations sharing assignment to the same replication. Using evidence from the published results to set upper bounds for certain kinds of dependency, Brashers showed Fein and Hilton’s testing strategy to involve much more than a 5% chance of Type I error.

6. Conclusion
In the social and behavioral sciences, statistical tools and techniques figure heavily in empirical argumentation templates. But empirical social science, despite its visible adherence to templates incorporating formal requirements of proof, is far less formal in its methodology than is commonly noticed. A careful and rigorous enforcement of statistical standards of proof in empirical demonstration is blended with a casual and pragmatic acceptance of rules of thumb and other ad hoc solutions to problems of application. In itself, this is no critique of empirical argumentation; these rules of thumb may be quite reasonable, but that must be shown.
We might speculate that statistical rules of thumb are highly disciplined versions of fast and frugal heuristics, not defensible in the abstract, but effective and efficient in practice. Unfortunately, this defense is not available for the argumentative move examined in this study, since regardless of whether dyadic and grouped data are mostly independent or mostly interdependent, nothing much is gained by applying this rule of thumb.

Our point, however, is not merely to mount an objection to a particular rule of thumb, nor to suggest that we always avoid rules of thumb. Rather, what we have tried to show is an approach to the investigation of problem validity within disciplined argument fields. Other rules of thumb for statistical reasoning will fare differently when evaluated for their contributions to empirical argumentation. As it happens, though, in the case examined here, there is a readily available analytic strategy that can be shown to be uniformly acceptable, regardless of whether data show clear evidence of non-independence. In challenging the problem validity of one strategy, we also vouch for the problem validity of an alternative.

REFERENCES
American Psychological Association (2001). Publication Manual of the American Psychological Association (5th ed.). Washington, DC: Author.
Barcikowski, R. S. (1981). Statistical power with group mean as the unit of analysis. Journal of Educational Statistics, 6, 267-285.
van Belle, G. (2002). Statistical rules of thumb. New York: John Wiley & Sons.
Brashers, D. E. (1994). A critical review of the design and analysis of experiments using replications factors. Unpublished dissertation, University of Arizona.
van Eemeren, F. H., & Grootendorst, R. (1984). Speech acts in argumentative discussions. Dordrecht, The Netherlands: Foris.
van Eemeren, F. H., & Grootendorst, R. (1994). Rationale for a pragma-dialectical perspective. In F. H. van Eemeren & R. Grootendorst (Eds.), Studies in pragma-dialectics (pp. 11-28). Amsterdam: International Centre for the Study of Argumentation.
van Eemeren, F. H., Grootendorst, R., Jackson, S., & Jacobs, S. (1993). Reconstructing argumentative discourse. Tuscaloosa, AL: U. of Alabama Press.
Fein, S., & Hilton, J. L. (1992). Attitudes towards groups and behavioral intentions towards individual group members: The impact of nondiagnostic information. Journal of Experimental Social Psychology, 28, 101-124.
Fisher, R. A. (1935). The design of experiments. Edinburgh: Oliver and Boyd.
Forster, K. I., & Dickinson, R. G. (1976). More on the language-as-fixed-effect fallacy: Monte Carlo estimates of error rates for F1, F2, F’, and min F’. Journal of Verbal Learning and Verbal Behavior, 15, 135-142.
Gaskins, R. H. (1992). Burdens of proof in modern discourse. New Haven: Yale U. Press.
Gigerenzer, G. (2000). Adaptive thinking: Rationality in the real world. Oxford: Oxford U. Press.
Gigerenzer, G., Swijtink, Z., Porter, T., Daston, L., Beatty, J., & Krüger, L. (1989). The empire of chance: How probability changed science and everyday life. Cambridge: Cambridge U. Press.
Gigerenzer, G., Todd, P., & the ABC Research Group (1999). Simple heuristics that make us smart. Oxford: Oxford U. Press.
Jackson, S., & Brashers, D. E. (1993, May). Assuming independence when dependence isn’t evident: A fallacy of misplaced presumption. Paper presented at the meeting of the International Communication Association, Washington, DC.
Jackson, S., & Brashers, D. E. (1994). M > 1: Analysis of treatment x replication designs. Human Communication Research, 20, 356-389.
Kenny, D. A., & Judd, C. M. (1986). Consequences of violating the independence assumption in analysis of variance. Psychological Bulletin, 99, 422-431.
Kenny, D. A., & Kashy, D. A. (1991). Analyzing interdependence in dyads. In B. M. Montgomery & S. Duck (Eds.), Studying interpersonal interaction (pp. 275-285). New York: Guilford.
Keppel, G. (1991). Design and analysis: A researcher’s handbook (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.
McCloskey, D. N. (1985). The rhetoric of economics. Madison, WI: U. of Wisconsin Press.
Toulmin, S. E. (1958). The uses of argument. London: Cambridge U. Press.
Walton, D. N. (1996). Argumentation schemes for presumptive reasoning. Mahwah, NJ: Erlbaum.
Walton, D. N. (1997). Appeal to expert opinion: Arguments from authority. University Park, PA: Pennsylvania State U. Press.




ISSA Proceedings 2002 – Two Conceptions Of Openness In Argumentation Theory

One of the central values in dialectical models of argumentation is that of openness. Sometimes this value is embodied in the form of specific rules – such as those in the pragma-dialectical code of conduct (van Eemeren & Grootendorst, 1992) which specify such things as rights to challenge, burden of proof, and so forth. But usually openness has a more informal quality to it. In any case, the concept lacks the precision one finds with, say, the concept of inferential validity in logical models of argumentation where we find not only well-defined exemplars of deductively valid forms of inference, but also a relatively clear definition of validity in general. It is perhaps because of this informal quality that argumentation scholars have not fully appreciated how the value of openness is used in two distinct ways when evaluating the quality of argumentative conduct. In one way, the concept of openness reflects an epistemic orientation. In the other way, the concept of openness takes on a more socio-political orientation. This paper spells out these two different senses of openness, articulates their rationales, and then explores some of the implications of this distinction for understanding the nature of reasonable argumentative conduct.

1. Two Functions of Argumentation.
In large part, these two conceptions of openness in argumentation theory are responsive to two different functions of argumentation: a cognitive function and a social function. So, to get a better lock on the two senses of openness, we begin by considering these two different functions. There has always been a tension in argumentation theory between a cognitive understanding of argument and a social understanding of argument. Logical approaches most clearly exhibit a preference for emphasizing the cognitive function: that of belief management. Logical approaches have a tendency to reduce the argumentative function to processes of individual reasoning – so much so that not only are notions of interaction and audience easily erased from the picture, but discourse itself is largely stripped away until only something called ‘propositions’ remains. But whether or not such a reduction seems prudent, it does isolate this cognitive function of argumentation. Argumentation does clearly have a truth-testing function. It is this epistemological aspect that dominates the study of argument in philosophical traditions. And this concern is quite proper: it derives from the very structure of accountability and reason-giving that forms an integral basis for ordinary language uses of argument.

Rhetorical approaches, alternatively, most clearly exhibit a preference for emphasizing the social function of argumentation: that of disagreement management. Rhetorical approaches have a tendency to reduce the argumentative function to processes of social influence and conflict resolution – so much so that virtually any form of persuasion is included within the scope of the concept, so that any symbolic process (even pictures and music) has sometimes been claimed to be argument (e.g., Fleming, 1996; Shelley, 1996). Again, whether or not this expansion seems fruitful, it does emphasize this social function of argumentation. And argumentation does have a clear social function of disagreement management. It is this social aspect of argument that dominates its study in communication, political science, and the social sciences generally. And again, this attention is quite proper. It is crucial to theories of democratic decision-making and conflict management.

Now, ultimately, careful consideration of these two functions of argumentation reveals that neither really operates independently of the other. And neither has any clear analytic or evaluative priority. The cognitive demands of argumentation make a claim on individual belief in a way that is implicitly social. If an argument is sound for one person, it should be sound for all. In fact, it is this universal projection of reason that gives argumentation its normative claim on the belief of any particular individual. Likewise, the social process of conflict management or persuasion presupposes a particular kind of cognitive functioning. Differences of opinion are to be resolved, consensus is to be achieved, by submitting standpoints to the demand of reasoned justification and public accountability. Arguments should be persuasive only where they can be shown to be inferentially adequate. Still, while the cognitive and the social functions of argumentation do not exist independently, they can be distinguished analytically. And the two senses of the value of openness reflect these two different functions of argumentation.

2. Two Senses of Openness.
Well, what are the two senses?
On the one hand, openness can be taken as an epistemic value. Openness here means something like open-mindedness toward different ideas. It involves a willingness to entertain competing viewpoints. It requires a tentativeness, a non-dogmatic attitude that acknowledges the possibility of error or at least of improvement in thinking. Openness in this sense involves a willingness to entertain criticism, to engage in careful scrutiny of all sides of a position, to encourage efforts at falsification. Moves that discourage entertainment of alternative standpoints, that obstruct full testing of their rationales, or prevent serious consideration of potential objections violate this sense of openness. Openness in this first sense, then, amounts to a call for freedom of inquiry.

On the other hand, openness can be taken as a socio-political value. Openness here means something like open-access to social decision-making and public choice. It involves a willingness to include all interested parties. It requires a respect for the autonomy of individuals, allowing them the right to self-determination. Openness in this sense involves a tolerance of social differences, a non-parochial attitude that accepts and even welcomes social diversity. Moves that discourage active representation of parties’ interests and viewpoints, that coerce compliance, or otherwise restrict participation in processes of mutual influence are moves that violate this sense of openness. Openness in this second sense, then, amounts to a call for freedom of participation.

These two senses of openness are best thought of as solutions to two different kinds of problems that arise in argument design. The call for freedom of inquiry is a particular kind of solution to the problem of how to maximize the discovery of true belief and to minimize adherence to false belief. The call for freedom of participation is a particular kind of solution to the problem of how to maximize freedom of choice and to minimize imposition of choice on others.

Consider first the epistemic problem. This is the problem of how we know when our claims are true (or false), or at least, when we should accept a claim as true (or false). The dominant answer in argumentation theory has gone something like this: We should accept a claim as true (or false) when it has been supported (or refuted) by good arguments. And then argumentation theory gives some general account of what a good argument is or provides specific types of good arguments. Johnson and Blair’s (1994) well-known standards of premise acceptability, strength, and relevance illustrate the former sort of account. Models of syllogistic reasoning or tests of argument from expert opinion or authority (e.g., Walton, 1996) are examples of the latter sort of account. So, if an argument meets these standards or conforms to these models, it is a good one and we should accept its conclusion. If it doesn’t meet these standards, we punt.

Now, the problem with this kind of answer has always been that application of these standards or model forms of reasoning is notoriously difficult. How, exactly, do we decide that an argument is a strong one or that the premises are acceptable? How do we decide that the critical tests for argument from expert opinion have been satisfied? Or how can we be sure that the model form applies to this particular case? And what do we do if no clear model seems to apply? Even more importantly, how do people involved in the argumentation decide this? When can they be said to have made a reasonable judgment?

If we cannot easily answer the question of how to assess the quality of an argument or a case as a whole or cannot give an altogether clear answer to the question as to when it is reasonable to accept or reject a position, one thing to do is to ask a different question. Ask instead, are the procedures reasonable by which these judgments are made? To what degree do we have reason to trust the decision-making process?

And here is where the epistemic value of openness comes into play in dialectical models of argumentation. In a sense, dialectical models kick epistemic problems upstairs to the meta-level. They try to finesse the issue by appealing to the openness of the decision-making process. On this account, the best arguments and most secure standpoints are those that have been subjected to the most critical scrutiny, that have taken into account the most comprehensive body of information, that have been considered against the broadest range of alternatives, that have answered the most determined objections, that have faced and overcome the most skeptical resistance (Jacobs, 2000). In other words, we can best trust decision-making that best encourages free inquiry. So, that’s the rationale for valuing openness in an epistemic sense.

Openness in the sense of free participation addresses a quite different problem. The socio-political problem is the problem of how to cultivate individual autonomy (freedom of choice) under conditions of social interdependence. The traditional answer has been to appeal to democratic deliberation, systems of engagement that are tempered by mutual civility and respect. Such systems must manage the following paradox of human social life: To the extent that persons are recognized as autonomous agents who know their own best interests (or at least have a right to decide for themselves what they want to do), people should be given the power to exercise control over their own lives. But in pursuing self-interests, people inevitably risk exercising control over others. Directly or indirectly the pursuit of personal wants has consequences for other people and their power to pursue what they want. Thus, there is the need to coordinate individual interests, but to do so in a way that provides for voluntary, informed consent. There is a need to find a way to give people personal power over themselves without giving them power over others. How is that to be done?

At least one way to do this is by providing deliberative forums that deploy argumentation. Argumentation, on this account, can provide the impartial, balanced procedures that allow interested parties to enter into a process of mutual influence and consensus decision-making. This is the kind of idea behind much of the contemporary discussion of Habermas’s notion of the public sphere (Goodnight, 1982; Goodnight & Hingstman, 1997; Habermas, 1989) or Rorty’s appeal to conversation (Schudson, 1997; Willard, 1989: 233). But it is also a motivating concern behind more practical and concrete models of deliberation having to do with democratic procedure, legal adjudication, or dispute mediation. Free and voluntary submission to a system of public accountability creates the mutual opportunity for a kind of social influence that preserves free choice.

But any such system only works to the extent that all parties are allowed access and given full and equal opportunity to participate in a process of mutual influence. Exclusion of parties and restriction of their means of participation create undemocratic decision-making. And here is where the value of openness in the socio-political sense comes into play in dialectical models of argumentation. People – not just ideas – must be given free and full opportunity to influence a decision, and the autonomy of their personal decision-making must be respected. Notice here that a concern for power in argumentative discourse and the distortions it brings to social relations is not primarily a concern for its epistemic consequences (though there may also be such consequences). Nor is this concern extrinsic to argumentative analysis; the social quality of argumentation is an intrinsic quality. Deliberation must be conducted in a way that neither closes off entry into the influence process nor coerces acceptance of any particular decision.

3. Tensions Between the Values of Openness.
Under ideal circumstances these two values of openness converge and complement one another. It is pretty easy to see that opening deliberation and debate to the broadest circle of people also increases the diversity of viewpoints, elicits a broader range of objections, criticisms, rebuttals, and refutations, and in general improves the chances of uncovering error and discovering the best case. So politically open systems enable epistemically open decisions. Also, it should be clear that being maximally open to different ideas and opinions makes it less likely that interested parties will be overlooked or excluded and makes it more likely that interested parties will be given the fullest opportunity to make their case, to influence the opinions of others, and to have their own interests respected. So epistemically open systems enhance politically open decision-making.

But that is under ideal circumstances. Under less than ideal circumstances these two values may conflict and compete, especially as arguers deploy argumentative procedures to correct or get around defects in the circumstances for deliberation. For example, a precondition for epistemic openness is participant competence. A precondition for socio-political openness is participant interest. It is quite possible to have politically interested parties who are not epistemically competent. And it is quite possible to have epistemically competent parties who can make no clear claim to a social interest. So, in the first case, it is common enough to find deliberations in which opinions are downplayed or dismissed or participation is closed off altogether on grounds of incompetence. Any time that we test sources for expertise or rely on argument from authority we in effect do this. Likewise, for the second case, it is common enough to find deliberations in which participation is limited to parties with a direct interest in the case at hand. Third party dispute mediation, bargaining and negotiation processes, and various kinds of political and personal conflicts are often restricted in just this way. When we award special weight or respect to personal narratives or subjective experiences, we often do so on the basis of personal interest and not special expertise.

Or again, consider the way in which temporal constraints on deliberative processes may lead to trade-offs between epistemic and socio-political openness. As Jean Goodwin (personal communication) has pointed out, “town hall meeting” formats for talk shows on radio and television force a choice between opening the show to the broadest range of participants and exploring any particular viewpoint in depth. Allowing minimal time for any audience or call-in participant to present their views maximizes participation. But it minimizes the chance to critically scrutinize any participant’s position. Maximizing the time a host spends interrogating a participant allows for more careful understanding and assessment of the participant’s standpoint, but limits the range of people who have access to the floor. A similar trade-off can be seen in the allocation of time to the studio or viewing/listening audience and to experts who are also frequently present on such shows. Presumably, expert contributions improve the quality of the critical questioning while audience contributions expand public participation.

Finally, consider the way in which epistemic and socio-political openness interact in the following concrete case. This is an advertisement from the December 1990 issue of Reader’s Digest. It appeared at a time when the United States Congress was debating funding of NASA’s request for a permanent space station. The text of the advertisement is reproduced below.

(1)
Innovation
A WALK ON THE MOON
LET HIM PLAY IN THE SUN.

For years, Stevie Roper didn’t have hope for a normal life. He was born without sweat glands, a disease called hypohidrotic ectodermal dysplasia, or HED. Without a natural cooling system, Stevie is susceptible to heat exhaustion or stroke; so activities most children take for granted are life-threatening.

Today though, Stevie has a “cool suit” that circulates chilled fluid over his body. It was originally designed in a 1968 NASA program to protect astronauts on the moon. Now it enables Stevie and other HED children to live like normal kids again.
The cool suit story is a classic example of space technology’s tangible impact on our lives. And it’s one reason Space Station Freedom is so crucial. As the next step in America’s space program, Freedom will be a permanently occupied laboratory for medical, scientific and industrial research not possible on Earth.
But the space station needs your support. Without it, other life-saving innovations may go undiscovered. Write Congress. Tell them you want Freedom launched.

Beneath the text is the logo for Lockheed along with the phrase “Giving shape to imagination.” The text is set down the left side of the page, alongside a picture of a cute, somewhat pudgy young boy (presumably Stevie Roper). He is dressed in a Little League baseball uniform and is standing in what could be outfield grass. In his left hand is a baseball glove, raised head high, containing the baseball he has just caught. His eyes are closed, which may be because he is not very practiced in playing catch or it may be from the bright sunlight that shines all down the left side of his body. There is no sign of a “cool suit,” though presumably he is wearing it.

While Lockheed clearly has a financial interest in whether or not to fund “Space Station Freedom,” we can also notice that the ad represents the standpoint of another interested party. This is the group of potential future Stevie Ropers – people who might benefit from Space Station Freedom technology in a way similar to how Stevie Roper benefited from the Apollo Space Program technology. Regardless of whether or not Lockheed is being cynical or opportunistic here in their representation of potential Stevie Ropers, their argument does provide a way for a group of people to have a voice who might otherwise be ignored. After all, this group of people may not even yet exist. And even if they do now exist they have no way of knowing who they are (since they will be defined by as-yet-undiscovered technologies that may help their conditions). So, Lockheed is making more than just an updated argument for domestic space technology spin-offs like Tang, Teflon, or microwave ovens. This is a clear appeal to include in the decision-making an otherwise disenfranchised group, a group who surely has a legitimate interest in the question of whether or not to fund Space Station Freedom. In terms of the socio-political sense of openness, this advertising strategy ought to be seen as a good move that improves the quality of the deliberative process.

But how does this means of inclusion affect the openness of deliberation in an epistemic sense? Here a more equivocal assessment is probably called for. On the one hand, representing the interests of these parties introduces for consideration an issue that might otherwise be overlooked or too easily dismissed – the issue of domestic benefits from space technology. This is not a topic that comes readily to mind when imagining the reasons for launching space stations into orbit around the Earth. On the other hand, the way in which these interests are represented may have some decidedly deadening consequences in terms of critical scrutiny. The very way in which this otherwise disenfranchised group of potential future Stevie Ropers is injected into the deliberations may discourage doubt or healthy skepticism. The personal story of Stevie Roper may deserve special weight in the social sense of highlighting a claim to participation in the decision-making, but it does not necessarily establish special privilege in any epistemic sense. Yet the one dimension easily bleeds into the other.

This story is an emotional appeal of sorts – embodied in the form of a story of a boy who just wants to go out and play baseball like all the other kids. Because the ad aligns Stevie Roper with Space Station Freedom, any critic of the project may well be reluctant to appear to be opposing Stevie Roper’s happiness. Or at least a reader might not carefully consider alternative ways of discovering Stevie Roper’s cool suit. The problem with this argumentative strategy is that a potential critic is easily projected to be callous and insensitive. This ad has lurking behind it a subtle message: Would you have let Stevie Roper die? Would you have denied him this small happiness? As a result, anyone considering the issues may be less likely to raise a question like the following: If we took the more than $100 billion that will go into developing and building Space Station Freedom and spent it directly on domestic technology development, would we maybe get Tang, Teflon, microwave ovens, and cool suits – and then some? Now, maybe that question can be asked anyway, but this ad surely makes it less likely that the question will be asked or pressed.

In any case, the point should be clear: socio-political openness and epistemic openness are not the same thing and they need not be complementary. Particularly under less-than-ideal conditions, where strategic tactics may need to be employed to manage defects in the circumstances of argumentation, a tension may arise between these two kinds of openness. How argumentative tactics and procedures manage that tension may be one of the important issues to consider when evaluating real-life arguments or when designing argumentative discourse to function in the real world.

REFERENCES
van Eemeren, F. H., & Grootendorst, R. (1992). Argumentation, communication, and fallacies. Hillsdale, NJ: Erlbaum.
Fleming, D. (1996). Can pictures be arguments? Argumentation and Advocacy, 33, 11-22.
Goodnight, G. T. (1982). The personal, technical, and public spheres of argument: A speculative inquiry into the art of public deliberation. Journal of the American Forensic Association, 18, 214-227.
Goodnight, G. T., & Hingstman, D. (1997). Studies in the public sphere. Quarterly Journal of Speech, 83, 351-370.
Habermas, J. (1989). The structural transformation of the public sphere. Cambridge, MA: MIT Press.
Jacobs, S. (2000). Rhetoric and dialectic from the standpoint of normative pragmatics. Argumentation, 14, 261-286.
Johnson, R. H., & Blair, J. A. (1994). Logical self-defense (1st U.S. ed.). New York: McGraw-Hill.
Schudson, M. (1997). Why conversation is not the soul of democracy. Critical Studies in Mass Communication, 14, 297-309.
Shelley, C. (1996). Rhetorical and demonstrative modes of visual argument: Looking at images of human evolution. Argumentation and Advocacy, 33, 53-68.
Walton, D. N. (1996). Argumentation schemes for presumptive reasoning. Mahwah, NJ: Erlbaum.
Willard, C. A. (1989). A theory of argumentation. Tuscaloosa, AL: U. of Alabama Press.