ISSA Proceedings 1998 – Truth and Argument


Truth is deeply complicit in argument wherever logic is, for independent of the purposes of different kinds of argument, in so far as they use standard logic they are compelled by its underlying theory of truth. And the notions of truth underlying the two giant contributions in the history of logic, that of Aristotle and that of the logicians preoccupied with the foundations of mathematics in the early twentieth century, rest on deep theoretical and even metaphysical assumptions that make them suspect as the underlying theory of a logic adequate to support the theory of argument as currently construed: that is, argument seen as the rational core of ordinary and specialized discourse of the widest variety of sorts. Such a theory of argument, with a clear empirical and practical component, cannot assume the usefulness of underlying images of logic drawn from rather different conceptions of how reason manifests itself in discourse.
First: as to the problems with the logical core. John Herman Randall, in his classic exposition of Aristotle, offers a complex view of the relationship between truth, logic and inquiry. The to dioti, the why of things, connects apparent truths, the peri ho, with explanatory frameworks, through the archai of demonstration, which serve as ta prota, the first things: a true foundation for apparent truths. Although Aristotle was more 'post-modern' than many of those who work in his tradition (the archai, after all, were subject-matter specific), the envisioning of archai as readily knowable if not known reflected a classic and overarching optimism about knowledge. This enabled Aristotle to graft a determinate logic onto the various indeterminacies inherent in much of inquiry.
Logic is central in dialogue as well: to dialegesthai, the premise-seeking activity that seeks to identify the appropriate archai of kinds of things. The theory of the syllogism, along with eristics, offers the basic tools of the logikos or dialektikos, one who thinks and questions.

When all works well, the result is the demonstrative syllogism, apodeixis, which shows the necessity of a that, a hoti, in light of the dioti, the cause, in relation to the archai. Whence the archai? Quoting Randall, by "'experience' of facts, by repeated observations, we become aware of the archai, the universal that is implicit in them." Citing Aristotle: "When the observation of instances is often repeated, the universal that is there becomes plain" (pp. 42-43). Such a crude inductivist epistemology has little appeal to moderns and offers little danger for modern views of inquiry, but Aristotle's logic remains within the normative core. That is perhaps even worse for understanding inquiry, for unlike the crude inductivism, which is quickly seen as too crude, his logic has both necessity and inherent plausibility. The result: the basic truth structure of his logic has been built into the normative structure of reasoning from his time till now.
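The shape of apodeixis can be glossed, anachronistically, in modern notation; the gloss is mine, not Aristotle's or Randall's. A demonstrative syllogism in the first figure (Barbara) runs from archai through a middle term to the hoti:

\[
\frac{\forall x\,(Mx \to Px) \qquad \forall x\,(Sx \to Mx)}{\forall x\,(Sx \to Px)}
\]

The necessity of the conclusion is purchased entirely by the exceptionless universality of the premises.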
The problem is how to distinguish the archai from among endoxa, the merely accepted opinions prevalent at the time. Again Randall: "It is nous, working with and in the midst of facts, working in the subject matter itself, that 'sees' the truth of the archai" (p. 44). Not in Platonic isolation, to be sure, but in the context of subject matter. But still, this noetic 'recognizing' shares with Plato's view a phenomenological (Randall calls it 'psychological' (ibid.)), rather than a logical, account of what it means to come to see the truth of archai.
Even given the primitive necessity of noetic recognition of archai, the archai must still prove their logical worth by being the framework within which a subject matter becomes truly known. Archai yield the conceptual structure that is determined by syllogistic reasoning from them to consequences. As Randall puts it: '"Science," episteme, is systematized, "formalized" reasoning; it is demonstration, apodeixis, from archai … [it] operates through language, logos; through using language, logismos, in a certain connected fashion, through syllogismos' (p. 46). Syllogismos points back to the basic constraint on nous: that it see beyond the accidental and the particular, that it deal with the essential, the ti esti; and so syllogism deals with what all of a kind have in common.
Syllogistic reasoning within episteme deduces the particular from what all particulars of the kind have in common, and in dialectic looks at proposed archai or endoxa through the strongest possible lens: counter-examples as understood in the traditional sense of strict contradictories, systematized, then canonized as the square of opposition.
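For readers who want the structure before them, the square can be set out, again as an anachronistic gloss, in quantificational dress (this is the modern, Boolean reading discussed below):

\[
\begin{array}{ll}
\textbf{A:}\ \forall x\,(Sx \to Px) & \textbf{E:}\ \forall x\,(Sx \to \neg Px)\\
\textbf{I:}\ \exists x\,(Sx \land Px) & \textbf{O:}\ \exists x\,(Sx \land \neg Px)
\end{array}
\]

A and O are strict contradictories, as are E and I; hence a single O-instance suffices to refute an A-claim outright. That all-or-nothing structure of counter-example is the issue in what follows.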
All of this is so familiar that it seems hardly worth recounting, but without a deep conceptual understanding of the context, the problem with the syllogism, and particularly with the theory of truth that underlies the practice of offering counter-examples, will not be clear.

The focus on episteme, on theoria, places the bar high for those who would propose archai. The 'inductive' epistemology of concept formation, along with the noetic interpretation of their apperception, presupposes that human beings can know reality with an immediacy that seems silly given the course of scientific discovery over the past several centuries. Too much conceptual water has gone under the bridge to think that concepts are to be seen clearly within percepts. Rather, the conceptual frameworks that human beings have elaborated, modified and discarded have been multifarious, and extend far beyond the imaginative capabilities of Aristotelian views that take the perceptually presented as representative of underlying realities. Once the enormous difficulty of the task of finding the conceptual apparatus that will undergird a true picture of reality is realized, Aristotle's demand that concepts hold true without exception becomes a serious drag on inquiry. Yet it still prevails, built into the very meaning of logic as used.
Why this is so is in part because of the power of the next major advance in logical theory. Syllogism, still the only completed science as late as Kant, took on a new life when the foundations of mathematics became the central concern of theorists. The historical connection is not hard to trace, for from Plato on mathematics was seen as the prototype of knowledge, and its truths a model for the outcome of inquiry. Galileo and Newton linked mathematics to science, and so it is no surprise that the logical model, based on the needs of mathematics, retained its grasp on theorists of science as recently as logical empiricism. But there is more to that story, for the enormous advances of the twentieth century took the rudimentary mathematization of syllogism by Boole and others to a theory whose major achievement, completeness, became a model for both what logic is and how it should be understood.

The magnificent achievement of Russell and Tarski offered a model for understanding logical inference and an elaboratable structure, quantification theory, which, congruent with much of the syllogistic, offered a clarity of understanding that surpassed anything dreamt of by centuries of logicians. The Aristotelian core remained, now rethought in terms of extensional interpretations of function symbols that offered a new grounding for the all-or-nothing account of argument built into the square of opposition. The Boolean interpretation of Aristotle's quantifiers retained the high demand that universal claims be rejected in light of a single counter-instance, as did the modern semantics of models within which a natural theory of truth was to be found. Mathematizing the clear intuition of correspondence, Tarski's theory of truth gave the stability needed to yield vast areas of mathematics, and even offered a precious few axiomatizations of physical theory. The price was that truth was relativized to models, yet there was no reason to think that any of the models in use in science were true. This remark requires clarification.
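The core of Tarski's mathematization of correspondence can be stated compactly, in one familiar textbook form. Truth is relativized to a model \(\mathfrak{M}\) and defined recursively over satisfaction, with clauses such as

\[
\mathfrak{M} \models \varphi \land \psi \iff \mathfrak{M} \models \varphi \text{ and } \mathfrak{M} \models \psi, \qquad
\mathfrak{M} \models \forall x\,\varphi \iff \mathfrak{M} \models \varphi[x/a] \text{ for every } a \text{ in the domain.}
\]

A sentence is logically true when it holds in every model; completeness ties this semantic notion to provability.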
Since the optimistic days in Greece, when the early meta-analysis was innocent of many real examples, the claim that archai are "noused" from particulars with ease seems a historical curiosity, irrelevant to human inquiry. For the history of inquiry in the sciences, contrary to Aristotle, showed that the identification of archai is no easy thing. Rather, centuries of scientific advance have shown the utility of all sorts of truish or even downright false models of phenomena. Concepts, and the laws, generalizations and principles that cashed them out into claims, have shown themselves to be mere approximations to a receding reality. As deeper elaborations of connections among concepts, and underlying explanatory frames, have characterized successful inquiry, truth in any absolute sense becomes less of an issue. The issue is, rather, likelihoods, theoretic fecundity, interesting plausibility and the like. The operational concepts behind these, confirmation and disconfirmation, nonetheless retained, in the once-standard philosophical reading (Hempel and the rest), the absolutist core that Aristotelian logic exemplifies, amplified by quantification theory. Even Popper saw falsification as instance disconfirmation.
Much work since then has offered a more textured view; I think here of Lakatos and Laudan. Students of science no longer see the choice as between deductivism, standardly construed as an account of scientific explanation, and some Feyerabendian a-logical procedure that disregards truth. They see, rather, a more nuanced relation between theory building and modification. Argument theorists and informal logicians should be thrilled at this result, for it opens the door for what they do best: the analysis of complex arguments. But not if they are crippled by the very logic that has dominated the discussion so far.

Truth, one of the key meta-theoretical underpinnings of logic, along with entailment and relevance, looks rather different when we move from traditional accounts to scientific practice. Let's take an example.
Second: a constructive theory of truth
If you ask a sane, moderately informed person what the world is really made of, in just the general sense in which the Greeks might have asked, the answer is something like "atoms." Let's start there. At the core of modern science stands the Periodic Table. I take as an assumption that if anything is worth considering true of all of the panoply of modern understanding of the physical world, it is that. But why? And what will we learn by changing the paradigm?

The Periodic Table stands at the center of an amazingly complex joining of theories, at levels of analysis from the most ordinary chemical formula applied to industrial needs to the most recondite, particle physics. The range of ordinary things, from electrical appliances to bridges, has been interpreted in sequences of models, developed over time, each responding to a particular need or area of scientific research. Examples are no more than a listing of scientific understanding of various sorts: the understanding of dyes that prompted organic chemistry in Germany in the late 19th century; the smelting of metals and the improvement of metal kinds, e.g. steel; the work of Faraday in early electric theory; the development of the transistor and the exploration of semi-conductors. This multitude of specific projects, all linked empirically to clear operational concepts, has been unified around two massive theoretic complexes: particle physics and electromagnetic wave theory. The deep work in science is to unify theories. The mundane work in science is to clarify and extend each of the various applications and to clarify and modify existing empirical laws, in two fashions: (1) by offering better interpretations of empirical and practical understanding as the underlying theories of their structure become clearer; (2) by strengthening connections between underlying theories so as to move towards a more coherent and comprehensive image of physical reality, as underlying theories are modified and changed. On my reading of physical chemistry, the Periodic Table is the linchpin, in that it gives us, back to Aristotle again, the basic physical kinds.
We need a theory of truth that will support this. And, surprisingly perhaps, I think the image is just what current argumentation theorists need as well. Since argument is not frozen logical relations but interactive and ongoing, we need a logic that supports dialectical advance. That is, we need a dynamics of change rather than a statics of proof. We need to see how we reason across different families of considerations, different lines of argument, that add plausibility and affect likelihoods. Arguments are structured arrays of reasons brought forward; that is, argument ranges across an indefinite field of claims and counter-claims. These claims are complex and weigh differently as considerations, depending on how the argument moves. So we need a notion of truth that connects bundles of concerns, lines of argument, and to different degrees.

Back to quantification theory. Quantification theory was developed in order to solve deep problems in the foundations of mathematics. And the standard interpretation of mathematics in arithmetic models proved to be a snare. What was provable is that any theory that had a model had one in the integers, and models in arithmetic became the source for the deepest work in quantification theory (Gödel, most obviously). But the naturalness, even ubiquity, of a particular model kind did not alter the fact that truth in a model could only be identified with truth when a model of ontological significance was preferred. This seems to have escaped Tarski's followers, who spent little effort in exploring the difference. Now, truth in a model is an essential concept. Without it we have no logic. But the identification of truth in a model with truth just reflects the metaphysical and epistemological biases of the tradition, together with the univocal character of mathematics as it was understood then. If I am right, it is not truth in a model that is the central issue for truth, but rather the choice of models that represent realities. And this cannot be identified with truth in a model, for it requires that models be compared.
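The provable result alluded to is, in one familiar form, the Löwenheim–Skolem theorem:

\[
\text{If a countable first-order theory } T \text{ has a model, then } T \text{ has a model whose domain is a set of natural numbers.}
\]

The theorem guarantees an arithmetic model; it says nothing about whether that model, or any other, represents reality.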
To look at it another way, if we replace mathematics with science as the central paradigm from which a logical theory of truth is to be drawn, the identification of truth with truth in a model is severed. For there is no model in which scientific theories are proved true. Rather, science shows interlocking models connected in weird and wonderful ways. The reduction rules between theories are enormously difficult to find and invariably include all sorts of assumptions not tied to the reduced theory itself. The classic example is the reduction of the gas laws to statistical mechanics. The assumption of equiprobability in regions is just silly as an assumption about real gases, but it permits inferences to be drawn that explain the behavior of gases in a deeply mathematical way, and in a way that got connected to the developing atomic theory of the time, much to the advantage of theoretical understanding and practical application.
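A schematic version of that reduction shows how a 'silly' assumption earns its keep. Treating a gas as N point molecules of mass m bouncing elastically in a box, with velocities distributed equally across regions and directions, the pressure on the walls works out to

\[
PV = \tfrac{1}{3}\, N m \langle v^2 \rangle,
\]

and identifying mean kinetic energy with temperature, \(\tfrac{1}{2} m \langle v^2 \rangle = \tfrac{3}{2} k T\), recovers the empirical gas law \(PV = NkT\). The premises are false of real gases; the inference is nonetheless explanatorily golden.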

What are the lessons for the theory of truth? We need to get rid of the univocal image of truth, that is, truth within a model, and replace it with the flexibility that modalities both require and support, that is, truth across models. We need the metatheoretic subtlety to give mathematical content to likelihoods and plausibilities; a theory of the logic of argument must address the range of moves that ordinary discourse permits as we qualify and modify in light of countervailing considerations. These cannot be squeezed into the Procrustean bed of all-or-nothing construals of logical reasoning. Formal logic has been captured by Tarski semantics. It offers a clear analogue to the notion of correspondence, but at an enormous price. The power of Tarski semantics, its yield being completeness (all formally valid proofs yield logically true conditionals), requires that the models be extensional: all function symbols in the formal language are definable in terms of regular sets, that is, sets closed under the standard operations of set theory, and definable completely in terms of their extensions.
The problem, of course, is that the overwhelming majority of both ordinary and theoretic terms have no obvious extensional definition, and the most interesting functional concepts are intensional (causation, in all of its varieties). The clue is the formal solution to modalities (necessity, possibility, and variants such as physical possibility): that is, relationships among worlds, as in Kripke semantics. This moves the focus from truth within models, extensionally defined, to relationships among selected worlds. Such relationships may vary widely, each one specific to a relationship, as in the analysis of physical causality in terms of a function that maps onto physically possible worlds (worlds consistent with relevant aspects of physical theory). Little can be said about the general restrictions on mappings across worlds, for inter-world relationships, if we take the intuition behind the account of physical causality, are broadly empirico-historical. That is, what makes a world physically possible is relative to the laws of physics, interpreted as restrictions on functions across worlds.
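In Kripke's framework, in its standard form, the modal clause reads

\[
\mathfrak{M}, w \models \Box\varphi \iff \text{for every } w' \text{ such that } wRw',\ \mathfrak{M}, w' \models \varphi,
\]

and physical necessity is captured by restricting the accessibility relation \(R\) to worlds consistent with the relevant physical laws. Truth-conditions now turn on relations among worlds, not on extensions within a single model.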
The lack of a logical decision procedure, a consequence of the inter-model relations being empirical in the world-historical sense, need not make us despair of a solution to the problem of truth in principle. For although the essential details of the model require an empirico-historical investigation of concepts in use (the functional relations that are concretized in warrants that support entailments, and the procedures that determine the relevance of claims and counter-claims), the structure of logical possibilities can be furnished a priori.
A solution in principle becomes possible when we look beyond truth in models to truth across models. Within models something very much like the standard interpretation holds, for it enables us to refute our models as we find disconfirming instances. (I say 'very much like' because I don't want to rule out holding out, even within a model, against disconfirmation. But the clear case of classic contradiction is within models: think of why all men are mortal.) But across models we need something very different indeed.
As mentioned, the account I offer has an affinity with Kripke's solution to the problems of modalities. We look to functional relations across models, and to the history of those relations over time and in relation to their logical surround. What I will try to do is induce you to imagine a mental model. For those interested I have some copies of a precise mathematical description. Bereft of the mathematics, a mental image must suffice.

Think, if you will, of physical science as some beautiful array of tubing of different thicknesses and different colors, the color infusing the tubes, arranged vertically before you. And see them with vessels at the joins of tubes, gradually changing color. Each individual vessel, can you imagine them, changes color as the colors from the various tubes from which it feeds alter the composition of the color in the vessel. The 'vessel' is a complex composite function of the tubes to and from which it draws. What is this strange image I ask you to envision?
Truly, the vessels are models drawn from our scientific concepts, the most general models at the top; at the bottom, models of data: observations, if you will. Although the models are connected, they are individuatable. The richest space of vessels (many vessels, many changes in color, myriad connections) is in the middle of the array. I think here of systems of chemical formulas; the aggregate laws of medium-level physics (rigid body dynamics, perhaps); models of DNA; computer models of weather systems and other complex phenomena: nodes in the array to which and from which connections are made. Color fields are systems of principles, laws, generalizations and other regularities, connected by inference, that is, by functions that map models onto models. But that is to introduce the mathematics. An easier understanding is that the connecting tubes are the conduits of evidence: confirmation from below, systematic support from above, although that is a misleading simplification, since higher-level theories generate new empirical support for the theories they explain (reduce). The 'colors' change with the results of inquiry, as the relationship between the various models becomes clearer, and as the evidence from reducing theories and empirical confirmation alters the evidentiary weight flowing to and through the various theoretic nodes.
Truth becomes a property of the field. A few suggestions. First, the crucial empirical dimension, for this is science after all. There is a set of privileged models: empirical models of the data. What makes science empirical is the constraint that all models have connections with empirical models. Second, for models at any level short of the highest there may be found higher-level models. So first-level models of the data may be joined through a more theoretical model. Theoretic models take their epistemic force first from the empirical models that they join, and then, more importantly, from the additional empirical models that result from the theoretic joining, in excess of the initial empirical base of the models joined.
Truthlikeness is defined in terms of considerations such as: the increase or decrease in the complexity of particular models over time; the depth with which any model is supported by other models (the height on the vertical of any set of nodes (vessels) connected by tubes), at a time and as a function of time; the breadth, the horizontal width with which a supporting model is represented in the field of lower-level, more empirical, models, at a time and as a function of time; the persistence of a set across the array. In terms of the visual image: vessels whose color tends to diffuse across the system.
Gradient of color, literally in a physical or computer model of the array, is a metric across the field. Analogically, gradient of color stands for the changing weights assigned to models as they interact. The metric correlates with evidence of varying degrees of robustness flowing from different sources. Truthlikeness, in complex ways, becomes a function of the structure itself.
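Since the precise mathematical description is not reproduced here, a toy computational rendering may help fix the image. The sketch below is an illustrative construction only, not that description: every name, number and update rule in it is hypothetical, evidence flows upward only for brevity, and 'truthlikeness' is reduced to a single diffusing weight per vessel.

```python
# Toy rendering of the vessel-and-tube image as a weighted directed graph.
# Hypothetical sketch only: nodes are models, edges are evidential conduits,
# and each node's weight (its "color") blends toward the vessels feeding it.

from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    level: int                    # 0 = empirical data models; higher = more theoretic
    weight: float = 0.5           # current evidential "color" in [0, 1]
    feeds: dict = field(default_factory=dict)  # source name -> conduit strength

def propagate(models, rounds=10, retain=0.6):
    """One color-diffusion pass per round: each vessel's weight becomes a
    blend of its own weight and the conduit-weighted average of its sources."""
    for _ in range(rounds):
        updates = {}
        for m in models.values():
            if m.feeds:
                total = sum(m.feeds.values())
                inflow = sum(models[n].weight * s for n, s in m.feeds.items()) / total
                updates[m.name] = retain * m.weight + (1 - retain) * inflow
        for name, w in updates.items():
            models[name].weight = w

# A miniature array: two data models, one mid-level model, one high theory.
models = {m.name: m for m in [
    Model("spectra-data", level=0, weight=0.9),
    Model("reaction-data", level=0, weight=0.8),
    Model("periodic-table", level=1, feeds={"spectra-data": 1.0, "reaction-data": 1.0}),
    Model("quantum-theory", level=2, feeds={"periodic-table": 1.0}),
]}
propagate(models)
for m in models.values():
    print(f"{m.name:15s} level={m.level} truthlikeness~{m.weight:.2f}")
```

Run, the mid-level vessel drifts toward the colors of the data vessels feeding it, and the high theory follows; a fuller model would let support flow downward as well, as the previous paragraphs require.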
Pretty dense, but turn the image to the example. The Periodic Table, up pretty high and toward the center, connects with the vast domain of chemistry, physical and organic, which, in association with roughly parallel theoretic clusters (mechanics, statics and dynamics, electromagnetic wave theory), explains just about everything we do and can do in the physical world in the last century, and which has increased in its explanatory power as individual theories have been expanded and refined and inter-theoretic connections made. There is logic there, dare we deny it? Students of each field learn translation procedures to and from observable phenomena, to and from related theories. The connections are often the result of higher-order theories. Above the Periodic Table: particle physics, quantum theory, quantum electrodynamics, general relativity. These are the massive contributions of 20th-century physics. Do we deny that there is logic there?
By the way, there is no requirement that the highest-order models be univocal (that is the lesson of indeterminacy). Nor that all model chains (paths up the vertical) go particularly high. But since higher-order theories deepen the support, we like connections and go as high as we can: the tip of the Einstein cone, TOEs (theories of everything).
There is a logic, but it is not the all-or-nothing logic of Aristotle and the mathematicians. An argument is not as weak as its weakest link, nor are really weak links much trouble at all. (Think of all of the relatively unsupported empirical phenomena that are part of science without having any clearly seen connections to theories. Nobody changes organic chemistry when the latest results on cholesterol in the diet are reported.)
Each member of the array supports the others, but they hang separately. That is, particular evidentiary moves affect each model differently. In the immediate neighborhood (that is actually a technical expression, but think of the vessel image again and picture tubes that connect directly to a vessel), inquiry affects models in the most intimate way; a near relative of standard logic probably works fine here. But there are relations with other theories, consequences for related theories. How does change percolate through the system? These are the questions that the shift from a mathematical to a scientific paradigm of truth affords.

There are at least two uninteresting sorts of truths: statements of the 'cat on the mat' variety and logical truths. Everything else relies heavily on movements across inference sets. Sentences ranging from 'the light is red' to 'John has pneumonia,' in their standard occurrences, are warranted as true (or likely, or plausible, etc.) because countless other statements are true (or likely, or plausible, etc.). To verify each of these, or any other interesting expression, is to move across a wide range of other statements connected by underlying empirical and analytical theories (systems of meaning, generalizations, etc.). All of these have deep connections with observable fact, but more importantly they are connected by plausible models of underlying and related mechanisms. These include all sorts of functional connections that enable us to infer from evidence to conclusion, and to question, in light of apparent inconsistencies, connected to indefinitely elaborate and elaboratable networks of claims and generalizations of many sorts. For most estimations of the truth of a claim offer a rough index of our evaluation of the context that stands as evidence for it. Under challenge, that body of evidence can be expanded almost indefinitely, all of it still governed by the available meaning postulates and inference tickets cited, assumed, or added as inquiry and argumentation proceed. And without a logic adequate to understanding the give and take of counter-example and claim, argument and argumentation fall asunder.

My claim, for now three presentations at Amsterdam, is that real argument will be better understood if the best arguments are seen as the prototype: what I call argumentation in regularized discourse communities. What I have tried to show here is that looking at these also yields a model-theoretic understructure for truth in logic.

REFERENCES
Randall, J.H. (1960). Aristotle. New York: Columbia University Press.
