ISSA Proceedings 2002 – Expert Advice And Discourse Coupling: Context-Dependent Validation Of Model-Based Reasoning

Abstract
Expert judgements often involve a coupling of different discourses, in the sense that conclusions from one discourse are transferred to another. Results from one scientific field are brought together with results from other scientific fields, and are applied to yet another field, namely that of a practical problem at hand.
Insofar as significant uncertainties are involved (as is almost always the case in practical problem solving), validation within these different discourses may differ greatly. Sciences differ in the way claims are validated. Far more significant differences arise in the transfer to practical problem solving, since accepting or rejecting assumptions there depends upon the consequences of those assumptions later turning out to hold or not.
I propose to explain some very common patterns of incomplete or fallacious reasoning in expert advice, patterns that involve implicit shifts of the burden of proof, as failures to notice these differences in validation context. Furthermore, I suggest that by taking into account the possible consequences of making a certain assumption (and also the evaluation of those consequences) the quality of discussions involving expert advice can be considerably improved.

1. What is so special about expert advice?
Expert advice plays a prominent role in contemporary (western) societies. Consulting experts has become customary for almost any significant decision beyond the personal sphere (and even in the personal sphere a host of counselors is ready to offer their services). It has long been known that this dependency raises a number of questions (Benveniste, 1972; Fischer, 1990). Is expert advice always directed at the common good? Have experts not become an elite that has taken over much of the effective decision-making power from those who should legitimately make the decisions? Has the involvement of experts not resulted in a bias towards technocracy and reductionism? Has it not reinforced forms of bureaucracy?
From the point of view of argumentation studies, the involvement of expert advice also introduces specific problems. A non-expert appealing to expert opinion cannot take full responsibility for its adequacy: the non-expert is in principle incapable of checking every link in the expert’s reasoning chain. This “black box” aspect implies a quality control problem: on what grounds can the non-expert assume that the expert’s opinion can be trusted? As far as the matter lies beyond the arguer’s cognitive competence, the non-expert arguer has to resort to some kind of source credibility argument. And this leads directly back to the general questions concerning expert advice mentioned above.
These questions concerning the reliability of expert advice have become increasingly pressing since it became clear that the quality of expert advice is threatened not only by simple inaccuracy on the part of the expert, but also by the structures of power and influence in which the advisory process is embedded. Scandals of biased, partisan or even outright corrupted expertise seem to become more and more prominent (Rampton & Stauber, 2001).

The intricate implications that the inherent asymmetry between expert and non-expert has for argumentation appealing to expert opinion have been extensively dealt with by Walton (1997). In this paper my primary focus will be on a different aspect (which will turn out to be strongly related to the issues of asymmetry and quality control), namely the fact that scientific expert advice usually involves the coupling of different discourses. In the first place, practical problems for which expert advice is sought often involve the domains of various forms of expertise. In drawing conclusions for actual problems, results from these different fields of expertise have to be combined. Second, applying scientific results to a practical situation means that results from the discourse of one or more scientific fields have to be transferred to a different context, namely that of the practical problem at hand. As I will analyse in more detail below, the validation criteria in these different domains will in general not coincide (cf. Birrer, 2000). This means that translation steps are necessary. Unfortunately, differences in validation context are often overlooked. They tend to slip unnoticed through the loopholes of opacity created by the asymmetry between expert and non-expert, and this is further reinforced by the persistent preference of many scientists for universalism and by their fear of relativism. One of my objectives in this article is to demonstrate that it is possible to give up this simplistic form of universalism without falling into the trap of extreme relativism. I also intend to show how these differences in validation context could in principle be accounted for, and how such an account provides a systematic way to examine these differences in order to improve the quality of argumentation involving expert opinion. At the same time, quality control and quality improvement of argumentation cannot be enforced by fixed formal rules alone; they also need open-ended feedback loops of non-formalized human judgement as checks and balances. The latter could be an interesting breeding ground for sociological inputs in argumentation studies.

My main example in this paper is drawn from the use of mathematical models, a more or less paradigmatic case of modern scientific expertise. Models represent an abstraction from reality or experience to some kind of formal structure, a device that makes it possible to draw some new (as yet unobserved) conclusions about that reality or experience. It is this use of abstraction that constitutes the crucial argumentative step. In the following, I will be talking mainly about empirical science using mathematical models; many of my conclusions, however, hold for any case of formal conceptualisation, mathematical or otherwise.

2. Validation under uncertainty and the coupling of discourses
Though science has answered quite a number of questions in a more or less definitive way, we are still facing many practical questions to which science cannot give an answer with any considerable degree of certainty. These are of course precisely the questions that are most debated, and therefore most relevant to argumentation studies. In most policy areas, whether environmental issues such as greenhouse gases or social policy, debates tend to revolve around cause-and-effect relations that cannot be predicted with high confidence (and that often cannot even be established post hoc).
Fundamental uncertainty (i.e., uncertainty that is not due to a phenomenon with a known probability distribution) is in itself by no means an uncommon phenomenon in science. As long as a certain question is not yet definitively resolved, various hypotheses and explanations usually circulate, and only continued research may one day provide us with a final answer. An individual researcher is free to favour one particular explanation (in fact, in designing experiments one has to focus one’s effort, usually on the hypothesis one thinks most likely to be true). And though the stakes in making the right guess may be enough to arouse some passion, they are limited to intangible rewards such as honour and prestige; no lives are in danger, nobody will get physically hurt. In real-life decision making, all this is very different. Issues of health policy or environment may affect the lives of many people in a radical way. The costs and rewards are not, as in science, simply institutionally defined; they come to us from the real world. Nor are they always a matter of individual choice: some decisions have to be made collectively, and the consequences of such a decision therefore have to be somehow acceptable to the collective. In real-world problems, whether we want to act upon an uncertain assumption is very much dependent upon the consequences it would have if that assumption later turned out to fail, as well as the consequences if it turned out to hold. When the consequences of failure would be very bad, we will be less inclined to accept that assumption as valid; we may not even be prepared to accept the slightest chance of failure, however considerable the benefits in case it holds. On the other hand, when the consequences of failure are insignificant, the benefits when the assumption holds might lead us to accept it.
The main thesis that I want to propose in this paper is that the acceptability of judgements under uncertainty depends strongly upon the consequences that can be expected when such judgements later turn out to be right or wrong, and upon the normative evaluation of those consequences. This dependence of ‘truth’ under uncertainty upon consequences runs counter to the intuition of most scientists. They tend to believe in universality: a statement is true or not, irrespective of the consequences. There is nothing wrong with this point of view as long as no uncertainty is involved; but when significant uncertainties are involved, and the consequences of a failing hypothesis are considerable, this perspective becomes entirely inadequate. Nevertheless, this dependency on consequences is often completely ignored. Assumptions that are acceptable in one discourse are thoughtlessly transferred to another discourse without a proper re-validation according to the consequences that prevail in that new context. Many fallacies involving scientific expertise can be analysed as resulting from a disregard of differences in validation context.
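The point can be made concrete with a small decision-theoretic sketch (my own illustration, not part of the original debate; all probabilities and loss figures are invented). It shows how the very same estimate of an assumption’s probability can make acting on it reasonable in a scientific context, where failure merely costs wasted effort, and unreasonable in a policy context, where failure is catastrophic.

```python
# Illustrative sketch (not from the paper): the acceptability of acting on an
# uncertain assumption depends on the consequences of being wrong, not only on
# the probability that the assumption holds.

def expected_loss(p_holds, loss_if_holds, loss_if_fails):
    """Expected loss of acting on an assumption that holds with probability p_holds."""
    return p_holds * loss_if_holds + (1 - p_holds) * loss_if_fails

P_HOLDS = 0.9  # the same 'best' estimate in both contexts: probably correct

# Scientific discourse: a failed working hypothesis mainly costs wasted effort.
science = {"act": expected_loss(P_HOLDS, loss_if_holds=0, loss_if_fails=10),
           "reject": 5}   # cost of pursuing a less promising line instead

# Policy discourse: a failed assumption may mean a health or environmental disaster.
policy = {"act": expected_loss(P_HOLDS, loss_if_holds=0, loss_if_fails=10_000),
          "reject": 50}   # cost of precautionary measures that prove unnecessary

for context, losses in [("science", science), ("policy", policy)]:
    best = min(losses, key=losses.get)
    print(f"{context}: expected losses {losses} -> prefer '{best}'")

# With identical probabilities, acting on the assumption is reasonable in the
# scientific context but not in the policy context: validation is consequence-dependent.
```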
My hypothesis (which I will illustrate in this article) is that significant differences of expert opinion can often (if not always) be reconstructed in terms of either different consequences being considered, or different normative evaluations of those consequences (or both). If this hypothesis is true, then differences of opinion can be explained without taking recourse to extreme relativism with regard to ‘facts’.

3. Case: the ‘limits to growth’ report
The ‘Club of Rome’ was a group of industrialists and intellectuals formed in the late 1960s and concerned about global problems. They were interested in the use of computer models to investigate such problems on a world scale, and the relations between various types of problems and domains, such as the economy, population growth and pollution. Jay Forrester (who had already established some fame with integrated computer models of complex phenomena such as urbanisation) made a first draft of a model, which was then elaborated by a team headed by Dennis Meadows. In 1972 the team produced a report that was published in many countries all over the world (Meadows et al., 1972). The conclusion of the report, based on model studies, was that shortly after 2000 major crises would occur in several parts of the world with respect to issues like pollution and food supply.
Though the warnings of disaster and the call for reflection met with approval from various sides, there was also criticism of the methodological basis. It was pointed out, for example, that many parts of the model lacked data for sufficient testing and included insufficiently supported assumptions, and that certain aggregations led to serious misrepresentation. The most elaborate instance of such critique came from the Science Policy Research Unit of Sussex University, which produced a detailed analysis of various parts of the model by specialists in the respective fields. In the book that collected these analyses (Cole et al., 1973) the editors also included a reply by the Meadows group (Meadows et al., 1973). It is this reply that I want to focus on in my analysis.

In this reply, the following crucial lines of reasoning can be identified:
1. Decisions on the basis of an explicit model are better than intuitive decisions[i].
2. If we use a model, we should use the best model[ii].
3. Those who want to criticize a model should propose a better one[iii].

A lot can be said about these premises. For instance, what is meant by a ‘better’ or the ‘best’ model[iv]? In this article my main focus will be on (2).
Let us for a moment accept the authors’ assumption that it can be decided which of the available models is the ‘best’. Given the very high complexity of the modeled domain and the state of the modeling art, even this ‘best’ available model will be very remote from a faultless description of reality, and its predictions will be far from reliable. Other models and outcomes may be slightly less likely, but they can certainly not be dismissed as insignificant. In science, one could imagine a scientist deciding to explore and elaborate the most promising model first and to ignore the other possibilities for the time being. For real-life decisions, on the other hand, the situation is very different. Outcomes other than the ones predicted by the ‘best’ model should certainly be taken into account as well. In fact, a decision maker has to consider all possible outcomes (and the estimated likelihood of each of them). Basing strategies on the most likely scenario only, thereby ignoring all other possibilities even if their likelihood is only slightly smaller, would be highly irresponsible. It is precisely the conflation of these two very different contexts that can make (2) look very plausible or even obvious at first glance, whereas on second thought its fallacious character is revealed.
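The difference between the two contexts can be sketched as follows (again my own illustration; the model names, likelihoods and damage figures are invented). A scientist may provisionally elaborate only the most promising model, whereas a decision maker has to weigh the projected outcomes of all candidate models by their estimated likelihood, including the less likely but far more damaging ones.

```python
# Hypothetical ensemble of rival models, each with an estimated likelihood of
# being (roughly) right and a projected outcome, here a damage figure.
candidate_models = [
    {"name": "model_A ('best')", "likelihood": 0.5, "projected_damage": 10},
    {"name": "model_B",          "likelihood": 0.3, "projected_damage": 40},
    {"name": "model_C",          "likelihood": 0.2, "projected_damage": 500},
]

# 'Scientific' shortcut: elaborate only the most promising model for the time being.
best = max(candidate_models, key=lambda m: m["likelihood"])
print("best model only:", best["name"], "-> damage", best["projected_damage"])

# Decision-making view: every possible outcome counts, weighted by its likelihood.
expected_damage = sum(m["likelihood"] * m["projected_damage"] for m in candidate_models)
worst_case = max(m["projected_damage"] for m in candidate_models)
print("likelihood-weighted damage:", expected_damage, "| worst case:", worst_case)

# The 'best' model alone suggests modest damage (10); the full ensemble yields an
# expected damage of 117 and a worst case of 500 that a responsible decision maker
# cannot simply ignore.
```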

4. Further analysis
The line of reasoning presented above is actually very common as a defense of models and modeling results. It can be seen as a form of reversing the burden of proof (cf. also my remarks in note [iv]). Whereas one might argue that a rule saying that one should not criticize a theory unless one has a better one is unreasonable already within the discourse of science, it would definitely be misguided to base real-life decisions on only one possible scenario among many others. Complex modeling such as that used in the ‘Limits to growth’ study involves many large uncertainties. As mentioned before, there may be insufficient data for testing, and aggregation may lead to misrepresentation. Usually, only highly imperfect models are available. When models from different domains, such as economy and the natural sciences, are coupled, the combined result cannot be attributed to one particular approach or theory anymore; this makes their validation even more difficult. With the knowledge of today we might add that nonlinearity may generate system behaviour that is highly unpredictable, and that nonlinear models are notoriously hard to test. Under such conditions, there is a great danger that all kinds of implicit assumptions of the modelers creep in, untracked in the complexity of the modeling process. As a matter of fact, Thissen (1978) showed several years later that the complex model of the ‘Limits to growth’ study could be simulated with a very simple model containing only a few equations and variables. Many variables and equations in the original model turned out to be redundant in the sense that they did not affect the outcomes in any significant way. The crises that the model predicted simply originated from the fact that certain variables were assumed to grow exponentially and would necessarily hit ceilings that were likewise assumed. The main issue in the context of this paper is not whether these assumptions were reasonable or not; the point is that the crucial role of these assumptions in arriving at the conclusions was not made clear. The conclusions seemed to derive as apodeictic outcomes from a big, impressive computer model. Stories like these are not uncommon in complex modeling; see, for another example, the discussion of the IIASA energy model in Keepin and Wynne (1984).
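The mechanism Thissen identified can be caricatured in a few lines (a toy reconstruction of my own, far simpler than either the World3 model or Thissen’s reduced model; the growth rate, ceiling and starting values are invented). Once a variable is assumed to grow exponentially towards an assumed fixed ceiling, the timing of the ‘crisis’ follows almost entirely from those two inputs.

```python
# Toy reconstruction (not from the paper) of the mechanism described above:
# a quantity growing exponentially against a fixed ceiling produces a 'crisis'
# whose timing follows from the two assumptions, not from the many other
# equations of a large model.

GROWTH_RATE = 0.02   # assumed 2% annual growth of demand
CEILING = 100.0      # assumed fixed limit (e.g. a resource or absorption capacity)
demand, year = 20.0, 1970

while demand < CEILING:
    demand *= 1 + GROWTH_RATE
    year += 1

print(f"demand exceeds the assumed ceiling around {year}")
# With these invented numbers the 'crisis' arrives around 2052; doubling the
# growth rate or halving the ceiling shifts the date, showing that the conclusion
# is carried almost entirely by the two assumed inputs.
```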

One might ask whether the argument by Meadows and his co-authors does not rest on an implicit appeal to what today we would call the precautionary principle: if we have indications that we might be entering a scenario in which something goes seriously wrong, we should take preventive action, even if the evidence presently available does not yet give us final proof that it will actually happen. The precautionary principle today plays an important role in issues such as the greenhouse effect and many others. However, it turns out that similar shifts of the burden of proof also occur in the reverse direction, that is, running counter to the precautionary principle. In issues such as the risks posed by applications of genetic modification, one can often observe the defense that those risks have not yet been observed and therefore cannot be assumed to exist. Though the lack of concrete observations is not very surprising for such a new technology, and does not seem a particularly strong argument for ruling out the possibility of risk, proponents of these new technologies often treat the issue of risk as a scientist would treat someone who claims that unicorns exist. In the case of unicorns, the scientist might say: then bring me a unicorn, I will examine it to see whether it is not a fake, and if it is real I will believe you. Some arguments on risk seem to follow the same pattern: risks can be said to exist (and can legitimately be taken into account) only if there have already been observations that the risk has materialised, or at least observations of mechanisms that directly imply the existence of such risks (cf. Birrer & Pranger, 1995). All these instances of (failing) argumentation can be explained in the very same way: in science, one is used to turning uncertain assumptions into preliminary hypotheses, and one can afford to do so because the consequences of an assumption later turning out to be wrong would not be too dramatic; this habit is then thoughtlessly transferred to the discourse of practice, where these consequences are very different. It is the widespread belief among scientists in universal truth that makes them prone to this fallacy (aided, no doubt, by a certain amount of wishful thinking and by the desire to arrive at the conclusion that is already preferred for other reasons).

5. Conclusions
It is hard to provide, or even imagine, an incontestable proof that the explanation I have put forward in terms of the discourse coupling fallacy is correct. It would require numerous interventions, asking arguers whether they were in fact applying that particular reasoning scheme. But even that would not constitute real proof. Some subjects may not want to admit that they used the scheme I suggested, or even that their argument is fallacious. Or they might simply not be aware themselves of which particular scheme of reasoning they were using to fill the gap between arguments and conclusions. Conversely, even when arguers did recognize the discourse coupling fallacy as a scheme they were using, we still could not be entirely sure that their perception of their own reasoning process is correct.

The value of identifying the discourse coupling fallacy, and of the hypothesis that differences between conflicting expert opinions can to a significant extent be explained by differences in the consequences taken into account and/or by differences in the normative evaluation of those consequences, seems to me to lie rather in its practical use: discussions involving expert advice might be lifted to a more fruitful level of exchange if the discussants (problem holders as well as advisers) were asked to specify the consequences they are taking into account and the way they evaluate those consequences. This can add to the quality control that is so badly needed in the face of the asymmetry between expert and non-expert. The approach has the advantage that it does not cast divergent expert opinion beforehand in terms of extreme relativism and absolute incommensurability. How far the approach I propose would take us can only be found out in practice.

NOTES
[i] ‘We suggest that our theories appear to be more comprehensive and more objective than the mental models of long term population and economic processes which currently guide the formulation of social policy.’ (Meadows et al., 1973: 221)
[ii] ‘Our primary concern, however, is that the best possible models available be criticized, revised, and used, so the quality of social decisions can progress with the quality of our models.’ (Meadows et al., 1973: 238; emphasis by the authors)
[iii] ‘The Sussex critics point to the unsatisfactory nature of the data underlying the World models. They do not point out where better information can be found; in fact they generally admit that it cannot be found. They point to assumptions in the model that are imperfect; they seldom suggest how more perfect alternatives might be developed. (…) They disagree with the conclusions we have derived from our models, but they do not put forward an alternative model in which they have more confidence. They complain that system dynamics is not a perfect methodology, but they do not suggest a better one.’ (Meadows et al., 1973: 221)
[iv] A partial answer can be found in the quotation in note [iii], but it seems to come close to a reversal of the burden of proof. Moreover, if the critic is required to come up with a model that is at least as comprehensive, only those critics who have enough resources to match such a laborious effort are allowed to enter the arena.

REFERENCES
Benveniste, Guy (1972). The politics of expertise. Berkeley: Glendessary Press
Birrer, Frans A.J. (2000). Contextual validation of model-based conclusions. In: Jörg Blasius et al. (Eds.), Social science methodology in the new millennium. Proceedings of the Fifth International Conference on Logic and Methodology, Cologne, 2000
Birrer, Frans A.J., Rob Pranger (1995). Complex intertwinements in argumentation: Some cases from discussions on biotechnology and their implications for argumentation studies. In: Frans H. van Eemeren, Rob Grootendorst, J. Anthony Blair, Charles A. Willard (Eds.), Special fields and cases, Volume IV of the Proceedings of the Third ISSA Conference on Argumentation. Amsterdam: SicSat/International Centre for the Study of Argumentation
Cole, Hugh Samuel David, Christopher Freeman, Marie Jahoda, K.L.R. Pavitt (eds.) (1973). Models of doom. A critique of The limits to growth. New York: Universe Books
Fischer, Frank (1990). Technocracy and the politics of expertise. Newbury Park: Sage
Keepin, Bill, Brian Wynne (1984). Technical Analysis of the IIASA Energy Scenarios. Nature 312: 691-695
Meadows, Donella H., Dennis L. Meadows, Jørgen Randers, William W. Behrens III (1972). The limits to growth. Report to the Club of Rome’s project on the predicament of mankind. New York: New American Library
Meadows, Donella H., Dennis L. Meadows, Jørgen Randers, William W. Behrens III (1973). A response to Sussex. In: Cole et al. (1973)
Rampton, Sheldon, John Stauber (2001). Trust us, we’re the experts. How industry manipulates science and gambles with our future. New York: Tarcher/Putnam (Penguin)
Thissen, Willem A.H. (1978). Investigation into the Club of Rome’s world 3 model: lessons for understanding complicated models. Eindhoven: Eindhoven Technical University (dissertation)
Walton, Douglas (1997). Appeal to expert opinion. Arguments from authority. University Park: Pennsylvania State University Press