Noam Chomsky On The Evolution Of Language: A Biolinguistic Perspective


Truthout.org ~ September 2016. Human language is crucial to the scientific quest to understand what kind of creatures we are and, thus, crucial to unlocking the mysteries of human nature.

In the interview that follows, Noam Chomsky, the scholar who single-handedly revolutionized the modern field of linguistics, discusses the evolution of language and lays out the biolinguistic perspective — the idea that a human being’s language represents a state of some component of the mind. This is an idea that continues to baffle many non-experts, many of whom have sought to challenge Chomsky’s theory of language without really understanding it.

Journalist and “radical chic” reactionary writer Tom Wolfe was the latest to do so in his laughable new book, The Kingdom of Speech, which seeks to take down Charles Darwin and Noam Chomsky through sarcastic and ignorant remarks, making vitriolic attacks on their personalities and expressing a deep hatred for the Left. Indeed, this much-publicized book not only displays amazing ignorance about evolution in general and the field of linguistics in particular, but also aims to portray Noam Chomsky as evil — due to his constant and relentless exposure of the crimes of US foreign policy and other challenges to the status quo.

C. J. Polychroniou: Noam, in your recently published book with Robert C. Berwick (Why Only Us: Language and Evolution, MIT Press 2016), you address the question of the evolution of language from the perspective of language as part of the biological world. This was also the theme of your talk at an international physics conference held this month in Italy; the scientific community appears to have a deeper appreciation and a more subtle understanding of your theory of language acquisition than most social scientists, who seem to maintain grave reservations about biology and the idea of human nature in general. Indeed, isn’t it the case that the specific ability of our species to acquire any language was a major theme of interest to the modern scientific community from the time of Galileo?

Noam Chomsky: This is quite true. At the outset of the modern scientific revolution, Galileo and the scientist-philosophers of the monastery of Port Royal issued a crucial challenge to those concerned with the nature of human language, a challenge that had only occasionally been recognized until it was taken up in the mid-20th century and became the primary concern of much of the study of language. For short, I’ll refer to it as the Galilean challenge. These great founders of modern science were awed by the fact that language permits us (in their words) to construct “from 25 or 30 sounds an infinite variety of expressions, which although not having any resemblance in themselves to that which passes through our minds, nevertheless do not fail to reveal all of the secrets of the mind, and to make intelligible to others who cannot penetrate into the mind all that we conceive and all of the diverse movements of our souls.”

We can now see that the Galilean challenge requires some qualifications, but it is very real and should, I think, be recognized as one of the deepest insights in the rich history of inquiry into language and mind in the past 2500 years.

The challenge had not been entirely ignored. For Descartes, at about the same time, the human capacity for unbounded and appropriate use of language was a primary basis for his postulation of mind as a new creative principle. In later years, there is occasional recognition that language is a creative activity that involves “infinite use of finite means,” in Wilhelm von Humboldt’s formulation, and that it provides “audible signs for thought,” in the words of linguist William Dwight Whitney a century ago. There has also been awareness that these capacities are a species-property, shared by humans and unique to them — the most striking feature of this curious organism and a foundation for its remarkable achievements. But there was never much to say beyond a few phrases.

But why is it that the view of language as a species-specific capacity was not taken up until well into the 20th century?

There is a good reason why the insights languished until mid-20th century: intellectual tools were not available for even formulating the problem in a clear enough way to address it seriously. That changed thanks to the work of Alan Turing and other great mathematicians who established the general theory of computability on a firm basis, showing in particular how a finite object like the brain can generate an infinite variety of expressions. It then became possible, for the first time, to address at least part of the Galilean challenge directly — although, regrettably, the earlier history [for example, the history of Galileo’s and Descartes’ inquiries into the philosophy of language, as well as the Port-Royal Grammar by Antoine Arnauld and Claude Lancelot] was entirely unknown at the time.

With these intellectual tools available, it becomes possible to formulate what we may call the Basic Property of human language: The language faculty provides the means to construct a digitally infinite array of structured expressions, each of which has a semantic interpretation expressing a thought, and each of which can be externalized by means of some sensory modality. The infinite set of semantically interpreted objects constitutes what has sometimes been called a “language of thought”: the system of thoughts that receive linguistic expression and that enter into reflection, inference, planning and other mental processes, and when externalized, can be used for communication and other social interactions. By far, the major use of language is internal — thinking in language.
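
To make the idea of “digital infinity” concrete, here is a minimal Python sketch (an editorial illustration, not a formalism from the interview or from Berwick and Chomsky’s book; the embedding frame and the example clause are invented):

```python
# Two finite rules -- a base clause plus a recursive embedding frame --
# generate an unbounded, discretely infinite set of distinct expressions.
# The frame "she thinks that ..." is an arbitrary illustrative choice.

def embed(clause: str, depth: int) -> str:
    """Wrap `clause` in `depth` layers of clausal embedding."""
    for _ in range(depth):
        clause = f"she thinks that {clause}"
    return clause

# Every natural number yields a new well-formed expression,
# so the array of expressions is digitally infinite.
for d in range(3):
    print(embed("birds fly", d))
# birds fly
# she thinks that birds fly
# she thinks that she thinks that birds fly
```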

Can you please expand on the notion of the internal language?

We now know that although speech is the usual form of sensory-motor externalization, it can just as well be sign or even touch, discoveries that require a slight reformulation of the Galilean challenge. A more fundamental qualification has to do with the way the challenge is formulated: in terms of production of expressions. So formulated, the challenge overlooks some basic issues. Production, like perception, accesses the internal language but cannot be identified with it. We must distinguish the internalized system of knowledge from the actions that access it. The theory of computability enables us to establish the distinction, which is an important one, familiar in other domains.

Consider, for example, human arithmetical competence. In studying it, we routinely distinguish the internal system of knowledge from the actions that access it, like multiplying numbers in our head, an action that involves many factors beyond intrinsic knowledge: memory constraints, for example. The same is true of language. Production and perception access the internal language but involve other factors as well, including again short-term memory, matters that began to be studied with some care in the early days of concern with the Galilean challenge, now reformulated to focus on the internal language, the system of knowledge that is accessed by actual production and by perception.

Does this mean that we have solved the mystery of the internal language? For example, the whole idea continues to be questioned in some quarters, although it is widely accepted, apparently, by most scientists.

There has been considerable progress in understanding the nature of the internal language, but its free creative use remains a mystery. That comes as no surprise. In a recent review of the state of the art concerning far simpler cases of voluntary action, two leading researchers, neuroscientists Emilio Bizzi and Robert Ajemian, write that we are beginning to learn something about the puppet and the strings, but the puppeteer remains shrouded in mystery. That is even more dramatically true for such creative acts as the normal [everyday] use of language, the unique human capacity that so impressed the founders of modern science.

In formulating the Basic Property, we are assuming that the faculty of language is shared among humans. That seems solidly established. There are no known group differences in language capacity, and individual variation is found only at the margins. More generally, genetic variation among humans is quite slight, not too surprisingly, given the recency of common origins.

The fundamental task of inquiry into language is to determine the nature of the Basic Property — the genetic endowment that underlies the faculty of language. To the extent that its properties are understood, we can seek to investigate particular internal languages, each an instantiation of the Basic Property, much as each individual visual system is an instantiation of the human faculty of vision. We can investigate how the internal languages are acquired and used, how the language faculty itself evolved, its basis in human genetics and the ways it functions in the human brain. This general program of research has been called the Biolinguistic Program. The theory of the genetically-based language faculty is called Universal Grammar; the theory of each individual language is called its Generative Grammar.

But languages vary greatly from one another, so what’s the link between Generative Grammar and Universal Grammar?

Languages appear to be extremely complex, varying radically from one another. And indeed, a standard belief among professional linguists 60 years ago was that languages can vary in arbitrary ways and each must be studied without preconceptions. Similar views were held at the time about organisms generally. Many biologists would have agreed with molecular biologist Gunther Stent’s conclusion that the variability of organisms is so free as to constitute “a near infinitude of particulars which have to be sorted out case by case.” When understanding is thin, we expect to see extreme variety and complexity.

However, a great deal has been learned since then. Within biology, it is now recognized that the variety of life forms is very limited, so much so that the hypothesis of a “universal genome” has been seriously advanced. My own feeling is that linguistics has undergone a similar development, and I will keep here to that strand in contemporary study of language.

The Basic Property takes language to be a computational system, which we therefore expect to satisfy general conditions of computational efficiency. A computational system consists of a set of atomic elements and rules to construct more complex ones. For generation of the language of thought, the atomic elements are word-like, though not words; for each language, the set of these elements is its lexicon. The lexical items are commonly regarded as cultural products, varying widely with experience and linked to extra-mental entities [objects entirely outside of our minds, such as the tree outside the window] — an assumption expressed in the titles of standard works, such as W.V. Quine’s influential study Word and Object. Closer examination reveals a very different picture, one that poses many mysteries. Let’s put that aside for now, turning to the computational procedure.
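
As a minimal sketch of such a computational system (the five-word lexicon, the set-based representation and the operation’s name are illustrative assumptions; the binary operation is in the spirit of what the technical literature calls Merge):

```python
# A computational system in the sense just described: a finite set of
# atomic elements (the lexicon) plus one rule that builds more complex
# objects from simpler ones. All names here are illustrative.

from typing import Union, FrozenSet

SynObj = Union[str, FrozenSet]  # an atom, or a previously built structure

LEXICON = {"birds", "that", "fly", "instinctively", "swim"}

def merge(x: SynObj, y: SynObj) -> SynObj:
    """Combine two syntactic objects into a new, unordered one.

    The output is a set, not a sequence: structure is built without
    appealing to linear order.
    """
    return frozenset({x, y})

vp = merge("fly", "instinctively")   # {fly, instinctively}
clause = merge("birds", vp)          # {birds, {fly, instinctively}}
```

Because the objects built this way are hierarchically structured but carry no linear order, they are the kind of representation over which the structure-dependent rules discussed below can be stated.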

Clearly, we will seek the simplest computational procedure consistent with the data of language, for reasons that are implicit in the basic goals of scientific inquiry. It has long been recognized that simplicity of theory translates directly to explanatory depth. A more specific version of this quest for understanding was provided by a famous dictum of Galileo’s, which has guided the sciences since their modern origins: nature is simple, and it is the task of the scientist to demonstrate this, from the motion of the planets, to an eagle’s flight, to the inner workings of a cell, to the growth of language in the mind of a child. Linguistics has an additional motive of its own for seeking the simplest theory: it must face the problem of evolvability. Not a great deal is known about evolution of modern humans, but the few facts that are well established, and others that have recently been coming to light, are rather suggestive and conform well to the conclusion that the language faculty is near optimal for a computational system, the goal we should seek on purely methodological grounds.

Did language exist before the emergence of Homo sapiens?

One fact that does appear to be well established is, as I have already mentioned, that the faculty of language is a true species property, invariant among human groups — and furthermore, unique to humans in its essential properties. It follows that there has been little or no evolution of the faculty since human groups separated from one another. Recent genomic studies place this date not very long after the appearance of anatomically modern humans about 200,000 years ago, perhaps some 50,000 years later, when the San group in Africa separated from other humans. There is some evidence that it might have been even earlier. There is no evidence of anything like human language, or symbolic activities altogether, before the emergence of modern humans, Homo sapiens sapiens. That leads us to expect that the faculty of language emerged along with modern humans or not long after — a very brief moment in evolutionary time. It follows, then, that the Basic Property should indeed be very simple. The conclusion conforms to what has been discovered in recent years about the nature of language — a welcome convergence.

The discoveries about early separation of the San people are highly suggestive … [they] have significantly different externalized languages. With irrelevant exceptions, their languages are all and only the languages with phonetic clicks, with corresponding adaptations in the vocal tract. The most likely explanation for these facts, developed in detail in current work by Dutch linguist Riny Huijbregts, is that possession of internal language preceded separation, which in turn preceded externalization, the latter proceeding in somewhat different ways in the separated groups. Externalization seems to be associated with the first signs of symbolic behavior in the archaeological record, after the separation. Putting these observations together, it seems that we are reaching a stage in understanding where the account of evolution of language can perhaps be fleshed out in ways that were unimaginable until quite recently.

When do universal properties of language come to light?

Universal properties of the language faculty began to come to light as soon as serious efforts were undertaken to construct generative grammars, including quite simple properties that had never been noticed and that remain puzzling — a phenomenon familiar in the history of the natural sciences. One such property is structure-dependence: the rules that yield the language of thought attend solely to structural properties, ignoring properties of the externalized signal, even such simple properties as linear order.

To illustrate, consider the sentence “birds that fly instinctively swim.” It is ambiguous: the adverb “instinctively” can be associated with the preceding verb (fly instinctively) or the following one (instinctively swim). Suppose now that we extract the adverb from the sentence, forming “instinctively, birds that fly swim.” Now the ambiguity is resolved: the adverb is construed only with the linearly more remote but structurally closer verb “swim,” not the linearly closer but structurally more remote verb “fly.” The only possible interpretation — that birds swim instinctively — is the unnatural one, but that doesn’t matter: the rules apply rigidly, independent of meaning and fact. What is puzzling is that the rules ignore the simple computation of linear distance and keep to the far more complex computation of structural distance.
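
The contrast between linear and structural distance can be made concrete with a small computation over an assumed, simplified bracketing of the extracted sentence (the parse, the category labels and the helper function below are an editorial illustration, not an analysis from the interview):

```python
# Simplified, assumed parse of "instinctively, birds that fly swim":
# the adverb attaches at the root; "fly" sits inside a relative clause,
# "swim" in the main-clause verb phrase.
tree = ("S",
        ("Adv", "instinctively"),
        ("S",
         ("NP", ("N", "birds"),
                ("RC", ("C", "that"), ("V", "fly"))),
         ("VP", ("V", "swim"))))

def structural_depth(node, word, depth=0):
    """Depth of `word` below the root, as a rough proxy for structural
    distance from the root-attached adverb."""
    if isinstance(node, str):
        return depth if node == word else None
    for child in node[1:]:  # node[0] is the category label
        found = structural_depth(child, word, depth + 1)
        if found is not None:
            return found
    return None

# Linearly, "fly" (4th word) is closer to the fronted adverb than
# "swim" (5th word); structurally, the opposite holds.
print(structural_depth(tree, "fly"))   # 5: deeply embedded
print(structural_depth(tree, "swim"))  # 4: structurally closer
```

On this toy measure, the adverb construes with the verb at the smaller structural depth, “swim,” exactly the behavior the internal language enforces.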

The property of structure dependence holds for all constructions in all languages, and it is indeed puzzling. Furthermore, it is known without relevant evidence, as is evident in cases like the one I just gave and innumerable others. Experiment shows that children understand that rules are structure-dependent as early as they can be tested, by about age 3, and do not make errors — and are, of course, not instructed. We can be quite confident, then, that structure-dependence follows from principles of universal grammar that are deeply rooted in the human language faculty. There is evidence from other sources that supports the conclusion that structure-dependence is a true linguistic universal, deeply rooted in language design. Research conducted in Milan a decade ago, initiated by Andrea Moro, showed that invented languages keeping to the principle of structure-dependence elicit normal activation in the language areas of the brain, but much simpler systems using linear order in violation of these principles yield diffuse activation, implying that experimental subjects are treating them as a puzzle, not a language. Similar results were found in work by Neil Smith and Ianthi Tsimpli in their investigation of a cognitively deficient but linguistically gifted subject. They also made the interesting observation that [people with average cognitive abilities] can solve the problem if it is presented to them as a puzzle, but not if it is presented as a language, presumably activating the language faculty.

The only plausible conclusion, then, is that structure-dependence is an innate property of the language faculty, an element of the Basic Property. Why should this be so? There is only one known answer, and fortunately, it is the answer we seek for general reasons: The computational operations of language are the simplest possible ones. Again, that is the outcome that we hope to reach on methodological grounds, and that is to be expected in the light of the evidence about evolution of language already mentioned.

What about the so-called representational doctrine about language? What makes it a false idea for human language?

As I mentioned, the conventional view is that atomic elements of language are cultural products, and that the basic ones — those used for referring to the world — are associated with extra-mental entities. This representationalist doctrine has been almost universally adopted in the modern period. The doctrine appears to hold for animal communication: a monkey’s calls, for example, are associated with specific physical events. But the doctrine is radically false for human language, as was recognized as far back as classical Greece.

To illustrate, let’s take the first case that was discussed in pre-Socratic philosophy, the problem posed by Heraclitus: how can we cross the same river twice? To put it differently, why are two appearances understood to be two stages of the same river? Contemporary philosophers have suggested that the problem is solved by taking a river to be a four-dimensional object, but that simply restates the problem: why this object and not some different one, or none at all?

When we look into the question, puzzles abound. Suppose that the flow of the river has been reversed. It is still the same river. Suppose that what is flowing becomes 95 percent arsenic because of discharges from an upstream plant. It is still the same river. The same is true of other quite radical changes in the physical object. On the other hand, with very slight changes it will no longer be a river at all. If its sides are lined with fixed barriers and it is used for oil tankers, it is a canal, not a river. If its surface undergoes a slight phase change and is hardened, a line is painted down the middle, and it is used to commute to town, then it is a highway, no longer a river. Exploring the matter further, we discover that what counts as a river depends on mental acts and constructions. The same is true, quite generally, of even the most elementary concepts: tree, water, house, person, London, or in fact, any of the basic words of human language. Radically unlike animal calls, the items of human language and thought uniformly violate the representationalist doctrine.

Furthermore, the intricate knowledge of the meanings of even the simplest words, let alone others, is acquired virtually without experience. At peak periods of language acquisition, children are acquiring about a word an hour, that is, often on one presentation. It must be, then, that the rich meaning of even the most elementary words is substantially innate. The evolutionary origin of such concepts is a complete mystery, one that may not be resolvable by means available to us.

So we definitely need to distinguish speech from language, right?

Returning to the Galilean challenge, it has to be reformulated to distinguish language from speech, and to distinguish production from internal knowledge — the latter an internal computational system that yields a language of thought, a system that might be remarkably simple, conforming to what the evolutionary record suggests. Secondary processes map the structures of language to one or another sensory-motor system for externalization. These processes appear to be the locus of the complexity and variety of linguistic behavior, and its mutability over time.

There are suggestive recent ideas about the neural basis for the operations of the computational system, and about its possible evolutionary origins. The origin of the atoms of computation, however, remains a complete mystery, as does a major question that concerned those who formulated the Galilean challenge: the Cartesian question of how language can be used in the normal creative way, in a manner appropriate to situations but not caused by them, in ways that are incited and inclined but not compelled, in Cartesian terms. The mystery holds for even the simplest forms of voluntary motion, as discussed earlier.

A great deal has been learned about language since the Biolinguistic Program was initiated. It is fair to say, I think, that more has been learned about the nature of language, and about a very wide variety of typologically different languages, than in the entire 2,500-year history of inquiry into language. But as is familiar in the sciences, the more we learn, the more we discover what we do not know. And the more puzzling it seems.

About the author
C.J. Polychroniou is a political economist/political scientist who has taught and worked in universities and research centers in Europe and the United States. His main research interests are in European economic integration, globalization, the political economy of the United States and the deconstruction of neoliberalism’s politico-economic project. He is a regular contributor to Truthout as well as a member of Truthout’s Public Intellectual Project. He has published several books and his articles have appeared in a variety of journals, magazines, newspapers and popular news websites. Many of his publications have been translated into several foreign languages, including Croatian, French, Greek, Italian, Portuguese, Spanish and Turkish.

Copyright, Truthout. May not be reprinted without permission.

Previously published on Truthout.org.