While continuing to prepare my paper on the semantics of compounding, I realized that I had a mostly finished review of Charles Ruhl’s (1989) On monosemy: A study in linguistic semantics that I had never posted. It will be appearing in (now) six parts over the next two weeks. I’ll provide a full bibliography in the final post, but hyperlinks for each citation appear at first reference. The goal here is to (1) present my thoughts about monosemy as a theoretical construct and (2) maintain a regular posting schedule while I am also devoting more of my time to getting my SBL paper finished on ὑδροποτέω and compounding. This is going to be a deep dive.
- Part 1: The original problem
- Part 2: Semantics & Pragmatics (to appear: Nov 6th, 2018)
- Part 3: The ideal speaker-listener (to appear: Nov 8th, 2018)
- Part 4: Moving Greek letters (to appear: Nov 11th, 2018)
- Part 5: The methodological gap (to appear: Nov 14th, 2018)
- Part 6: Unanswered questions (to appear: Nov 17th, 2018)
The ideal speaker-listener
The concept of linguistic competence plays a major role in much of Ruhl’s discussion, though without sufficient background in the linguistics of the 1960s and ’70s, this can be surprisingly difficult to catch. Ruhl isn’t writing for an audience of students or interdisciplinary scholars. He’s writing for his colleagues. Oftentimes you might encounter a few sentences about IDEAL (Ruhl’s shorthand for Chomsky’s ideal speaker-listener), followed by a multi-page discussion of data, followed by a simple statement of conclusion. The trick is that unless you’re fully conversant in generative grammar, you might not realize that the discussion of linguistic data is not the point of a given section of Ruhl’s book. This can only cause confusion.
The idea of Linguistic Competence goes back to a small section at the very beginning of Chomsky’s (1965) Aspects, which is perhaps worth quoting at length here.
Linguistic theory is concerned primarily with an ideal speaker-listener, in a completely homogeneous speech-community, who knows its (the speech community’s) language perfectly and is unaffected by such grammatically irrelevant conditions as memory limitations, distractions, shifts of attention and interest, and errors (random or characteristic) in applying his knowledge of this language in actual performance.
We thus make a fundamental distinction between competence (the speaker-hearer’s knowledge of his language) and performance (the actual use of language in concrete situations). Only under the idealization set forth in the preceding paragraph is performance a direct reflection of competence.*
*At first blush, this statement feels like the same distinction that Saussure makes between langue and parole, but this is a highly problematic conflation that needs to be avoided (Hewson 1976; Gordon 2004).
For generative grammar, the ideal speaker is a sort of scientific control. Removing the external variables of performance makes it possible, on this view, to study the object (language) without contamination from non-linguistic factors. The big change for Ruhl from Chomsky (1965) is that Ruhl wants to take seriously some of the critiques of ignoring or avoiding performance: he uses language usage data in order to make better judgments about which kinds of usage are part of actual linguistic competence and which are not. This is a distinct improvement over Chomsky’s (1965) approach.
But again, caution is necessary. Ruhl’s method is more about determining which information about a word’s meaning is inherent and which is merely an effect of performance. He often gives the appearance of being highly invested in examining natural language in use (that is, performance) on one page, only to shift to avoiding it elsewhere. Thus, on the one hand: “This Monosemic Bias implies a priority of research: a full detailed exploration of a word’s variant range before considering its possible paraphrase relationships with other lexical items” (Ruhl 1989, 4). This sounds excellent. If this were all that the Monosemic Bias amounted to, then all semanticists would already be monosemy advocates. The debate would not just be over; it would never have existed. The devil is in the details, however, because Ruhl’s focus is still upon linguistic competence and, especially important, the mythic ideal speaker-listener envisioned by Chomsky (1965, 3). Chomsky has the advantage that on his simple TG view of syntax, describing this IDEAL (Ruhl regularly puts it in all caps) only requires a paragraph. Ruhl, on the other hand, dealing with the complexity of meaning, has multiple extended discussions of just what kind of information falls into ideal competence and what does not. The full detailed exploration of a word’s variant range does not include slang, for example (Ruhl 1989, 69). Slang is not part of the IDEAL. Nor are the processes of metaphor and metonymy, especially in the context of diachrony, though the potential for metaphor and metonymy is (69). Apparently this is somehow akin to the way iron has the potential for magnetism (76). So already, we are not exploring a word’s full variant range. Or rather, that is not quite accurate. From Ruhl’s perspective, the entirety of a word’s range must be explored in order to separate the wheat from the chaff. Slang, metaphor, metonymy, and the rest: these are the chaff.
What about the words themselves? Here Ruhl becomes more difficult to understand as the level of abstraction rises.
Consider several complications in the most minimal requirements of IDEAL. First, the “basic elements” (words) are not as unproblematic as we require. IDEAL is abstract, and these basic elements must be correspondingly abstract. I am arguing that, individually, they vary in their degree of abstractness and direct relationship to reality. Certainly, as they are defined in dictionaries, they have properties that are too concrete for IDEAL. If we commit Whitehead’s fallacy and make IDEAL too concrete, individual words will seem to be more fully part of it, but we will then be unable to capture the generalizations that IDEAL was designed to make.
A sentence is more concrete than its constituents, because each constituent is potentially all the sentences in which it can occur. By starting with the sentence and then dividing it into parts, TG moves from (relatively) concrete to (more) abstract. With each phrase structure rule, something is lost. Further, the sentence is more abstract than all the discourses of which it can be a part. If, as in TG, we take the sentence as axiomatic, we must expect that with each discourse, something is gained. In both instances, we must resist the temptation to make word, sentence and discourse equal in degree of abstraction; when we do this, the inevitable result is polysemy (142-3).
So, are individual words part of linguistic competence (IDEAL)? At first blush, the paragraph seems only to confuse the issue. The answer, for Ruhl, is more or less “No,” and certainly not if we are talking about published dictionaries. These are too concrete in their accounts of meaning.**
** Note: this assertion is itself a metaphorical utterance.
As he says in his preface:
I hope to show that the most functional part of our linguistic resourcefulness is a highly abstract, formal and autonomous collection of categories that seem overwhelmingly baffling and unreal to our conscious understanding (xiv).***
*** Given the words “formal” and “autonomous,” Ruhl’s use of “functional” here cannot have its technical sense, but rather more along the lines of “useful” or “useable.”
Here he is not talking about words, but about abstract categories. Words are not innate, but their abstract meanings are. The meaning of a word must necessarily be highly abstract if it is to be part of competence and thus, in turn, innate to the human language capacity. In the larger picture of semantic theory, this is a fascinating balancing act.
Readers who know the history of generative grammar will remember the debates of the 1970s over whether semantics was interpretive or generative. The generative semanticists, a group of excommunicated students of Chomsky, suggested that meaning is what exists in the brain and motivates syntactic structure, while Chomsky himself and his followers argued that all of semantics is simply a product of interpreting syntactic structure with lexical elements inserted into the tree structures. Ruhl is, in a sense, cutting the Gordian knot here. Meaning is in the brain, but it is so incredibly abstract that the meaning of a sentence is still interpretive. But then, once it is interpreted, it is pragmatics in Ruhl’s view of things. His theory is thus an interpretive pragmatics, rather than Chomsky’s interpretive semantics, and it is used in turn to discover the highly abstract semantic categories that exist innately in the human mind.