Scarecrow Nouns, Generalizations, and Cognitive Grammar

David Tuggy

Universidad de las Américas, SIL


[This paper was originally published in 1987, in the Proceedings of the Third Annual Meeting of the Pacific Linguistics Conference (eds. Scott DeLancey and Russell S. Tomlin), p. 307-320. This version for the Internet, prepared in 2001, contains, at least in intent, the original text unchanged except for typographical adjustments. The footnotes are moved to the end; the figures are reproduced by digital scanning (from rather imperfect near-originals).]

0. Introduction

Consideration of English compound nouns like scare-crow raises a number of interesting and closely inter-related problems for linguistic analysis, including the following:

(1) They are exocentric: neither component stem is the “head” of the construction in the typical sense. Rather, a verb and its object combine to designate the subject, the thing that does the verb to the object (V + O = S). How can this be squared with the typically right-headed structure of English compounds?

(2) Besides the individual lexical items and the general pattern subsuming them (V + O = S), the evidence indicates that sub-generalizations (e.g. V + all = S) are relevant. How are such sub-generalizations to be dealt with in the grammar?

(3) In saw-bones the O is plural; in saws-all the V is 3 pers sg.; in scoff-law the O is not a normal direct object. How do we analyze such deviant forms without either lumping them indiscriminately with more normal cases or losing the generalizations uniting them to those cases?

(4) Many important and cross-cutting higher generalizations unite V + O = S with other patterns, and many forms fit more than one pattern. How can we keep both all the relevant generalizations and all the relevant distinctions?

We will examine these questions from the standpoints of traditional TG and more modern treatments such as Lieber (1983), Kiparsky (1982), Selkirk (1982), and Williams (1981), for which they are problematic to some degree, and of Cognitive grammar, which leads us to expect the multiplicity of relevant structures and the overlapping, non-absolute categorizations which seem to be indicated by the evidence.

0.1. The classical assumptions

Traditional generative grammar accepted a closely interrelated group of assumptions or ideals, including (1) the expectation that grammars will consist of a relatively small number of absolutely true generalizations, from which (2) particular cases can be completely predicted. Thus (3) when a generalization is achieved, the particulars which it subsumes are to be excised from the grammar. (This of course makes for a simpler grammar, which is highly valued.)[1] These assumptions (Absoluteness, Predictivity, Excision) we will call the classical assumptions.

The classical assumptions have been relaxed somewhat: it is now generally recognized that lexicons typically contain individual lexical items along with generalizations which subsume them but do not fully account for them. Yet many analysts continue to operate as if the assumptions were unquestionably correct everywhere else. I believe that much more massive violations of the assumptions are necessary if we are to adequately characterize the data of languages; in fact the classical assumptions amount to an incorrect view of what sort of thing languages are. The scarecrow nouns serve admirably to illustrate this fact.

0.2. Cognitive grammar; schemas and schematic networks

The Cognitive grammar model (CG, née Space grammar, Langacker 1982, 1987a, 1987b, etc.; To appear is especially relevant) provides an alternative viewpoint. CG is an attempt to model language as informed by a current understanding of human cognition. It takes seriously the multitudinous, usually reasonable but seldom predictable, cross-cutting interrelationships among, for instance, the different senses of a lexical item. These relationships are mediated by schemas, abstract structures which consist of the cognitive material common to the different cases they subsume. Schemas embody generalizations: generalizations, at least in CG, are schemas. In Figure 1 I have diagrammed a portion of the schematic network associated with the English word head. In this diagram schemas are ranged above their instantiations (sub-cases), and arrows lead from the schemas to their instantiations. An attempt has been made to render the parameter of cognitive salience by thickness of the line enclosing a structure: thus the meaning (HUMAN) HEAD is much more salient, at least in my English, than the meaning HEAD (OF A SHIP).

Several points are worthy of note. (1) Schemas may occur in any depth or number of levels. In fact the lowest (most highly elaborate) structures represented are themselves schemas, generalizations abstracted from many particular perceptual and language-usage events. (2) The criterion for inclusion in or exclusion from the schematic network representing the grammar of a given language is not any sort of a-prioristic simplicity, but rather whether or not a structure is conventional (shared by and known to be shared by the speakers of the language). This is, of course, ultimately a matter of degree, and it is based squarely on usage.[2] (3) Two kinds of elements may lay some claim to primacy for characterizing a structure such as Figure 1. One is high-level (i.e. relatively abstract) schemas such as MOST IMPORTANT PART; the other is highly salient (prototypical) structures such as (HUMAN) HEAD. Neither can claim absolute hegemony: a complete characterization of the whole structure must include both (and others also). (4) A structure may without contradiction exemplify more than one schema; e.g. HUMAN HEAD is an example of the UPPER PART, MOST IMPORTANT PART, and ANIMATE HEAD generalizations, among others.
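
To make the notion of a schematic network more concrete, the following sketch (mine, not part of the original paper) renders a fragment of the Figure 1 network as a small directed graph. The node labels come from the text above; the Node class and the salience values are illustrative assumptions, not claims about how such networks are actually represented.

```python
# A minimal sketch, not the paper's formalism: a fragment of the Figure 1
# network for "head" as a directed graph, with arrows running from schemas to
# their instantiations.  Salience numbers are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    salience: float = 0.5                      # cognitive salience is a matter of degree
    instantiations: List["Node"] = field(default_factory=list)

    def instantiate(self, child: "Node") -> "Node":
        self.instantiations.append(child)
        return child

most_important_part = Node("MOST IMPORTANT PART", salience=0.4)
upper_part          = Node("UPPER PART", salience=0.4)
animate_head        = Node("ANIMATE HEAD", salience=0.6)
human_head          = Node("(HUMAN) HEAD", salience=0.9)   # the prototypical sense
ship_head           = Node("HEAD (OF A SHIP)", salience=0.2)

# A single sense may instantiate several schemas at once (point 4 above).
for schema in (most_important_part, upper_part, animate_head):
    schema.instantiate(human_head)
most_important_part.instantiate(ship_head)

def subsumed_by(schema: Node) -> List[str]:
    """All senses a schema (transitively) subsumes."""
    out = []
    for child in schema.instantiations:
        out.append(child.label)
        out.extend(subsumed_by(child))
    return out

print(subsumed_by(most_important_part))        # ['(HUMAN) HEAD', 'HEAD (OF A SHIP)']
```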

Under CG the same sort of structure is expected for more complex forms such as compounds and other words, phrases, sentences and so forth. In particular, grammatical rules, being generalizations, are schemas, and they occur in networks or hierarchies similar to that in Figure 1, with many rules of varying degrees of schematicity, and with multitudinous, cross-cutting, reasonable but not strictly predictable relationships uniting them. Such structures will violate the classical assumptions in several ways: note in particular that Excision would have us remove lower-level generalizations (schemas) whenever we have a higher one.

0.3. Scarecrow nouns

Scarecrow nouns include, besides scare-crow itself, such words as break-water, catch-fly, cure-all, dread-nought, kill-joy, pick-pocket, spit-fire, and spend-thrift. These forms have in common the following characteristics: (a) They are composed of a transitive verb followed by a noun or pronoun, and (b) the noun (or pronoun) is understood as the object of the verb. (c) The compound as a whole designates neither the verb nor its object, but the subject of the verb, the thing or person that does the verb to the object. Perhaps less importantly, (d) the verb in every case is uninflected, and (e) the noun is singular.[3] I will assume that it is desirable to express this commonality in the grammar of English by means of an abstract schema or rule, which we will refer to as V + O = S.[4]
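
One way to make these specifications concrete (a sketch of my own, not the paper's or CG's formal representation) is to write the V + O = S schema down as a bundle of feature specifications and to treat a form's fit to it as a matter of degree; the field names and the scoring function are assumptions introduced purely for exposition.

```python
# A hedged sketch, not the paper's notation: the V + O = S schema as a bundle
# of specifications (a)-(e), with a graded rather than all-or-nothing fit.
SCARECROW_SCHEMA = {
    "first_stem": "transitive verb",                       # (a)
    "second_stem": "noun or pronoun construed as object",  # (a), (b)
    "designates": "subject of the verb",                   # (c)
    "verb_inflected": False,                               # (d)
    "object_plural": False,                                # (e)
}

def fit(form: dict, schema: dict = SCARECROW_SCHEMA) -> float:
    """Fraction of the schema's specifications that the form satisfies."""
    return sum(form.get(k) == v for k, v in schema.items()) / len(schema)

scare_crow = dict(SCARECROW_SCHEMA)                       # fully schema-conformant
saw_bones  = dict(SCARECROW_SCHEMA, object_plural=True)   # deviant: plural object
saws_all   = dict(SCARECROW_SCHEMA, verb_inflected=True)  # deviant: inflected verb

print(fit(scare_crow), fit(saw_bones), fit(saws_all))     # 1.0 0.8 0.8
```

The graded score is meant only to echo the point, developed below, that conformity to the schema is a matter of degree rather than an all-or-nothing affair.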

I will also take as given that the scarecrow nouns themselves are established as part of the English language, despite Excision. This means under most theories that they are listed in the lexicon. Many of them have significant meaning specifications that are not clearly associated with either component (e.g. the stipulation that a catch-fly is a plant and not e.g. a bird, or the strong expectations that a scare-crow will be some sort of human effigy, and that what it scares crows (and other birds) from is a field of crops). In fact it is not hard to find cases where for many speakers the componentiality of the word is not salient at all: some speakers have often used the word scarecrow but seldom or never thought consciously of it in terms of its components; many express great surprise upon realizing that breakfast or skinflint can be analyzed similarly. I will assume that a form’s componentiality, and thus its potential relevance to most of our discussion, is a matter of degree rather than an absolute difference in kind. I will therefore ignore variations along this parameter, with the understanding that statements depending on componentiality of the forms are true only to the extent that the forms are in fact componential in speakers’ minds.

1. Headedness

The term “head of a construction” has been understood in various ways. The common ground is that the head of a construction is that constituent whose specifications are retained in the construction as a whole (cf. Williams 1981:247); exactly which specifications must be retained for headship to occur is the mooted question.[5] Webster’s 8th, after limiting its definition to “an immediate constituent of a construction that has the same grammatical function as the whole” gives as examples man in “an old man” or “a very old man”. The head at least typically designates the same entity as does the whole construction: “a very old man” is in fact a man. Note as well that team and not football is uncontroversially the head of football team even though both components are nominal and thus would fit Webster’s definition.

Most English compounds are right-headed--e.g. bird is the head of black-bird, since it, and not black, is nominal, and since a black-bird is a bird: similarly ball is the head of soccer-ball, ripe of over-ripe, green of sea-green or blue-green (contrast green-blue), etc. Scarecrow nouns do not clearly fit this pattern, however. Although their rightmost component is nominal, as is the construction as a whole, they do not designate the same kind of entity: a scare-crow is not a crow.

Two tacks have been taken with respect to this problem. Some (e.g. Lieber 1983, Williams 1981) have wanted to take the statistically prevalent right-headed pattern as normative. This can be done for scarecrow nouns only by changing the meaning of “head”. Williams changes it (by fiat) to “rightmost member”.  This of course makes it inevitable that compounds will be right-headed, but it does not clearly achieve anything else of value.[6] Lieber rather assumes that the second component of a scarecrow noun is more like a head as traditionally understood, in that the compound gets its nominal status from it.[7] Thus, for her, scare-crow is a noun because crow is. Aside from the fact that Lieber’s analysis simply does not work,[8] the only way it can be stretched to include scarecrow nouns is to abandon the idea that the head designates the same entity as the construction as a whole. Thus she includes draw-bridge and pick-pocket as syntactically identical V N compounds: the fact that a draw-bridge is a bridge while a pick-pocket is certainly not a pocket is to her insignificant.

This exemplifies an almost inevitable double result of making a generalization absolute. First, a compartmentalization or (to use a buzzword) modularity is forced: data that do not fit the generalization must be separated out as different in kind from data that do. Secondly, there results an inappropriate, indiscriminate lumping together of data to which the generalization is relevant to some degree. In this case, Lieber has to throw out all compounds in which the right-hand component does not have the same syntactic category as the compound itself, because they do not fit her generalization (she suggests “simply listing them with individual lexical entries” rather than deriving them, like “true” compounds, “by regular principles of word formation” (1983:255)). At the same time, she considers draw-bridge and pick-pocket to be the same sort of structure, with no distinction drawn between them, since they both can be made to fit her generalization.

Selkirk (1982), in contrast, simply notes that there are exceptions to the Right-hand Head Rule, and includes scarecrow nouns among them.[9] Nevertheless she discusses the Right-hand Head Rule at length, admitting at least tacitly that such generalizations are important even if not absolute.

I believe this approach is the only workable one.  It would perhaps be nice if English were such that a simple statement “All compounds are right-headed” were both true and non-tautologous, but in fact it is not. It is appropriate to make a generalization to cover the many cases that are right-headed, but it is inappropriate to make it an absolute generalization: other competing generalizations must be allowed their place in the grammar as well.  In this case the generalization V + O = S runs contrary to it, since neither element of the scarecrow nouns is head. The proper way to handle the conflict is to record it: the grammar of English must retain both generalizations, and neither can be allowed to wipe the other out.

Although Selkirk has taken the correct approach, our commonly received linguistic tradition, within which she is working, does not encourage that approach. Most of us have been brought up under the classical assumptions, to expect absolute generalizations, and to strive mightily (as Lieber does) to produce absolutely predictive statements. Under CG, in contrast, we would represent the case at hand by the schematic network in Figure 2, in which the Right-Headed Compound schema has a privileged position because of its salience (a natural result of its statistical predominance), but not an absolute position.   Right-headed compounds are the prototypical type, but not the only type. Under CG this is exactly the kind of thing we should expect.[10]

2. Sub-generalizations

Scarecrow nouns are not commonly coined today, though the pattern was quite productive in the past. It is instructive to leaf through the OED and note what forms are and what forms are not attested. They seem to run in families. Some verbs, for instance, such as add, do not have any forms constructed using them; others may have many. Stretch, for instance, has stretch-gut “glutton”, stretch-halter or stretch-hemp “gallows bird, one who deserves to be hung”, stretch-leg “that which lays prostrate, Death”, stretch-neck “pillory”, and stretch-rope “bell-ringer”. Often a number of forms wind up with very similar meanings: besides the still occasionally used clutch-fist, pinch-penny, and skin-flint, there were many other forms meaning “miser”, including (for nip alone) nip-cake, nip-crumb, nip-cheese, and nip-farthing. A similarly large number meant “criminal”: of these cut-purse, cut-throat, pick-pocket, turn-coat, and, in their social sphere, kill-joy, spoil-sport, and tattle-tale survive, while many other picturesque terms such as stretch-hemp (above) and thatch-gallows have been lost. Among the many forms constructed on lack (e.g. lack-land “younger son”, lack-beard “immature youth”, lack-all “deficient person”), there were at least seven forms meaning “intellectually deficient person”: lack-brain, lack-latin, lack-learning, lack-mind, lack-sense, lack-thought, and lack-wit.

One family of scarecrow nouns seems to be alive and well even today. Besides deceased members such as lack-all and solid citizens such as catch-all, cover-all, cure-all and carry-all, the world of commercialism, with its penchant for extravagant claims, brings into the family such forms as clean-all, copy-all, dispose-all, dust-all, farm-all, fix-all, hide-all, lift-all, saws-all, sticks-all, store-all, tote-all, etc.

Are there no generalizations to be made here? CG says yes, of course, and by all means. We can and should set up schemas such as V + all = S, or, as a subcase, V + all = commercially advertised S, or V + O = criminal, and so forth. (Figure 3 gives a partial schematic network). The evidence is that such low-level generalizations were (and in one case still are) not only conventionalized but in fact productive.[11]
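
A sketch of how such coexisting schemas might be queried (my illustration, not the paper's machinery; the predicates and word lists are invented stand-ins for the conventional units of Figure 3): a single coinage can be subsumed, and so sanctioned, by several schemas at different levels simultaneously, which is the point footnote 11 elaborates.

```python
# A hedged sketch of coexisting schemas at several levels (cf. Figure 3 and
# footnote 11).  The membership lists are illustrative assumptions, not data.
COMMERCIAL_VERBS = {"clean", "fix", "store", "scrub"}
CRIMINAL_PAIRS   = {("cut", "purse"), ("cut", "throat"), ("pick", "pocket")}

SCHEMAS = {
    "V + O = S":              lambda v, o: True,           # the high-level schema
    "V + all = S":            lambda v, o: o == "all",
    "V + all = advertised S": lambda v, o: o == "all" and v in COMMERCIAL_VERBS,
    "V + O = criminal":       lambda v, o: (v, o) in CRIMINAL_PAIRS,
}

def sanctioning_schemas(verb: str, obj: str) -> list:
    """Every schema that subsumes (and so helps sanction) the given coinage."""
    return [name for name, test in SCHEMAS.items() if test(verb, obj)]

print(sanctioning_schemas("scrub", "all"))    # three schemas, cumulatively
print(sanctioning_schemas("pick", "pocket"))  # ['V + O = S', 'V + O = criminal']
print(sanctioning_schemas("twist", "tongue")) # only the topmost schema
```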

But note that such insights can be captured only by violating the classical assumptions. (Presumably it is for this reason that analysts such as Kiparsky, Lieber, Selkirk, and Williams do not actively pursue these low-level generalizations). Under the classical assumptions listing the particulars is equivalent to losing the generalization, and making the generalization means excising the particulars. Thus either the sub-generalizations like V + all = S must be removed from the grammar, or the higher generalization V + O = S. In the one case (excising sub-generalizations) it becomes very problematical to characterize the actual nature of the class, including its productivity (presently residual and largely confined to the V + all sub-pattern). In the other case you lose the generalization and treat as separate and unrelated patterns that are clearly not so.

In a widely-accepted paper (1975) Jackendoff argued for lexical rules in coexistence with the lexical entries they subsume, a coexistence which is likewise impossible under the classical assumption of Excision. He proposed a new simplicity metric: rather than counting the sheer amount of information in statements (which makes any redundancy very bad), we should count the amount of independent information. Under this metric generalizations, including partial (non-absolute) generalizations, are always good in that they render the generalizable information in particular statements non-independent and therefore non-costly. In fact, the more generalizations, the merrier. This fits in exceedingly well with the CG approach. I only wish it had been followed up more whole-heartedly.[12]
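
The contrast between the two metrics can be put in toy form (my reconstruction of the idea as described here, not Jackendoff's own formalization; the representation of entries and rules as feature sets is an assumption): information that a listed generalization predicts is no longer counted against the particular entries.

```python
# A toy sketch of the independent-information idea as described above; the
# representation (entries and rules as feature sets) is my assumption.
def cost(entries, rules):
    rule_cost = sum(len(r) for r in rules)               # rules are themselves stated
    covered = set().union(*rules) if rules else set()    # features that come for free
    entry_cost = sum(len(e - covered) for e in entries)  # only unpredicted features count
    return rule_cost + entry_cost

pick_pocket = {"V = pick", "O = pocket", "designates subject", "verb uninflected"}
cut_throat  = {"V = cut", "O = throat", "designates subject", "verb uninflected"}
schema      = {"designates subject", "verb uninflected"}  # V + O = S, crudely

print(cost([pick_pocket, cut_throat], []))        # 8: every specification is independent
print(cost([pick_pocket, cut_throat], [schema]))  # 6: the shared part is now "free"
```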

3. Deviant scarecrow nouns

The same issue arises in another form when we consider scarecrow nouns which deviate from the norm. Earlier (Section 0.3) we characterized the V + O = S schema as specifying a non-inflected verb and a singular object nominal. There exist forms which violate these stipulations. Saw-bones has a plural object,[13] and saws-all and sticks-all have inflected verbs. (For the uninitiated, a saws-all is a reciprocating saw used by plumbers and others; sticks-all (spelled Stix-All) is a brand name for a glue.)

It is desirable for our description of these compounds to show such forms as clearly related to the more typical scarecrow pattern, yet as deviant from it. If we try to make our generalizations absolute, and excise all particulars under them, including sub-generalizations, we face a dilemma. We can, of course, make a V + O = S rule which does not specify uninflectedness of the verb or singularity of the object nominal. But if we do so we must lose those specifications entirely. We have then no way to express the unusual character of saw-bones or saws-all: pick-pockets or scares-crow should, in principle, be just as good. Alternatively we can keep our original V + O = S rule which excludes saw-bones and saws-all, but then we will have no way to characterize them as related--they will have to be treated as different in kind.

CG, of course, would have us simply make both generalizations, with the original one more highly salient because it is more firmly established by convention. The relevant schematic hierarchy is diagrammed in Figure 4.

Similarly, scoff-law and dispose-all are atypical in that the nominal element in each case is some sort of indirect or prepositional object rather than a prototypical direct object, yet they are clearly to be related to the more typical scarecrow nouns. They also are included in the diagram in Figure 4.

4. Scarecrow nouns and other compound types

The same issues arise even more forcefully when we try to place the scarecrow noun pattern in a wider context of similar structures. There are many such structures, too many to do more than briefly mention some and hint at the relationships between them. Note, however, that the mere recognition of them as “similar” amounts to perceiving a generalization holding between them. CG claims that, to the extent that that similarity is perceived and conventionalized by speakers of the language, it, as embodied in a schema, is part of the grammar of that language.

Consider the compounds tattle-tale and tell-tale.  The former falls under our V + O = S generalization: the latter (at least in my speech) does not; rather than designating the tale-teller directly, it is adjectival, designating the quality or attribute of tale-telling.   There are other such structures, break-neck, catch-penny, and lack-luster among them, and a number of forms can be either nominal or adjectival, such as cut-throat or do-nothing or lick-spittle or stop-gap. The similarity between the two kinds of forms may not be easy to express in some frameworks (in CG it is easy),[14] but it should be clear that a generalization is there to be made: a verb and its object combine to designate either the subject or a quality characterizing the subject.

There are also V + O = N structures where the designatum is not the subject, and again many forms can be construed in more than one way. Is break-fast a V + O = S structure (the food or the occasion that breaks the fast), or is it the occasion or action of breaking the fast (V + O = time/occasion/action), or is it the food with which you break the fast (instrument)? Cease-fire and shut-eye would be other V + O = time/occasion/action structures; pas-time, pick-lock and ward-robe would be other V + O = instrument structures. Dodge-ball and lock-jaw would be V + O = sequence of events including V structures, likely along with other analyses. And so forth.

There are O + V = S structures such as cow-poke, door-stop, goat-herd, paper-punch, nail-set, spoke-shave, water-shed, and wind-break, which are like scarecrow nouns except that the order of the stems is reversed. And to go with them are O + V = Adj structures (rip-stop, contrast stop-gap, knee-jerk).

Then there are constructions in which more than two stems are compounded together, such as know-it-all or pick-me-up. These have a V + O and they designate the S, just as do scarecrow nouns, but they also have a particle in them. Other forms lack an O and still designate the S, e.g. stay-at-home or stick-in-the-mud, or ne’er-do-well or die-hard, or the many V + P = S forms like go-between or turn-off or knock-out (as in “she’s a knock-out”). Others consist of a V alone yet designate the S (V = S):[15] bore, cheat, cook, flirt, gossip,[16] sneak, tease.[17] Once more there is a generalization to be made (VP = S, we might call it), which can be done easily under CG, but which we would at least not be encouraged to do under traditional models.

Another related type is the V + O = O compounds such as push-pin, draw-bridge, pull-toy, etc. These are headed compounds, in fact right-headed, but they can (and must, if the relatedness is to be expressed) be related to the V + O = S nouns under a schema V + O = Central Argument. Then there are the V + S = S compounds (also right-headed), such as copy-cat, cut-grass, dump-truck, pry-bar, scrub-woman, scratch-awl, tow-boat (note that these all have transitive verbs), cry-baby, hop-toad, play-boy, pop-corn, work-man, etc.[18] And there are V + Instr = Instr forms like tie-rod or tow-rope or clip-board or pick-can or tote-bag. Also there are S + V = Action/Occasion structures (day-break, ship-wreck, sun-rise, sun-set), and S + V = O ones (God-send, cow-lick), and even this does not exhaust the list.[19]

A further group of relatives has, instead of a verb, a preposition. Compare cover-all(s) (a V + O = S structure) with over-all(s), a P + O = S structure; also note the recent commercial coinage under-alls. After-noon, under-shirt (to the extent that it means “what you wear under your shirt” rather than “shirt that you wear under other clothes”), upstairs, downstairs, out-of-doors, under-arm are all of this type. There is clearly a generalization to be made.[20] Then, of course, there are the P + S = S cases (under-side, over-coat if it means “coat that goes over your clothes” rather than “what goes over your coat”, after-word, etc.)[21] And P + O = Adj structures (over-all, under-cover, over-night, above-ground).

5. Conclusion

Figure 5 is an attempt to portray some of the schemas and relationships we have alluded to. As can easily be seen, the generalizations to be made are multitudinous and cross-cutting. To unite all the cases we have mentioned under a single generalization we must resort to the topmost schema, which says that one or more elements combine to make a word. This is of course true of the formation of all words, but it is so schematic that few if any analysts would consider it an adequate account of the data. Yet it, and it alone, is what all the data have in common. So how do we characterize them?

The traditional model encourages us to expect a neat, simple system. Under the influence of that model, and following the lead of analysts such as Kiparsky, Lieber, Selkirk and Williams, we would pick certain sub-generalizations, hopefully with a minimal amount of overlap, and hopefully including the most widespread and productive, as principles of word formation. We may, if we feel we must, mention that there exist exceptions, even systematic exceptions, to these principles, but that is an embarrassing rather than an expected fact. If we can consign those exceptional data to a different “module” of the grammar (e.g. non-productive morphology, pragmatics, etc.), we achieve the highly desirable quality of Predictivity: our chosen generalizations will always hold true for the chosen data, which of course is natural since the data chosen were those for which the generalizations held.   Cross-classifying sister generalizations, lower, sub-dividing generalizations, and higher uniting generalizations, are to be ignored, insofar as possible, unless perhaps some of the latter can be preserved in some other module of the grammar. The resulting model will be relatively simple and straightforward, but it will fit the data uncomfortably at best.

Under CG, however, any or all of the generalizations and particular items will, to the extent that they are through usage entrenched as conventional, become part of the grammar. The cost is that the system is not neat and simple, since it permits the redundancy of listing both generalizations and the particulars they subsume, nor is it strictly predictive, since none of the generalizations must hold in order for a structure to be grammatical. But the gains are great: we wind up with a system that fits the data, rather than one that mutilates them in order to make them fit.


References

Jackendoff, Ray. 1975. “Morphological and Semantic Regularities in the Lexicon.” Language 51.639-671.

Kiparsky, Paul. 1982. “Lexical Morphology and Phonology.” In Linguistic Society of Korea, ed., Linguistics in the Morning Calm, 3-91. Seoul: Hanshin Publishing Co.

Koutsoudas, Andreas. 1966. Writing Transformational Grammars: an Introduction. New York: McGraw Hill.

Langacker, Ronald W. 1982. “Space Grammar, Analysability, and the English Passive.” Language 58.22-80.

------. 1987a. “Nouns and Verbs.” Language 63.53-94.

------. 1987b. Foundations of Cognitive Grammar, Vol. 1. Grammatical Prerequisites. Stanford: Stanford University Press.

------. To appear. “A Usage-Based Model.” In Brygida Rudzka-Ostyn, ed., Topics in Cognitive Linguistics. Amsterdam and Philadelphia: John Benjamins.

Lieber, Rochelle. 1983. “Argument Linking and Compounds in English”. Linguistic Inquiry 14.251-285.

Selkirk, Elizabeth O. 1982. The Syntax of Words. Linguistic Inquiry Monographs #7. Cambridge, Mass.: MIT Press.

Williams, E. 1981. “On the Notions ‘Lexically Related’ and ‘Head of a Word’.” Linguistic Inquiry 12.245-274.

Footnotes

[1]In fact, achieving a generalization and simplifying the grammar were often considered virtually synonymous. E.g. Koutsoudas (1966:54-55): “The generality of a grammar will be our criterion for choosing between two or more grammars of a language. By generality we mean ‘that which accounts for the greatest number of cases,’ and our formal measurement of the generality of a grammar will be in terms of simplicity.”

[2]The schema-mediated relationships established by usage in speakers’ minds and thus in their language may or may not parallel the (etymological) relationships that prompted the establishment of the related senses in the first place. Some English speakers know and others correctly surmise that a SHIP’S BATHROOM is called a head because it was usually in the forward part of the ship: others may well construe other relationships, and for many the two may be completely separate meanings, united only by accidental homophony and homography.

[3]Note too that all the verbs are monosyllabic. In fact the vast majority of the nouns are, as well, and I would claim that the prototypical pattern would make those specifications. Yet there are exceptions: carry-all and tattle-tale have disyllabic verbs, and break-water and pick-pocket disyllabic nouns.

[4]The details of how this rule is formulated and integrated with other aspects of English grammar will of course depend largely on the theory in which one is working. It is quite conceivable (and under the classical assumptions, highly desirable) that it should be treated as a mere subcase of a more basic rule, and thus be subject to excision from the grammar.

[5]Under CG headship can be taken as a matter of degree, with no one specification absolutely necessary for some degree of headship to hold. In any case, identity of semantic designation (which in CG cannot be separated from grammatical category) is the primary characteristic involved.

[6]This is not entirely fair to Williams. He does offer his Righthand Head Rule as a definition: “we define the head of a morphologically complex word to be the righthand member of that word. ... Call this definition the Righthand Head Rule.” However, in the section where he attempts to “invest the notion ‘head of a word’ with some empirical content”, he gives several non-absolute characterizations (“it is generally the case that a suffix determines the category...”, “prefixes do not in general determine the category...” (p. 248), “for most compounds--the righthand member determines the category” (p. 249)), and he winds up speaking of unheaded (exocentric) and left-headed compounds, both incoherent concepts if “head” really means “rightmost member”, and calls them “exceptions to the claim that the head is rightmost in all words”. (He does not indicate whether he considers scarecrow nouns to be exocentric or not.) It becomes evident that by “define” he means something like “establish as the default case”: he has confirmed this in personal communication.

[7]Kiparsky (1982:6) also takes headship to mean (only) that “the category of a derived word is always non-distinct from the category of its head”. He devotes little discussion to this matter, but mentions that English words are “usually” right-headed (p. 6, cf. footnote 1). His discussion of scarecrow nouns and similar structures on p. 16 says nothing to indicate that they are not also right-headed, and he does insist that they (and all compounds) are headed. He later (p. 20) derives scarecrow nouns by insertion of the same zero nominalizing suffix which is used to derive bore or gossip (p. 7) or chimney-sweep or door-stop (p. 19), so the nominal status of these forms is not ultimately derived from that of the right-hand element, as in Lieber’s analysis. Bore, chimney-sweep, et al. are verb-headed before the suffixation, but Kiparsky’s formulation does not encourage us to think that the scarecrow nouns are.

[8]Lieber’s Convention IV (1983:253ff), which is “exceptionless” (p. 254), states that features from the right-hand stem percolate up to the node dominating the compound. It is not clear what features do or do not percolate, though syntactic category features clearly do, and gender and argument structure features apparently do also (p. 253). If in fact “all” features (p. 252-253, Conventions I and II), including semantic features, percolate, then scarecrow nouns are obvious exceptions. If plurality is a percolating feature, then saw-bones is an exception, since the compound is singular although its second component is plural. In Spanish forms such as mata-moscas (kills-flies) “fly-swatter” the righthand element, which being a noun is the obvious source for percolation, is both plural and feminine, though the compound is both masculine and singular. In any case, lack-luster has no adjectives in it, yet is adjectival, and right-hand would have to get its features from right (which is left) rather than hand (which would be right, for Lieber). There are many other types of exceptions, comprising hundreds of examples, to Lieber’s generalization (e.g. the V P combinations she mentions in her footnote 6 (p. 255) or the nouns formed on them, or the productive numeral + standard measuring unit N = Adj or container N + ful(l) = measurement N patterns). The rule, if made absolute, simply doesn’t work. Lieber, like Williams, ultimately has recourse to fiat declaration: the exceptions are not true compounds of English because “true compounds in English adopt the category of the second stem” (fn. 6, p. 255). I.e. her Percolation Convention IV works for all English compounds, and anything it doesn’t work for is ipso facto not an English compound.

[9]It is not clear to me why at least some scarecrow nouns do not fall under Selkirk’s version of the Right-hand Head Rule (p. 20), which specifies that the head is the rightmost component which shares the “syntactic” and “diacritic” (but apparently not semantic) features of the compound as a whole, including “features for tense, for example, or ... case features.” (p. 21). Thus by her definition of “head”, crow would presumably be the head of scare-crow in He saw a scarecrow. Yet she clearly excludes these nouns (p. 26), apparently because the second member and the compound as a whole do not designate the same kind of entity: “Cutthroat does not designate a throat, but rather someone who cuts throats.” In other words she is in practice including the identity of (semantic) designation (p. 22) as well as identity of syntactic features as definitional for headship. I believe that is the correct approach, but it should have been stated explicitly in defining “head”.

[10]Langacker (1987b:290) makes this explicit: “the schema describing the basic pattern for English compounds identifies the second member of the compound as the profile determinant [read ‘head’]: football thus designates a ball rather than a body part ...blackbird is a noun rather than an adjective, and so on. Such schemas embody the generalizations observable in specific combinations ... Note, however, that in a usage-based model [such as CG] these schemas are not invalidated by individual expressions having conflicting properties, nor do they preclude the possibility of alternate constructions with opposite specifications”. The scarecrow noun pickpocket is cited as an example of such a conflicting structure (p. 291).

[11]Under CG schemas at several different levels may be simultaneously active in sanctioning the production of a novel form: thus it is not correct to assume that if one is acting productively the others are not. Rather the effect is cumulative: scrub-all is sanctioned by both V + O = S and V + all = S, besides by V + all = commercially advertisable S. This makes it much more clearly English than, say, twist-tongue or play-guitar or drive-truck, which would only be sanctioned by the highest schema. Sanction can be thought of as varying directly with the salience of the sanctioning schema and with its productivity (the extent to which speakers are used to calling on it for sanction), and inversely with the elaborative distance between it and the sanctioned form. Any linguistic structure sanctions itself, strongly to the degree it is entrenched, and at the minimum possible elaborative distance, namely zero. (See Langacker 1987b, Ch. 11.)
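
For concreteness, the proportionality just described might be caricatured as follows (no formula is given in the text; this particular functional form and all the numbers are my invention):

```python
# A hedged caricature of the sanction relation described above: directly
# proportional to salience and productivity, inversely related to elaborative
# distance.  The functional form and the values are invented for illustration.
def sanction(salience: float, productivity: float, elab_distance: float) -> float:
    return (salience * productivity) / (1.0 + elab_distance)

# Cumulative sanction of a novel coinage like scrub-all, summed over schemas:
total = (sanction(0.5, 0.4, 2)      # V + O = S (salient but distant, weakly productive)
         + sanction(0.7, 0.8, 1)    # V + all = S
         + sanction(0.6, 0.9, 1))   # V + all = commercially advertisable S
print(round(total, 2))              # 0.62 with these made-up numbers
```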

[12]Actually, Jackendoff goes beyond what CG would--his criterion says that absolutely every generalization possible should be included in the grammar (though he does envision some sort of cost whereby a generalization uniting only two or three subcases would be less highly valued and might not be included). CG says that people do generalize freely, but that only those generalizations which they in fact make and which become conventional are to be included in the grammar.

[13]The obvious reason for the plurality is that a saw-bones is (or was) likely to saw more than one bone in his career. But a pick-pocket is even more likely to pick more than one pocket, yet we do not call him a pick-pockets. The plurality of bones in saw-bones, then (or the singularity of pocket in pick-pocket), is not strictly predictable: it is only reasonable, reflecting the reasonable but not predictable behavior of those who coined the usage and got it started on the road to conventionalization. Interestingly, in the highly productive (Mexican) Spanish V + O = S construction the object, if a count noun, is virtually always plural, even in cases where only one object will normally exist (e.g. cierra-puertas (closes-doors) “automatic door closer”, or trota-mundos (trots-worlds) “globe-trotter”). Similarly it can be argued that the verb in all the Spanish forms is inflected for 3 pers sg subj and present tense, like the verb in saws-all.

[14]Instead of designating a (stative) Thing, characterized as subject of a (processual) Relation, the adjective designates the Relation (in stative form), with the same Thing as its most prominent member (its subject). It is basically a matter of changing the type of designation, leaving the same Thing as most salient in either case.

[15]Kiparsky derives these forms by a zero nominalizing (agentive) suffix, and applies the same suffix to scarecrow nouns as well, and to the O + V = S and V + P/Adv = S types. There is something right about this. It allows Kiparsky to keep the VP = S generalization: all these forms, although compositionally quite different (apparently not all verb-headed for him), are subject to the same suffixation rule. (Note that this is one type of generalization that need not be absolute under the traditional model: morphemes are not expected to occur on every form.) We have treated the generalization as a pattern of semantico/syntactic extension rather than as a case of a zero morpheme: Langacker (1987b:471-474) shows that under CG the two analyses are in fact exactly equivalent.

[16]Whether the verb came etymologically from the noun or vice versa is not at issue; the crucial datum is the perception of (at least some) present-day speakers that the verb is basic and the noun derived from it.

[17]Richard Rhodes has pointed out (p.c.) that the negative cast of many of these forms coincides with the negative cast of many scarecrow nouns (recall the “criminal” and “miser” types discussed in Section 2, cf. Fig. 3). He is right, of course: here is yet another non-absolute generalization that should be captured.

[18]These forms are proscribed by Lieber (1983:261-2) and Kiparsky (1982:16), apparently on the strength of the fact that such forms as ?die-man or ?fall-sheep sound unlikely. (Fall-guy does occur, and work-man.) Kiparsky credits the supposed absence of such compounds to his First Sister Principle, which says that anything occurring with a verb in a compound must be its first (syntactically unexpressed) argument: subjects are apparently not arguments in his system. Lieber’s constraint requiring that the non-head element’s valences be satisfied within the compound stars all the cases with transitive verbs, and they would also be starred, with the intransitive verb cases, by her requirement that any non-object noun compounded with a verb “must be interpretable as a semantic argument” of the verb, “i.e. as a Locative, Manner, Agentive, Instrumental, or Benefactive argument” (p. 258) but not as subject, since “the external argument or subject is never linked in a compound.” The V + S = S forms mentioned above cannot then be compounds, or if they are the S is something other than a subject. This is another example of generalizations being made so absolute as to eliminate perfectly good data from consideration.

Selkirk’s system also proscribes these forms: “(2.31) The SUBJ argument of a lexical item may not be satisfied in compound structure” (1982:34). She, however, does note their existence (p. 24-25), and suggests deriving them via nominalization of the verb. Thus scrub-woman would be a N-N compound interpretable as woman “which has something to do with” scrubbing. This meaning then “can pragmatically be made somewhat more specific, approaching an argument-like interpretation.” I.e. we blind the syntax to the obvious and leave it to the pragmatics to tell us that what the woman has to do with scrubbing is doing it. This sort of roundabout analysis will let you get around almost any strictures; the only reason for it is to preserve the absoluteness of her rule 2.31. In a few cases (e.g. work-man), the independent existence of such a nominalized form of the verb makes the analysis more plausible, but even in those cases it does not necessarily reflect how people actually construe the forms.  Most speakers will paraphrase work-man as “man who works” rather than “man who does work”.

[19]Each of the following represents a related type not mentioned above: back-splash, break-front, grab-bag, heart-throb, hog-wash, land-fall, land-fill, play-ground, play-time, scatter-brain, swim-suit.

Not only are the patterns multitudinous, but many cases can be analyzed under more than one pattern. Two such cases are hang-man and dare-devil: Hu Mathews (p.c.) suggested hang-man as a typical scare-crow noun (the hangman hangs a man), while I had always construed it as a V + S = S structure (the hangman is a man who hangs people) (Selkirk 1982:24 apparently agrees.) Similarly a dare-devil may be taken to be a devil who dares to do anything (V + S = S) or a man who, by reckless action, dares the devil (V + O = S; cf. Selkirk 1982:26). I have found for each analysis in both cases several native speakers who claim they had always understood the word so: some (like me) have learned to make both analyses. In any case, both dare-devil and hang-man are clearly established in their own right; they can be sanctioned in speakers’ minds by either or both or neither schema without losing their status as English words.

Those so inclined might calculate the different ways make and work may be reasonably construed as contributing to the meaning of the compound make-work: there are at least six.

[20]In CG the schema covering these cases is identical to the V + O = S schema except that a stative rather than a processual transitive Relation--i.e. a P rather than a transitive V--is specified: thus the schema covering both types of cases will simply specify a transitive Relation, whether processual or stative.

[21]P + O = O structures are not easy to come by, perhaps because so many prepositions have opposites in which the S and O roles are switched, so a P + O = S structure with the opposite preposition yields the desired meaning. Down-spout may be a good example, if it means “spout down which something goes” rather than “spout which points down”. Mary Ellen Ryder suggests (p.c.) down staircase, as in don’t go up the down staircase, which is clearly right except that it is not clear that down staircase is a compound.
