Generative Semantics

Generative semantics is (or perhaps was) a research program within linguistics, initiated by the work of George Lakoff, John R. Ross, Paul Postal, and later James McCawley. The approach developed out of transformational generative grammar in the mid 1960s, but stood largely in opposition to work by Noam Chomsky and his students. The nature and genesis of the program are a matter of some controversy and have been extensively debated. Generative semanticists took Chomsky's concept of deep structure and ran with it, assuming (contrary to later work by Chomsky and Ray Jackendoff) that deep structures were the sole input to semantic interpretation. This assumption, combined with a tendency to consider a wider range of empirical evidence than Chomskyan linguists, led generative semanticists to develop considerably more abstract and complex theories of deep structure than those advocated by Chomsky and his students. Throughout the late 1960s and 1970s, there were heated debates between generative semanticists and more orthodox Chomskyans. The generative semanticists lost the debate, insofar as their research program ground to a halt by the 1980s. However, this was in part because the interests of key generative semanticists such as George Lakoff had gradually shifted away from the narrow study of syntax and semantics. A number of ideas from later work in generative semantics have been incorporated into cognitive linguistics (and indeed into mainstream Chomskyan linguistics, often without citation).


Generative Semantics
Generative semantics accounts for meaning directly, not through syntactic structure. In generative semantics, a descriptive grammar begins with a deep structure that is semantic and, to some extent, pragmatic. This deep structure consists of combinations of semantic features, semantic relations, performatives, and presuppositions. Deep structures are then subject to lexical insertions and transformations to ultimately yield surface structures, which then serve as the structures to which the rules of the phonological component apply.
In a generative semantic account of a language, all meaning is present in this deep structure (sometimes called logical structure in order to distinguish it from the syntactic deep structures of interpretive semantics). Syntactic constituent structure rules do not produce the deep logical structures, and transformations never result in changes of the meaning of a sentence. Furthermore, since this deep structure is purely semantic, generative semantics appears to be a clever means for describing paraphrase and ambiguity, both for syntax and for lexical items. This is particularly clear when we consider that some paraphrase relations hold between a single lexical item and a phrase with syntactic structure. Consider the following sentences:
1) In the old westerns, the hero would always kill his opponent in a gunfight.
2) In the old westerns, the hero would always cause his opponent to die in a gunfight.
Although they are stylistically distinct, (1) and (2) can be understood as paraphrases of one another. Yet these two surface structures are very different syntactically: (1) contains the single lexical item kill, while the corresponding portion of (2), cause to die, is a phrase. In interpretive semantics, the rules of semantic interpretation can be stated in such a way as to provide the same interpretation for kill and cause to die. In generative semantics, however, the issue can be handled more directly - the corresponding elements simply have the same deep semantic structure, a possible solution since deep structure in generative semantics does not include any syntactic information.
One difficulty with the generative semantics approach is its failure, up to the present time, to provide a detailed account of how the semantic deep structures are converted into syntactic structures. However, interpretive semantics has been criticized for failure to provide a sufficiently formal account of the rules of semantic interpretation and of the principles by means of which the theory incorporates information about presupposition, illocutionary force, and semantic relations.
Chomsky, Katz, and Fodor (1963) argued that the syntactically motivated deep structure presents the only structure applicable to the semantic interpretive components of the grammar. In contrast, the proponents of generative semantics maintained that semantic structures are generated by basic (universal) rules similar to those of predicate logic. The meaning of individual lexemes is described as a syntactically structured complex of basic semantic elements. For example, the verb convince (x convinces y to do z) is paraphrased by x does that y wants that z, where do and want are atomic predicates which form more complex predicates through transformations. In addition, the number of syntactic categories is reduced to three: S (= proposition), NP (= argument), and V (= predicate). Since the logical-semantic form of the sentence is now seen as the underlying (generative) structure, the otherwise strict division between syntax and semantics collapses, especially that between lexical semantics, word formation, and the semantics of propositions. Critics of generative semantics point out the ad hoc nature of the descriptive mechanism and the 'overpowerful' generative capacity of this model, whose apparatus could generate more complex structures than are realized in human languages.
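The reduction to three categories can be made concrete with a small tree representation. The following is a minimal sketch in Python, not any historical formalism: the node categories S, NP, and V and the do/want decomposition of convince follow the text, while the data structure itself is purely illustrative.

```python
# A generative-semantic "logical structure" needs only three node
# types: S (proposition), V (predicate), and NP (argument).
# Lexical items like "convince" decompose into atomic predicates.

class Node:
    def __init__(self, cat, label=None, children=()):
        self.cat = cat          # 'S', 'V', or 'NP'
        self.label = label      # predicate or argument name (leaves only)
        self.children = list(children)

    def __repr__(self):
        if not self.children:
            return f"{self.cat}:{self.label}"
        inner = " ".join(repr(c) for c in self.children)
        return f"[{self.cat} {inner}]"

def S(pred, *args):
    """A proposition: a predicate V applied to its arguments,
    which may themselves be propositions."""
    return Node('S', children=[Node('V', pred), *args])

def NP(name):
    return Node('NP', name)

# 'x convinces y to do z'  ~  x DOES that [y WANTS that z]
convince = S('DO', NP('x'), S('WANT', NP('y'), Node('S', 'z')))
print(convince)   # [S V:DO NP:x [S V:WANT NP:y S:z]]
```

Note that the embedded proposition z is left unanalyzed here; in a full derivation it would itself be an S built from a predicate and its arguments.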
The leading idea of generative semantics is that there is no principled distinction between syntactic processes and semantic processes. This notion was accompanied by a number of subsidiary hypotheses: first, that the purely syntactic level of 'deep structure' posited in Chomsky's 1965 book Aspects of the Theory of Syntax cannot exist; second, that the initial representations of derivations are logical representations which are identical from language to language (the universal-base hypothesis); and third, that all aspects of meaning are representable in phrase-marker form. In other words, the derivation of a sentence is a direct transformational mapping from semantics to surface structure. Figure 1 represents the initial (1967) generative semantics model.
In its initial stages, generative semantics did not question the major assumptions of Chomsky's Aspects theory; indeed, it attempted to carry them through to their logical conclusion. For example, Chomsky had written that 'the syntactic component of a grammar must specify, for each sentence, a deep structure that determines its semantic representation' (1965, p. 16). Since through the late 1960s little elaborative work was done to specify any interpretive mechanisms by which the deep structure might be mapped onto meaning, Lakoff and others took the word 'determines' in its most literal sense and simply equated the two levels. Along the same lines, Chomsky's (tentative) hypothesis that selectional restrictions were to be stated at deep structure also led to that level's being conflated with semantic representation. Since sentences such as (3a) and (3b), for example, share several selectional properties (the possible subjects and objects, and so on), it was reasoned that the two sentences had to share deep structures. But if such were the case, generative semanticists reasoned, then that deep structure would have to be so close to the semantic representation of the two sentences that it would be pointless to distinguish the two levels.
3) (a) Mary sold the book to John.
(b) John bought the book from Mary.
As Figure 1 indicates, the question of how and where lexical items entered the derivation was a topic of controversy in generative semantics. McCawley (1968) dealt with this problem by treating lexical entries themselves as structured composites of semantic material (the theory of lexical decomposition), and thus offered the structure in Diagram 1 as the entry for kill. After the transformational rules had created a substructure in the derivation that matched the structure of a lexical entry, the phonological matrix of that entry would be insertable into the derivation. McCawley hesitantly suggested that lexical-insertion transformations might apply in a block after the application of the cyclic rules; however, generative semanticists never did agree on the locus of lexical insertion, nor even on whether it occurred at some independently definable level at all.
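Lexical insertion as McCawley conceived it can be sketched as subtree replacement. The sketch below is an assumption-laden illustration in Python: trees are represented as nested tuples, and the CAUSE-BECOME-NOT-ALIVE decomposition of kill is the standard textbook rendering of McCawley's analysis, not a direct transcription of Diagram 1.

```python
# Lexical entry for "kill" as a structured composite of semantic
# material (textbook rendering of McCawley's decomposition).
KILL_ENTRY = ('CAUSE', ('BECOME', ('NOT', ('ALIVE',))))

def insert_lexical_item(tree, entry, phonological_form):
    """Once the transformations have assembled a substructure matching
    a lexical entry, replace it with that entry's phonological matrix."""
    if tree == entry:
        return phonological_form
    if isinstance(tree, tuple):
        return tuple(insert_lexical_item(t, entry, phonological_form)
                     for t in tree)
    return tree

# A derivation containing the decomposed predicate complex:
logical_structure = ('PAST', ('CAUSE', ('BECOME', ('NOT', ('ALIVE',)))))
surface = insert_lexical_item(logical_structure, KILL_ENTRY, 'kill')
# surface == ('PAST', 'kill')
```

The open question the text describes - whether such insertions apply in a block after the cyclic rules or are interleaved with them - corresponds here to the unanswered question of *when* this replacement runs relative to the other transformations.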
Generative semanticists realized that their rejection of the level of deep structure would be little more than word-playing if the transformational mapping from semantic representation to surface structure turned out to be characterized by a major break before the application of the familiar cyclic rules - particularly if the natural location for the insertion of lexical items was precisely at this break. They therefore constructed a number of arguments to show that no such break existed. The most compelling were moulded after Morris Halle's classic argument against the structuralist phoneme (Halle, 1959). Paralleling Halle's style of argumentation, generative semanticists attempted to show that the existence of a level of deep structure distinct from semantic representation would demand that the same generalization be stated twice, once in the syntax and once in the semantics.
Since a simple transformational mapping from semantics to the surface entails that no transformation can change meaning, any examples that tended to show that such rules were meaning-changing presented a profound challenge to generative semantics. Yet such examples had long been known to exist: for example, passive sentences containing multiple quantifiers differ in meaning from their corresponding actives. The scope differences between (4a) and (4b), for example, seem to suggest that Passive is a meaning-changing transformation:
4) (a) Many men read few books.
(b) Few books were read by many men.
The solution to this problem put forward by Lakoff (1971a) was to supplement the strict transformational derivation with another type of rule - a global rule - which has the ability to state generalizations between derivationally non-adjacent phrase markers. Examples (4a-b) were handled by a global rule that says that if one logical element has wider scope than another in semantic representation, then it must precede it in surface structure. This proposal had the virtue of allowing both the hypothesis that transformations are meaning-preserving and the hypothesis that the deepest syntactic level is semantic representation to be technically maintained.
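The scope-precedence global rule can be stated as a simple well-formedness check on a whole derivation, relating semantic representation directly to surface structure. The following Python sketch is illustrative only; representing scope and surface order as lists of quantifier phrases is my simplification, not Lakoff's formalism.

```python
def global_rule_satisfied(scope_order, surface_order):
    """Lakoff-style global rule: a derivation is well-formed only if
    every quantifier with wider scope in semantic representation
    (earlier in scope_order) precedes the narrower-scope quantifiers
    in surface structure (surface_order, leftmost first)."""
    position = {q: i for i, q in enumerate(surface_order)}
    return all(position[wider] < position[narrower]
               for i, wider in enumerate(scope_order)
               for narrower in scope_order[i + 1:])

# (4a) 'Many men read few books': wide-scope 'many men' precedes
# 'few books' on the surface, so this reading passes the rule.
ok = global_rule_satisfied(['many men', 'few books'],
                           ['many men', 'few books'])

# (4b) 'Few books were read by many men': the same scope reading is
# blocked, since surface precedence is reversed.
blocked = global_rule_satisfied(['many men', 'few books'],
                                ['few books', 'many men'])
```

Note how the rule compares two derivationally non-adjacent levels at once, which is exactly what an ordinary transformation, defined over a single phrase marker at a time, cannot do.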
Soon many examples of other types of processes were found which could not be stated in strict transformational terms, but seemed instead to involve global relations.These involved presupposition, case assignment, and contractions, among other phenomena.
In the late 1960s, the generative semanticists began to realize that as deep structure was pushed back, the inventory of syntactic categories became more and more reduced, and those remaining categories bore a close correspondence to the categories of symbolic logic. The three categories whose existence generative semanticists were certain of in this period - sentence, noun phrase, and verb - seemed to correspond directly to the proposition, argument, and predicate of logic. Logical connectives were incorporated into the class of predicates, as were quantifiers. This was an exhilarating discovery for generative semanticists and indicated to them more than anything else that they were on the right track. For the deepest level of representation now had a 'natural' language-independent basis, rooted in what Boole (1854) had called 'The Laws of Thought'. What is more, syntactic work in languages other than English was leading to the same three basic categories for all languages. The universal base hypothesis, not surprisingly, was seen as one of the most attractive features of generative semantics.
The development of generative semantics in the early 1970s was marked by a continuous elaboration and enrichment of the theoretical devices that it employed in grammatical description. By 1972, George Lakoff's conception of grammatical organization appeared as in Figure 2 (an oversimplified diagram based on the discussion in Lakoff, 1974).
This elaboration was necessitated by the steady expansion of the type of phenomena that generative semanticists felt required a 'grammatical' treatment. As the scope of formal grammar expanded, so did the number of formal devices and their power. Arguments motivating such devices invariably took the following form:
5) (a) Phenomenon P has in the past been considered to be simply 'pragmatic', that is, part of performance and hence not requiring treatment within formal grammar.
(b) But P is reflected both in morpheme distribution and in the 'grammaticality' judgments that speakers are able to provide.
(c) If anything is the task of the grammarian, it is the explanation of native-speaker judgments and the distribution of morphemes in a language. Therefore, P must be handled in the grammar.
(d) But the grammatical devices now available are insufficient for this task. Therefore, new devices of greater power must be added.
John R. Ross (1970) and Jerold Sadock (1974) were the first to argue that what in the past had been considered to be 'pragmatic' phenomena were amenable to grammatical treatment. Both linguists, for example, argued that the type of speech act which a sentence represents should be encoded directly in its semantic representation, i.e. its underlying syntactic structure. Analogously, George Lakoff (1971b) arrived at the conclusion that a speaker's beliefs about the world needed to be encoded into syntactic structure, on the basis of the attempt to account syntactically for judgments such as the following, which he explicitly regarded as 'grammaticality' judgments:
6) (a) John told Mary that she was ugly and then she insulted him.
(b) * John told Mary that she was beautiful and then she insulted him.
He also argued that in order to provide a full account of the possible antecedents of anaphoric expressions, even deductive reasoning had to enter into grammatical description (1971c). As Lakoff pointed out, the antecedent of too in (7), 'the mayor is honest', is not present in the logical structure of the sentence, but must be deduced from it and its associated presupposition, 'Republicans are honest':
7) The mayor is a Republican and the used-car dealer is honest too.
The deduction, then, was to be performed in the grammar itself.
Finally, Lakoff (1973) concluded that the graded nature of speaker judgments falsifies the notion that sentences should be either generated, i.e. considered 'grammatical', or not generated, i.e. treated as 'ungrammatical'. Lakoff suggested instead that a mechanism be devised to assign grammaticality to a certain degree. The particulars of fuzzy grammar, as it was called, were explored in a series of papers by John R. Ross (see especially Ross, 1973).
Not surprisingly, as the class of 'grammatical' phenomena increased, the competence-performance dichotomy became correspondingly cloudy. George Lakoff made it explicit that the domain of grammatical theory was no less than the domain of linguistics itself. Grammar, for Lakoff (1974, pp. 159-61), was to specify the conditions under which sentences can be appropriately used:

… One thing that one might ask is whether there is anything that does not enter into rules of grammar. For example, there are certain concepts from the study of social interaction that are part of grammar, e.g. relative social status, politeness, formality, etc. Even such an abstract notion as free goods enters into rules of grammar. Free goods are things (including information) that everyone in a group has a right to. (Italics in original)

Since it is hard to imagine what might not affect the appropriateness of an utterance in actual discourse, the generative-semantic program moved with great rapidity from the task of grammar construction to that of observing language in its external setting. By the mid 1970s, most generative semanticists had ceased proposing explicit grammatical rules altogether. The idea that any conceivable phenomenon might influence such rules made doing so a thorough impracticality.
As noted above, generative semantics had collapsed well before the end of the 1970s. To a great extent, this was because its opponents were able to show that its assumptions led to a too complicated account of the phenomena under analysis. For example, interpretivists showed that the purported reduction by generative semantics of the inventory of syntactic categories to three was illusory. As they pointed out, there is a difference between nouns, verbs, adjectives, adverbs, quantifiers, prepositions, and so on in surface structure, regardless of what is needed at the most underlying level. Hence, generative semantics would need to posit special transformations to create derived categories, i.e. categories other than verb, sentence, and noun phrase. Along the same lines, generative semantics never really succeeded in accounting for the primary function of the renounced level of deep structure - the specification of morpheme order. As most syntacticians soon realized, the order of articles, adjectives, negatives, numerals, nouns, and noun complements within a noun phrase is not predictable, or even statable, on semantic grounds. How then could generative semantics state morpheme order? Only, it seemed, by supplementing the transformational rules with a close-to-the-surface filter that functioned to mimic the phrase-structure rules of a theory with a level of deep structure. Thus, despite its rhetorical abandonment of deep structure, generative semantics would end up slipping that level in through the back door.
The interpretive account of 'global' phenomena, as well, came to be preferred over the generative-semantic treatment. In general, the former involved coindexing mechanisms, such as traces, that codified one stage of a derivation for reference by a later stage. In one sense, such mechanisms were simply formalizations of the global rules they were intended to replace. Nevertheless, since they involved the most minimal extensions of already existing theoretical devices, solutions involving them, it seemed, could be achieved without increasing the power of the theory. Coindexing approaches came to be more and more favored over global approaches, since they enabled the phenomenon under investigation to be concretized and, in many cases, pointed the way to a principled solution.
Finally, by the end of the decade, virtually nobody accepted the generative-semantic attempt to handle all pragmatic phenomena grammatically. The mid and late 1970s saw an accelerating number of papers and books which cast doubt on the possibility of one homogeneous syntax-semantics-pragmatics, and on its consequent abandonment of the competence-performance distinction.
While the weight of the interpretivist counterattack was a major component of the demise of generative semantics, it was not the deciding factor. It is not unfair, in fact, to say that generative semantics destroyed itself. Its internal dynamic led it irrevocably to content itself with mere descriptions of grammatical phenomena, instead of attempting explanations of them.
The dynamic that led generative semantics to abandon explanation flowed from its practice of regarding any speaker judgment and any fact about morpheme distribution as a de facto matter for grammatical analysis. Attributing the same theoretical weight to each and every fact about language had disastrous consequences. Since the number of facts is, of course, absolutely overwhelming, simply describing the incredible complexities of language became the all-consuming task, with formal explanations postponed to some future date. To students entering theoretical linguistics in the mid 1970s, who were increasingly trained in the sciences, mathematics, and philosophy, the generative-semantic position on theory construction and formalization was anathema. It is hardly surprising that they found little of interest in this model.
At the same time that interpretivists were pointing out the syntactic limitations of generative semantics, that framework was co-opted from the opposite direction by sociolinguistics.Sociolinguists looked with amazement at the generative-semantic program of attempting to treat societal phenomena in a framework originally designed to handle such sentence level properties as morpheme order and vowel alternations.They found no difficulty in convincing those generative semanticists most committed to studying language in its social context to drop whatever lingering pretence they still might have of doing a grammatical analysis, and to approach the subject matter instead from the traditional perspective of the social sciences.
While generative semantics is now no longer regarded as a viable model of grammar, there are innumerable ways in which it has left its mark on its successors. Most importantly, its view that sentences must at one level have a representation in a formalism isomorphic to that of symbolic logic is now widely accepted by interpretivists, and in particular by Chomsky. It was generative semanticists who first undertook an intensive investigation of syntactic phenomena which defied formalization by means of transformational rules as they were then understood, an investigation which led to the plethora of mechanisms such as indexing devices, traces, and filters that are now part of the interpretivists' theoretical store. Even the idea of lexical decomposition, for which generative semanticists were much scorned, has turned up in the semantic theories of several interpretivists.

Interpretive vs. Generative Semantics
The label interpretive semantics describes any approach to generative grammar that assumes that rules of semantic interpretation apply to already generated syntactic structures. It was coined to contrast with generative semantics, which posits that semantic structures are directly generated and then undergo a transformational mapping to surface structure. Confusingly, however, while 'generative semantics' is the name of a particular framework for grammatical analysis, 'interpretive semantics' is only the name for an approach to semantic rules within a set of historically related frameworks. Thus there has never been a comprehensive theoretical model of interpretive semantics as there has been of generative semantics.
After the collapse of generative semantics in the late 1970s, virtually all generative grammarians adopted the interpretive-semantic assumption that rules of interpretation apply to syntactic structures.Since the term no longer singles out one of a variety of distinct trends within the field, it has fallen into disuse.
Followers of interpretive semantics in the 1970s were commonly referred to simply as interpretivists, as well as by the more cumbersome interpretive semanticists. A parallel shortening has been applied to the name for the approach itself: any theory that posits rules of semantic interpretation applying to syntactic structures is typically called an interpretive theory.
The earliest generative treatment of semantics, Katz and Fodor's 1963 paper 'The structure of a semantic theory', was an interpretive one. The goals they set for such a theory were to underlie all subsequent interpretive approaches to semantics and, indeed, have characterized the majority position of generative grammarians in general with respect to meaning. Most importantly, Katz and Fodor drew a sharp line between those aspects of sentence interpretation deriving from linguistic knowledge and those deriving from beliefs about the world. That is, they asserted the theoretical distinction between semantics and pragmatics.
Katz and Fodor motivated this dichotomy by pointing to sentence pairs such as Our store sells horse shoes and Our store sells alligator shoes. As they pointed out, in actual usage these sentences are not taken as ambiguous - the former is typically interpreted as '… shoes for horses', the latter as '… shoes made from alligator skin'. However, they argued that it is our knowledge of the world, not of the language, that tells us that shoes are made for horses but not for alligators, and that shoes are made out of alligator skin but not often out of horse hide (and if they are, we call them 'leather shoes'). Semantic theory, then, would characterize both sentences as ambiguous - the only alternative, as they saw it, would be for such a theory to incorporate all of human culture and experience. Katz and Fodor thus set the tone for subsequent work in interpretive semantics by assuming that the semantic component of the grammar has responsibility for accounting for the full range of possible interpretations of any sentence, regardless of how world knowledge might limit the number of interpretations actually assigned to an utterance by participants in a discourse. Katz and Fodor also set a lower bound for their interpretive theory: namely, to describe and explain speakers' ability to (1) determine the number and content of the readings of a sentence; (2) detect semantic anomalies; (3) decide on paraphrase relations between sentences; and (4) more vaguely, mark 'every other semantic property that plays a role in this ability' (1963, p. 176).
The Katz-Fodor interpretive theory contains two components: the dictionary, later called the lexicon, and the projection rules.The former contains, for each lexical item, a characterization of the role it plays in semantic interpretation.The latter determines how the structured combinations of lexical items assign a meaning to the sentence as a whole.
The dictionary entry for each item consists of a grammatical portion, indicating the syntactic category to which it belongs, and a semantic portion containing semantic markers, distinguishers, and selectional restrictions. The semantic markers and distinguishers each represent some aspect of the meaning of the item, roughly corresponding to its systematic and incidental aspects, respectively. For example, the entry for bachelor contains markers such as (Human), (Male), and (Young), and distinguishers such as [who has never married] and [who has the first or lowest academic degree]. Thus a Katz-Fodor lexical entry very much resembles the product of a componential analysis.
The first step in the interpretation of a sentence is the plugging of the lexical items from the dictionary into the syntactically generated phrase-marker.After insertion, projection rules apply upwards from the bottom of the tree, amalgamating the readings of adjacent nodes to specify the reading of the node which immediately dominates them.
Since any lexical item might have more than one reading, if the projection rules were to apply in an unconstrained fashion, the number of readings of a node would simply be the product of the number of readings of those nodes which it dominates. However, the selectional restrictions forming part of the dictionary entry for each lexical item serve to limit the amalgamatory possibilities. For example, the entry for the verb hit in the Katz-Fodor framework contains a selectional restriction limiting its occurrence to objects with the marker (Physical Object). The sentence The man hits the colorful ball would thus be interpreted as meaning '… strikes the brightly colored round object', but not as having the anomalous reading '… strikes the gala dance', since dance does not contain the marker (Physical Object).
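The amalgamation of readings under a selectional restriction can be sketched as a simple filter. The Python below is a minimal illustration, not Katz and Fodor's actual notation: readings are represented as (marker-set, gloss) pairs, and the particular markers and glosses for ball are invented for the hit example in the text.

```python
# Readings for the noun "ball": each is a set of semantic markers
# plus a gloss (illustrative shapes, not Katz-Fodor notation).
BALL_READINGS = [
    ({'Physical Object'}, 'round object'),
    ({'Event', 'Social Activity'}, 'gala dance'),
]

# Selectional restriction from the entry for "hit": its object must
# carry the marker (Physical Object).
HIT_OBJECT_RESTRICTION = {'Physical Object'}

def amalgamate(verb_restriction, object_readings):
    """Projection-rule step: amalgamating verb and object keeps only
    those object readings whose markers satisfy the verb's
    selectional restriction, blocking anomalous combinations."""
    return [gloss for markers, gloss in object_readings
            if verb_restriction <= markers]

# "The man hits the colorful ball" keeps only the round-object
# reading; the 'gala dance' reading is filtered out.
readings = amalgamate(HIT_OBJECT_RESTRICTION, BALL_READINGS)
```

Without the restriction, the two readings of ball would both survive amalgamation, and the sentence would come out two-ways ambiguous, illustrating the multiplication of readings described above.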
In the years following the appearance of Katz and Fodor's work, the attention of interpretivists turned from the question of the character of the semantic rules to that of the syntactic level most relevant to their application.
An attractive solution to this problem was put forward in Katz and Postal's book An Integrated Theory of Linguistic Descriptions (1964). Katz and Postal concluded that all information necessary for the application of the projection rules is present in the deep structure of the sentence or, alternatively stated, that transformational rules do not affect meaning. This conclusion became known simply as the Katz-Postal Hypothesis.
The Katz-Postal Hypothesis received support on several grounds. First, rules such as Passive distort the underlying grammatical relations of the sentence - relations that quite plausibly affect its semantic interpretation. Hence, it seemed logical that the projection rules should apply to a level of structure that exists before the application of such rules, i.e., they should apply to deep structure. Second, it was typically the case that discontinuities were created by transformational rules (look … up, have … en, etc.) and never the case that a discontinuous underlying construction became continuous by the application of a transformation. Naturally, then, it made sense to interpret such constructions at an underlying level where their semantic unity is reflected by syntactic continuity. Finally, while there were many motivated examples of transformations which deleted elements contributing to the meaning of the sentence - the transformations forming imperatives and comparatives, for example - none had been proposed which inserted such elements. The rule which Chomsky (1957) had proposed to insert meaningless supportive do was typical in this respect. Again, this fact pointed to a deep-structure interpretation.
The hypothesis that deep structure is the sole input to the semantic rules dominated interpretive semantics for the next five years, and was incorporated as an underlying principle of its offshoot, generative semantics. Yet there were lingering doubts throughout this period that transformational rules were without semantic effect. Chomsky expressed these doubts in a footnote in Aspects of the Theory of Syntax (1965, p. 224), where he reiterated the feeling that he had expressed in Syntactic Structures (1957) that Everyone in the room knows at least two languages and At least two languages are known by everyone in the room differ in meaning. Yet he considered that both interpretations might be 'latent' in each sentence. A couple of years later he gave his doubts even stronger voice, though he neither gave specific examples nor made specific proposals.
The controversy surrounding generative semantics stemmed in part from the competition between two fundamentally different approaches to semantics within transformational generative syntax. The first semantic theories designed to be compatible with transformational syntax were interpretive: syntactic rules enumerated a set of well-formed sentences paired with syntactic structures, each of which was assigned an interpretation by the rules of a separate semantic theory. This left syntax relatively (though by no means entirely) "autonomous" with respect to semantics, and was the approach preferred by Chomsky. In contrast, generative semanticists argued that interpretations were generated directly by the grammar as deep structures, and were subsequently transformed into recognizable sentences by transformations. This approach necessitated more complex deep structures than those proposed by Chomsky, and more complex transformations as a consequence. Despite this additional complexity, the approach was appealing in several respects. First, it offered a powerful mechanism for explaining synonymy. In his initial work in generative syntax, Chomsky had motivated transformations using active/passive pairs such as "I hit John" and "John was hit by me", which despite their near-identical meaning have quite different surface forms. Generative semanticists wanted to account for all cases of synonymy in a similar fashion - an impressively ambitious goal before the advent of more sophisticated interpretive theories in the 1970s. Second, the theory had a pleasingly intuitive structure: the form of a sentence was quite literally derived from its meaning via transformations. To some, interpretive semantics seemed rather "clunky" and ad hoc in comparison. This was especially so before the development of trace theory.

Conclusion
Generative semantics was a theory within transformational grammar that developed from the late 1960s to the mid-1970s. The original proposal was that a base component of a grammar should directly generate semantic representations of sentences, which would be converted to surface structures with no intervening level of deep structure. This was associated in particular with the view that lexical items were units only at the surface level. But in the 1970s it became clear that the proposed semantic representations could not be assigned by rules of grammar independent of the knowledge, beliefs, etc. of individual speakers.
Generative semantics includes semantic and pragmatic information in a linguistic description. According to generative semantics, interpretation is independent of syntactic structure. That is, changing the structure does not influence the meaning. In a generative semantics account of a language, all meaning is present in a deep structure which is sometimes called logical structure in order to distinguish it from the syntactic structures of interpretive semantics.
Interpretive semantics holds that semantic interpretation depends on syntactic structure. That is, by changing the arrangement of the words in a sentence, the meaning of the sentence changes. Interpretive semantics has three versions. The Standard Theory states that meaning resides in the deep structure; that is, transformations do not change the meaning. The Extended Standard Theory states that meaning resides in both deep and surface structures, so transformations can change the meaning. The Revised Extended Standard Theory states that meaning resides at the level of surface structure. According to this last version, deletion and movement transformations leave traces of constituents, thereby permitting all semantic interpretation to occur at the surface structure. This version is also called Trace Theory.