Hello. In the e-lecture Generative Grammar we saw that any theory capable of producing well-formed sentences and rejecting ill-formed sentences is referred to as a Generative Grammar. A central component of a Generative Grammar is the Phrase Structure Component. In this e-lecture we will first show that there are several problems for simple Phrase Structure Grammars. We will then take these arguments to motivate two of the most popular variants of generative theories of language, Transformational Grammar and Unification Grammar. These variants, which will be discussed in detail in follow-up e-lectures, essentially have the same goals and share some common ground. Any generative theory of grammar wants to provide a structural description of all and only the grammatical sentences of a language. The simplest grammar would simply list all the sentences of a language. Such a model, however, could not handle any novel sentences. Thus we need more complex grammars that involve at least two components: a Phrase Structure Component, that is, a rule component, and a lexicon that contains information about the words of a language. Such a model is referred to as a Phrase Structure Grammar. Within more complex models of grammar the PSG, the Phrase Structure Grammar, functions as the basic component or base. Thus a Phrase Structure Grammar of whatever type constitutes the core of any Generative Grammar. Let us look at its components in more detail. The Phrase Structure Component of a Phrase Structure Grammar consists, first, of a set of symbols, that is, the syntactic categories. Here you find I-bar, adjectival phrase, verb phrase, determiner, noun phrase, and so on. Secondly, it consists of a system of rules, a system of phrase structure rules. Here I have chosen a fragment of the so-called X-bar system, which is by no means complete, but I will use it for illustration purposes.
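The rule fragment just described can be sketched in code. The following is a minimal illustration in Python, with the rule names and the 50/50 treatment of optional constituents assumed for demonstration, not taken from the lecture's formalism; optional constituents carry a leading `?`.

```python
import random

# A toy version of the X-bar rule fragment: each category maps to
# its possible expansions; '?' marks an optional constituent.
RULES = {
    "IP": [["NP", "I'"]],
    "NP": [["?Det", "N'"]],
    "N'": [["?AP", "N"]],
    "I'": [["I", "VP"]],
    "VP": [["V'"]],
    "V'": [["V", "?NP"]],
}

def expand(category, rng):
    """Recursively expand a category into its terminal categories."""
    if category not in RULES:
        return [category]              # terminal node: a placeholder delta
    terminals = []
    for part in rng.choice(RULES[category]):
        name = part.lstrip("?")
        if part.startswith("?") and rng.random() < 0.5:
            continue                   # this optional constituent is skipped
        terminals.extend(expand(name, rng))
    return terminals

print(expand("IP", random.Random(1)))
```

Whatever options the random draw picks, every generated string contains a subject noun, an inflection node, and a verb, in that order, which is exactly what the rule system guarantees.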
With such a rule system we can generate several sentence hierarchies. Let us look at two of them. In this first example all rules have been used, and here is the resulting hierarchical representation step by step. Rule number one expands the inflectional phrase IP into two branches, noun phrase and inflectional phrase or I-bar. The noun phrase is expanded into an optional determiner, which has been used here, and N-bar. These little deltas in the terminal nodes, by the way, are placeholders for actual words that will have to be added later. Rule number three expands N-bar into just a noun; the adjectival phrase, which is optional, has not been used here. Rule number four expands I-bar into the inflectional component and verb phrase. The verb phrase is expanded into verb-bar, and verb-bar is finally expanded into verb; the optional noun phrase, which could be part of a verb phrase, is not used here. Alternatively, you can also present such a tree by means of a list, a labelled bracketing. Believe it or not, there are several people who prefer such a list, especially programmers, who find this representation more explanatory than the tree that I have illustrated over here. A reasonable sentence using this structure could be something like 'the child PAST sleep', that is, 'the child slept'. Let's look at another example. In this example, again, all rules have been used, but here rule number two has been used without the determiner option, and rule number three has been applied twice: in the subject noun phrase without the optional adjectival phrase, and in the object noun phrase with the adjectival phrase. Here is the result; a suitable sentence could be 'John likes cold milk'. By the way, the use of phrase structure rules has a long tradition with various milestones. If you are interested, consult my e-lecture Phrase Structure One for details. Let us now turn our attention to the second component of a phrase structure grammar, the lexicon.
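The list notation just mentioned is easy to produce mechanically. Here is a small sketch, with the tree for 'the child PAST sleep' encoded as nested Python tuples (an assumed encoding chosen for illustration):

```python
# The first example tree as nested (label, children...) tuples.
tree = ("IP",
        ("NP", ("Det", "the"), ("N'", ("N", "child"))),
        ("I'", ("I", "PAST"), ("VP", ("V'", ("V", "sleep")))))

def bracket(node):
    """Render a tree as a labelled bracketing, e.g. [Det the]."""
    if isinstance(node, str):
        return node
    label, *children = node
    return "[" + label + " " + " ".join(bracket(c) for c in children) + "]"

print(bracket(tree))
# prints: [IP [NP [Det the] [N' [N child]]] [I' [I PAST] [VP [V' [V sleep]]]]]
```

The bracketed string and the tree diagram carry exactly the same hierarchical information; they differ only in presentation.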
Now the lexicon contains the information about the words of a language. In its simplest form it pairs lexemes with their grammatical category: a set of determiners, a set of adjectives that occur in an adjectival phrase, nouns, and verbs. Let us illustrate how lexical insertion works. Here is a sentence structure that can be the result of the application of some of our phrase structure rules. By means of a simple lexical insertion principle we can now match word classes with the terminal nodes in the tree. For example, we could have a sentence such as 'this old boy slept', a wonderful sentence. But as an alternative we could also generate sentences that sound extremely odd yet still satisfy the insertion principle, such as 'strong house shouted', and you will possibly agree that such a sentence is ungrammatical. Such strange cases have to be avoided, and any promising theory of grammar needs a more sophisticated lexicon and more elaborate lexical insertion principles that do more than just match word classes with terminal nodes. But even though there is agreement about the necessity of these two components, that is, the phrase structure rule component and the lexicon, there are considerable differences in the precise formulation of the rules and constraints involved. The following problems, for example, cannot be handled by means of simple phrase structure grammars. There are sentences where intuitively adjacent constituents are separated: X2 really belongs to X1, but sometimes it is realized as a sort of final element, in which case branches would have to cross, not a very nice situation. And then there are sentences where the internal relations between the constituents cannot be shown. How are these sentences related: 'John ate a cake' versus 'what did John eat'? They are closely related; one is a declarative sentence, one is an interrogative sentence, but how can we show this relationship?
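The overgeneration problem can be made concrete with a few lines of code. This is a sketch of naive lexical insertion under an assumed toy lexicon: any word of the right class fits a matching terminal node, so the odd sentences come out right alongside the good ones.

```python
import itertools

# A toy lexicon pairing word classes with candidate words.
LEXICON = {
    "Det": ["this", "the"],
    "AP": ["old", "strong"],
    "N": ["boy", "house"],
    "V": ["slept", "shouted"],
}

def insert(terminal_nodes):
    """Yield every sentence obtained by matching word classes only."""
    candidates = [LEXICON[cat] for cat in terminal_nodes]
    for words in itertools.product(*candidates):
        yield " ".join(words)

full = list(insert(["Det", "AP", "N", "V"]))
odd = list(insert(["AP", "N", "V"]))
print("this old boy slept" in full)      # the wonderful sentence is generated
print("strong house shouted" in odd)     # but so is the ungrammatical one
```

Nothing in the class-matching principle distinguishes the two cases; that is exactly why a richer lexicon and stricter insertion principles are needed.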
Well, and then there are sentences whose logical structure is not really clear. There are sentences where the subject noun phrase relates to a verb that is not its adjacent verb but a verb that is more deeply embedded within the syntactic structure. Let us look at these phenomena more closely. The first phenomenon concerns a construction such as 'John took out Mary'. It contains a phrasal verb, take out, and take out can normally be represented as a cohesive constituent: take, the particle out, and then a noun phrase, Mary. Quite nice. But what if the components of a phrasal verb are discontinuous, as in 'John took Mary out'? Here take and out can no longer be represented by means of a simple tree. This would be the result. However, this result clearly violates a condition, or it would be an example where we would have to allow branch crossing, and branch crossing is not a very nice solution. So this problem, which has traditionally been referred to as the discontinuous constituent argument, can only be solved by extending a simple phrase structure grammar. Or take the following example. Intuitively there is a relationship between a declarative sentence and its interrogative counterpart: 'John ate meat'. Let's first of all insert the nodes: 'John PAST eat meat'. Now we can easily transform this into a question: 'John ate what?'. This would be some sort of echo question. But how can we convert this sentence into a real question, for example into a wh-question? Well, in such a case we would have to move 'what' to the beginning of the sentence, an enormous problem for a simple phrase structure grammar. Here is another example. In sentences such as 'John seems to read', John is clearly the grammatical subject of the verb in the main clause. But it is not its logical subject: John does not 'seem' in any way. Logically, John is the agent of 'to read', because it is John who reads, not John who seems.
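The wh-movement step can be sketched as a string-rewriting function. To be clear, this is not Chomsky's formalism, only a toy illustration of the idea that a rule can take a whole sentential input and rearrange it, which is precisely what a phrase structure rule cannot do; the verb table and the crude do-support are assumptions for the example.

```python
# Assumed toy mapping from past-tense forms to bare forms (do-support).
PAST_TO_BARE = {"ate": "eat"}

def wh_front(words):
    """Turn an echo question like 'John ate what' into a wh-question."""
    if "what" not in words:
        return words                    # nothing to move
    rest = [w for w in words if w != "what"]
    rest = [PAST_TO_BARE.get(w, w) for w in rest]   # bare verb after 'did'
    return ["what", "did"] + rest       # the wh-word lands sentence-initially

print(" ".join(wh_front(["John", "ate", "what"])))  # prints: what did John eat
```

The point is that the rule's input and output are both complete sentences; the rule relates two structures rather than expanding a single symbol.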
But how can we handle this relationship between the grammatical relations, which are represented well in 'John seems to read', and the logical or thematic relations, which are represented in the second version? A phrase structure grammar cannot explain these discrepancies between grammatical and logical subject. Another example concerns cases of structural ambiguity. In sentences such as 'the chickens are ready to eat', a simple phrase structure grammar would have to show the differences in meaning via two syntactic trees, which I show here as two different embedded clauses. In the first case, where the chickens themselves eat, the embedded clause exhibits the chickens as the subject of eat: the chickens are ready for the chickens to eat something. In the second interpretation, the chickens are ready to be eaten: here someone eats the chickens, and the relationship in the embedded clause clearly shows that PRO, an external element, eats the chickens, so here the chickens are the object of the embedded clause. A phrase structure grammar cannot handle this phenomenon suitably unless we expand it. So what can we do in order to solve these and further problems? The solution involves the following alternatives. We could expand the phrase structure component, our suggestion number one. An alternative could be that we expand the lexicon, our alternative number two. Or a third alternative would be that we introduce an additional rule system, here marked and labeled as number three. In fact, variants one and two are favored by the class of unification grammars, here abbreviated as UG. And the introduction of an additional rule system is the solution that is represented in transformational grammar, with the standard abbreviation TG. These are the two alternatives which we will discuss in detail in additional e-lectures. Now let's briefly look at the main suggestions.
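The ambiguity can be demonstrated by encoding the two analyses as data: one surface string, two distinct trees. The labels, the silent subject PRO, and the silent object trace below are assumed notation for this sketch, not the lecture's exact trees.

```python
# Reading 1: the chickens are the subject of the embedded 'eat'.
reading_subject = ("S", ("NP", "the chickens"),
                        ("VP", "are ready",
                         ("S", ("NP", "PRO"), ("VP", "to eat"))))
# Reading 2: someone (PRO) eats; the chickens are the silent object.
reading_object = ("S", ("NP", "the chickens"),
                       ("VP", "are ready",
                        ("S", ("NP", "PRO"), ("VP", "to eat", ("NP", "trace")))))

SILENT = {"PRO", "trace"}               # unpronounced elements

def surface(node):
    """Collect the pronounced words of a tree, skipping silent elements."""
    if isinstance(node, str):
        return [] if node in SILENT else [node]
    words = []
    for child in node[1:]:              # node[0] is the category label
        words.extend(surface(child))
    return words

print(surface(reading_subject) == surface(reading_object))  # prints: True
print(reading_subject == reading_object)                    # prints: False
```

The same pronounced string pairs with two different structures, which is the definition of structural ambiguity; a grammar that only lists strings cannot capture the distinction.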
In 1957 Noam Chomsky suggested the use of an additional rule system, transformations, as a possible solution to the phenomena that could not be handled by a simple phrase structure grammar. A transformation is defined as a rule which requires a sentential input. Here is the input: it is not only an output of the base but also the input to a new component, and the input is not just one symbol but a sentence. The transformation then generates a sentential output; here is the new output of transformational grammar. Over the years the system of transformations, which constitutes the central element in transformational grammar, has undergone numerous revisions. The central idea, however, has always been the same: after an initial string has been generated, it undergoes some change due to the application of additional rules. A widespread class of linguistic formalisms are those that are subsumed under the term unification grammars. One essential ingredient of these formalisms is the complex formal description of grammatical units, that is, words, phrases, and sentences, by means of features or feature-value pairs. They share a highly formalized, uniform operation for the merging and checking of grammatical information, which is commonly referred to as unification. They do not use any kind of transformational rule. Rather, they differ in the organization of their components: some unification grammars strengthen the phrase structure component, others emphasize the lexicon. Well, this e-lecture is introductory in character. Its main goal is to set out some general principles that underlie the most well-known grammatical formalisms and to provide arguments why simple phrase structure grammars cannot handle all sentences. With additional components or the expansion of existing components, however, we have at least two options to solve these problems. The resulting models, transformational grammar and unification grammar, will be discussed in additional e-lectures. So see you there.
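The unification operation described above can be sketched very compactly. Real unification grammars work over nested, re-entrant feature structures; the flat dictionaries below are a deliberate simplification for illustration. Two descriptions merge if their shared features agree; a clash means the combination is ruled out.

```python
def unify(a, b):
    """Merge two feature structures; return None on a feature clash."""
    merged = dict(a)
    for feature, value in b.items():
        if feature in merged and merged[feature] != value:
            return None                 # clash, e.g. singular vs plural
        merged[feature] = value
    return merged

noun = {"cat": "N", "num": "sg"}
print(unify(noun, {"num": "sg", "per": 3}))  # agreement features merge
print(unify(noun, {"num": "pl"}))            # prints: None (agreement clash)
```

This single operation does the work of both building up and checking grammatical information: combining a singular noun with a plural determiner simply fails, with no transformational rule involved.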