I'm very happy to present our speaker today, Professor Andrew Spencer, from the University of Essex. He's a very unlucky person because he was supposed to be with us last term in December and it was a day of strike. And then we moved his talk to today and then there was a strike again, but we couldn't postpone it any longer, I'm afraid. So I'm happy to have him here today, and just for a brief introduction I can say that he's a person who is interested in many interesting things like theoretical phonology, morphology, morphosyntax and has worked on many topics related to all this. I have here a list of his published research topics which include, but are not limited to, for example, noun incorporation in languages like Chukchi, which is in Siberia actually, and clitics in Slavic languages. I know some students are interested in clitics, so that's a good person to talk to about clitics. And he also has a book on clitics which was published pretty recently, quite a large one. Another recent area of research, well, that's probably not very recent, but anyway, is the notion of periphrasis and periphrastic constructions, and again I know some students are interested in this kind of stuff. Another thing to note: the notion of morphological case, which is something Andrew has published a lot on, the notion of stem in morphology, verb prefixes and many other interesting things. This is just a brief list of topics on which he has worked. His latest book, which came out last year, is called Lexical Relatedness. It was published by Oxford University Press and it's a very impressive and interesting read on the issue of how words are related, the mixed representations of lexical categories and things like that. So that's something he is going to talk to us about today. The topic of the presentation is exactly this: how words are related. Thank you, and I'm delighted to have the chance to come back here to SOAS. 
When I worked in London many, many years ago, I spent a lot of time here at SOAS with colleagues. In those days I was a phonologist, so I have lots of happy memories of many fruitful interactions with linguists here. Since then, as Irina said, I've morphed into a morphologist, I guess. And one aspect of morphology is how morphology relates one word to another. And of course that means things like inflection and derivation. And inflection and derivation is one of the things that all linguists know about even if they're not morphologists. But nobody really has much idea as to what inflection and derivation are and how they differ from each other. And for some people that doesn't matter and for other people it matters quite a bit. And I'm one of the people for whom it matters quite a lot, because I work within a framework of morphology where you assume that there are words that are inflected, that's one part of morphology, and then you also assume that there are ways of creating new words by derivational morphology, and they're different things. But it's extremely difficult to figure out how they're different, to distinguish inflection from derivation in the most general case. We can easily point to simple cases which are obviously inflection and other simple cases which are obviously derivation. But there's an awful lot of stuff in between. And a lot of the stuff in between, these intermediate cases, are hardly discussed in the literature, which was the main reason why I wrote a book of about 480 pages about this stuff. And even then I only covered a small part of the topic. So what I'm going to do is talk about this notion of lexical relatedness. I'll tell you what PRI stands for later on. And there are going to be two aspects to this. 
One of the themes that's going to come through this talk an awful lot is whether or not, when you compare two words, they're the same in their lexical meaning or different in their lexical meaning. And that turns out to be quite an important distinction. And so you'll see there are two sort of subheadings here. I'll be working from these slides. I don't have a handout, but you don't really need a handout; if you did have one it would just be a selection of the slides. And in case you're interested in the content and some of the examples, then we'll figure out ways of making the slides available, either on my own website or on one of the SOAS linguistics department websites. So let's start out with some very, very simple ideas. And the first simple idea is the idea of a dictionary. And I'm going to assume that a dictionary is a kind of database; I mean, that's what a dictionary is: a list of words and their properties. And I'm going to assume these four different properties or attributes for a typical word. And that's about as good a lexical entry for the word cat as you will find. The bit in scare quotes might be improved upon, but not very much. One thing, I'm going to have to use this as my pointer, I forgot to bring my wonderful laser pointer, so I'll use this as a pointer. This thing will require a bit of explanation, this lexemic index. I'm assuming it's lexeme number 59, an arbitrary number, or a lexeme labelled cat. And you can think of that as just like a key field in a database. It's just a unique index so that you can distinguish cat and dog. So what am I going to do with this talk? Based on that idea of a dictionary lexical entry, I'm going to propose a typology of the ways in which words can be related. Because it turns out there are quite a lot of different ways. 
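The dictionary-as-database idea can be made concrete. Here is a minimal sketch in Python; the representation, the field names, and the index number for dog are my own illustration, not anything from the talk. Each lexical entry is a record with four attributes, and the lexemic index serves as the unique key field that distinguishes cat from dog.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LexicalEntry:
    """A dictionary entry as a database record with four attributes."""
    lexemic_index: int   # unique key field, e.g. lexeme number 59
    form: str            # phonology of the basic stem (plus inflected forms)
    syntax: str          # syntactic class, e.g. "noun"
    semantics: str       # a stand-in for the semantic representation

# The entry for CAT: lexeme number 59, as in the slide.
cat = LexicalEntry(lexemic_index=59, form="cat", syntax="noun",
                   semantics="CAT(x)")
# A second entry, so the key field has something to distinguish.
dog = LexicalEntry(lexemic_index=60, form="dog", syntax="noun",
                   semantics="DOG(x)")

# The lexemic index is what keeps the two entries apart, even where
# their other attributes (here, syntax) coincide.
assert cat.lexemic_index != dog.lexemic_index
```

The point of the key field is exactly the database point made in the talk: every entry gets a number so you can keep track of it, whatever else its attributes share.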
And what I'm going to do is assume this principle of representational independence, that's the PRI, and all it says is that if you've got two words you want to compare, they might be completely unrelated, like cat and disintegrate, or they might be related in some way. And how are they related? Well, what I'm going to assume is that you can establish relationships between one word and another by relating any of those four attributes to each other, independently of each other. So that means they can be related in terms of form, but nothing else. They can be related in terms of form and meaning, but not syntax, and so on and so forth. And we'll see some little tables and diagrams which will make that clearer. And that gives rise to an awful lot of possibilities, half of which I will ignore, because what I'm going to do is concentrate today on those relationships between words which in some sense are just forms of a single lexical entry, a single lexeme. That will become a bit clearer when we get to the examples. Now, before I do that, I'll return to this inflection-derivation thing again because it's important for us to anchor the discussion to some extent. So what's inflection? Think of inflection in terms of canonical types of relatedness. So, canonical inflectional morphology: this is where you've got a word form which realises some inflectional properties of that lexeme. A good example to think of here is an English verb agreeing in the present tense with its subject. So the girl runs ends in s, third person singular, as opposed to the girls run, which doesn't end in s. That's contextual inflection. It's contextual because it depends on the context, the syntactic context. The s at the end of runs doesn't mean anything. It doesn't add any meaning to the verb run at all. It's not like overrun or rerun or any of these other bits of morphology which add some meaning. Runs is also different from ran, the past tense. 
You think of ran as adding the meaning of past tense, although when we add the meaning of past tense, it's not the same as adding the meaning of rerun or overrun or something of that sort. It's an inflectional meaning. And that's what's sometimes called inherent inflection. So that's not quite as straightforward as contextual inflection. But those are the two sorts of things which we call inflection. And then there's canonical derivation, and that's where you just take a word and you add some extra meaning to it. Drive, person who drives: driv-er. So you add -er to drive and you get another word. And driver, remember, will almost certainly end up as a separate item in the dictionary. Separate lexeme, separate lexical entry. Whereas runs doesn't have its own lexical entry; it's just a form of that lexeme. So this is all very straightforward. The trouble is, not all types of relatedness are that straightforward. There's an awful lot of cases which are intermediate when we look at this across languages. And so we're going to have a look at some of these intermediate cases. So here's our four attributes. Okay, I've got a sophisticated semantic representation this time. So this is a semantic representation. It's a one-place predicate, with that being a variable possibly bound by a lambda expression, and an ontological category. So it doesn't tell us what it means; I mean, it means cat, whatever cat means, furry quadruped that makes a meowing noise. And this lexemic index, I mentioned this: it's a unique number in effect, a unique integer which allows us to distinguish one lexeme from another. And anybody who's ever constructed a database, whatever it might be, a shopping list or anything, knows that any database assigns a number to every entry so you can keep track of them. So that's the basic function of a lexemic index, but we'll find that actually it's quite useful to have this lexemic index. 
It tells you whether or not you believe two words are separate dictionary entries or whether you think of them as one entry with two sets of properties. So if it's an inflected word like run, runs, clearly that's just one lexeme, one lexemic index. If it's drive and then driver, two separate lexemes, they've got two separate lexemic indexes. What about running? As in running is very good for you. It's a noun. Is that a separate lexeme or is it part of the run verb lexeme? Given it's a noun, maybe you want to say it's a different lexeme. On the other hand, maybe you want to say it is part of the run verb lexeme. Well, once you've made that decision you record it by saying, well, if I think it's two separate lexemes, I'll give running a separate lexemic index. Whereas if I think it's a form of the run verb lexeme, then I'll give it the same lexemic index as the verb. So that's the basic housekeeping function. I won't talk very much about what the semantics is; I mean, that's basically it, for those that are interested in these things. So I assume that you can have things, events, properties and relations, and they tend to be nouns, verbs, adjectives and prepositions. But I won't say a great deal about semantics at all. So that won't actually be so important to us. And syntax as well. Well, syntactic class: noun, verb. The syntax also has to specify any collocations, argument structure and all that sort of thing, whether the verb is transitive or not and so on. In other words, the sorts of things you expect to see in a very good dictionary. Form I need to be a bit more specific about, I suppose, though again I don't need to be too specific for the purposes of this talk. But form basically means, obviously, the phonology of the basic stem. So run is /rʌn/, you know. But for me, this form attribute also includes all of the regular inflected forms, as well as irregular inflected forms. So I've got here the inflectional paradigm. 
So running, as in the child is running, that will be a regular inflected form. Ran is an irregular form, but they're all part of this form attribute and they're defined by the inflectional morphology. That's what I'm assuming. Not all models of grammar will make those assumptions by any means. Some models of grammar will more or less ignore nearly all of this stuff or claim that it doesn't exist and claim it's all syntax instead. So I'm making certain assumptions here, lexicalist assumptions. But at the very least, they're reasonable descriptive assumptions, even if you don't want to code them into your theory of grammar. And here's what I think about relatedness. So as I mentioned, what we do is we compare, on a pairwise basis, these four attributes and see whether they're the same or different, basically, and whether one subsumes the other, whether one contains the form or the semantics of the other word. So, for example, we see in derivation, if you've got drive and then you derive the word driver, what happens there is that you're adding a semantic predicate. You're taking the verb drive and adding the idea of person who does that verb; 'person who' is the added semantic predicate, then. And that's what all derivational morphology looks like, actually, according to some. Now, one thing I will mention is that, for me, lexical relatedness can be a bit sort of wild. And, well, I say for me; in many languages, lexical relatedness is a bit wild. And in some cases, lexical relatedness is just defined over forms, and there's nothing else that's related. Just the forms, and maybe a bit of the syntax. So understand is a sort of emblematic example of this. Understand is an interesting verb in English. On the one hand, it clearly consists of the unit under as a prefix followed by the verb root stand. It's very clearly that. On the other hand, the verb stand as a proper verb doesn't appear in the verb understand. 
And the prefix, or the preposition or whatever, under, that doesn't appear either, because the meaning of stand and the meaning of under are not contained in the meaning of understand. So why do I say that understand contains under and stand? Well, only from a purely formal, morphological point of view. The reason I say that is that stand, the stand in understand, has exactly the same irregular past tense and past participle as the real verb stand. And it's not the only verb in the language like this. There are a number of other verbs that are like this, including withstand and withhold, mistake, undertake and so on. And so you can't really make any sense of that unless you assume that the verbal bit is the form of a verb which exists elsewhere, but there's no connection with meaning. Now, in English, that's a bit peripheral. There aren't that many verbs that are like this in English. In other languages, it's not so peripheral. So I would reckon that between a third and nearly a half of the vocabulary of languages like German or Russian is actually like understand. Non-compositional semantics, but very clearly prefix plus verb. Ask me later for examples, if you're really, really interested. Okay, so how are we going to measure lexical relatedness, or determine whether two arbitrarily chosen words are related to each other? Well, the crudest way of doing this, the very crudest way, is to say, first of all, can we say that two of these attributes are identical to each other? They have the same form or the same meaning or whatever. And if that's the case, then you'd want to say that the words are related. And so if you've got canonical inflection, then clearly run and runs are related to each other as forms of a word. If you've got canonical derivation, on the other hand, everything changes. So there we have to adopt a slightly different approach to looking at relatedness. 
We have to say, well, okay, if you've got drive and driver, why do we think those are related? And the answer is because we've got this overlap. The form of drive is contained within the form of driver. We're lucky there. And the meaning of driver contains the meaning of drive, and so there's this overlap. So there the words are related because there's an overlap, a subsumption or containment relation, between some or all of the properties. On the other hand, drive is a verb and driver is a noun and they're completely different from each other. So on the syntax attribute, they're completely unrelated. They're as unrelated as cat and disintegrate, pretty well. Now, I mentioned there are all these different types that are between inflection and derivation. And I mentioned that I'm going to have a look at some of these, particularly the ones that don't change the lexemic index. And so let's have a look at some of these. And I'm going to, as I mentioned, assume this principle of representational independence. So what that means is I'm going to try to find examples of all the logically possible different ways in which you can have words which have identical form, identical syntax or identical semantics or whatever, whilst remaining the same lexeme or parts of the same lexeme. There's an awful lot of these in principle. Two to the power of four, at least. And I would argue that you can find examples of almost all these sorts of relationship. In some cases it's a bit trivial, like synonyms, where you've got totally different forms but exactly the same meaning. And that's not done by morphology. But a lot of these things are done by morphology in certain languages. And so let's have a look at this within-lexeme relatedness. And there are these two sorts that I mentioned, as you saw from the table of contents at the very beginning. So there are those which don't involve a change in meaning and those which do involve a change in meaning. 
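The pairwise comparison just described can be sketched in code. This is my own toy illustration of the idea, not the book's formalism: compare two words attribute by attribute, recording for each whether the values are identical, whether one subsumes (contains) the other, or whether they are simply different.

```python
# A sketch of the attribute-by-attribute comparison behind the
# principle of representational independence (PRI). The dictionary
# representation and string-containment test are illustrative only.

def compare(word_a, word_b):
    """For each attribute, report 'same', 'subsumes', or 'different'."""
    result = {}
    for attr in ("form", "syntax", "semantics"):
        a, b = word_a[attr], word_b[attr]
        if a == b:
            result[attr] = "same"
        elif a in b or b in a:      # crude containment/subsumption test
            result[attr] = "subsumes"
        else:
            result[attr] = "different"
    return result

drive = {"form": "drive", "syntax": "verb", "semantics": "DRIVE(x,y)"}
driver = {"form": "driver", "syntax": "noun",
          "semantics": "PERSON-WHO(DRIVE(x,y))"}

# Canonical derivation: form and meaning overlap, syntax is unrelated.
print(compare(drive, driver))
# {'form': 'subsumes', 'syntax': 'different', 'semantics': 'subsumes'}
```

Because each attribute is compared independently, the same machinery covers the whole typology: canonical inflection comes out as same syntax and semantics with differing form, and cat versus disintegrate comes out as different on every attribute.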
So let's have a look at the ones which don't involve any change in meaning. And there they are. One of them is really trivial; you have to mention it because it's there: the identity relation. Every lexeme is identical to itself. It's a form of relatedness. We have to mention it because if we don't, then mathematicians get annoyed. Then there's contextual inflection, or inflection generally, and that's where the form changes. So in the canonical case, when you inflect a word for present tense or past tense or singular or plural, whatever it might be, you change the form in some way. One form is different from another form. That's the whole point of inflection. I mean, you can have cases where there isn't any change in form, but that's not the standard case. So with inflection, what happens is that you don't change the syntactic category. Run and runs are both verbs, not nouns. And you don't change the lexical content. So you just change the form. And then there's three and four, an inert transposition and an ordinary transposition. And you're wondering what those are, I expect. Unless you've had a chance to read the book that came out in September, you won't have the foggiest idea what these are. But don't worry, in a few slides' time you will know. So the first thing we need to know about is transpositions. These are really interesting phenomena, because transpositions are the things which are absolutely in between inflection and derivation. These are the things which cause most anxiety to morphologists who want to distinguish inflection and derivation. And I've already, so to speak, let the cat out of the bag by mentioning this example of running. If you think of running as an adjective, it doesn't work so well in English, but it works very, very well in lots of other languages. You can translate this into Russian, for example, or German or Latin. The running child, or the walking wounded, a lexicalized phrase. 
So if you've got a de-verbal participle, it's an adjective. And you've changed a verb into an adjective, and that means it fulfils one of the standard criteria for derivational morphology. It's like a verb being turned into a noun, or indeed a verb being turned into an adjective, like read and readable. So that makes it look like derivation. On the other hand, what meaning change have we got? With a participle, a present participle, for example, you haven't changed the meaning. The singing children is the children who are singing, or the children singing the song. I mean, you can even have a direct object with these participles if you want, if you stick them after the noun. So the children singing the song: that's still a verb, really, except that in a lot of languages it would look like an adjective; it would agree in number, gender and case with the noun children. So is it inflection or is it derivation? Well, it's either both or neither. It's somewhere in the middle. So that's one example of a transposition. And what we're seeing here is a typical scenario. We change the category of the word. We change a noun into an adjective, or a verb into a noun, or a verb into an adjective. And we change it in terms of its syntax and its morphology. So a participle goes in the places where adjectives go, and it agrees in number, gender and case with the noun it modifies. Just like any other adjective, except that it's a verb, and it takes a direct object and that sort of thing, and it's modified by adverbs. So on the one hand, it changes the category. On the other hand, it doesn't change its meaning. It doesn't acquire a new additional bit of meaning. So you can't say with any confidence, ah, this is a new lexical entry. And sure enough, when you look at a dictionary of Russian, say, you won't find whole lists of participles in addition to the verbs. The participle is considered a part of the verb. 
Unless, of course, it shifts its meaning and acquires some extra meaning, in which case that's a different matter. But an ordinary participle doesn't involve any additional meaning. So it's not a new lexeme. And yet you've taken a verb and made it into an adjective. So what's going on here? And you can do this generally. You can take a verb or an adjective and transpose it to a noun. You can take a noun or an adjective and transpose that into a verb. And you can take a verb or a noun and transpose it to an adjective. And I'm just going to show you some examples of this. Verb to noun: [inaudible] of something being true. So, if you think about this really hard, you find that there are subtle nuances that distinguish these meanings, but I argue, and I take a whole chapter to argue, that these are just constructional. It's not the same as drive-driver. It's different from creating a new lexeme. Then we have transpositions to verbs. Now, there are lots of languages where you can say Mary is tall or Mary is a doctor by taking the adjective tall or the noun doctor and inflecting it like a verb. If you don't believe me, there's a whole grammar being written of a language which does just this, which Irina has been writing. So there are quite a few languages actually, quite a lot of languages, especially in the case of adjectives, where if you want, instead of having a copular verb, you just inflect the adjective as though it were a verb, and so that's transposing it to a verb. It's not really a verb; it's still an adjective, or it's still a noun, but it's being used in the syntactic context where you'd want a verb. And then we've got these verb-to-adjective things which we've talked about already. 
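The contrast between a transposition and canonical derivation can be summarised in a small sketch (the function names, index numbers and feature labels here are my own, not the book's notation): a transposition changes only the syntactic class, keeping the semantics and the lexemic index, while derivation adds a semantic predicate and assigns a fresh index.

```python
# Transposition vs derivation, as two operations on a lexical entry.
# Representation is illustrative: a dict with index, syntax, semantics.

def transpose(entry, new_syntax):
    """Change syntactic class only: same lexeme, same meaning."""
    out = dict(entry)
    out["syntax"] = new_syntax          # e.g. verb -> adjective (participle)
    return out

def derive(entry, new_index, new_syntax, added_predicate):
    """Create a new lexeme: new index, added semantic predicate."""
    out = dict(entry)
    out["index"] = new_index
    out["syntax"] = new_syntax
    out["semantics"] = f"{added_predicate}({entry['semantics']})"
    return out

run = {"index": 12, "syntax": "verb", "semantics": "RUN(x)"}
participle = transpose(run, "adjective")          # e.g. the running child
runner = derive(run, 99, "noun", "PERSON-WHO")    # e.g. runner

assert participle["index"] == run["index"]        # still the same lexeme
assert runner["index"] != run["index"]            # a new lexeme
```

The assertions capture the talk's point: a participle stays inside its verb's lexical entry, whereas a derived agent noun gets an entry of its own.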
One really interesting case, and this is really interesting to the extent that Irina and I have written about it quite a bit and we're just finishing a book on this topic, is where we take a noun and create an adjective without adding any extra meaning. And there are lots of languages that do this. English sort of does it; it only does it with nouns that are basically French or Latin or Greek, but you can do it in English as well. So, you're all linguists, so you've all come across the notion of a prepositional phrase, haven't you? You know what a prepositional phrase is. What's the difference between a prepositional phrase and a preposition phrase? Yeah, you're right, no difference at all. It's just a stylistic variant. It just happens to work; it doesn't necessarily work with other phrases. But the word prepositional is an adjective and it means the same as preposition. So the -al doesn't add any meaning at all. What it does is allow you to treat preposition as an adjective. And if we had, you know, agreement in gender and this sort of thing, prepositional would agree in gender and number with the word phrase, just as it would in French or Russian or whatever. Now, English doesn't need to do that because we've got compounding instead; we can say preposition phrase. In languages where you can't do that, you have to turn it into an adjective. But you're not adding any meaning. You've got a noun, you want that noun to modify another noun. You don't have compounding, so you have to turn the first noun into an adjective. And that's a relational adjective. And here's an example from this language, which is a Samoyedic language, actually related to Nenets, which is a language that Irina has been working on recently. Here's a word, oak, which means leader. And whoops, hold on a second, where are we? Yes. This is a language lots of people will be familiar with. 
Nouns inflect for number and case and possessor agreement. So you know that some languages do this: instead of saying my house, you take the word house and inflect it for first person singular possessor, or third person dual possessor, or whatever it might be. And Selkup does this. And you can create relational adjectives by adding l to the end of the noun, extremely productively. So far as one can tell, it's almost like inflection. Interestingly, you can get a word which means pertaining to a house or pertaining to a canoe. So for example, canoe: a canoe bow, or a canoe oar, that is, the oar of a canoe. And instead of using a compound, what you do is add l to the word canoe and you get this, you know, oar which is related in some sense to a canoe. But the wonderful thing about Selkup is that you can take the word canoe and then inflect that for possessor agreement. So you can say the oar which is related to my canoe, an oar of my canoe. And that's not so common. It's not so common, but it shows, and this is the crucial conclusion we draw: if you can add this suffix to a possessed form of a noun, that shows that these adjectives really are inflected forms of the noun lexeme. They're part of the noun's inflectional paradigm. It's just that they happen to be adjectives, with all the morphosyntactic properties of adjectives. So this is one of our intermediate categories again. So this is where it looks like a transposition, but it's a transposition which behaves more like inflection than derivation. Just in case you didn't believe any of that, here are the examples. And they all end in l here. And possessor agreement shows that they're different forms. So this is from the word leader. So this is pertaining to my leader, pertaining to your leader, and so on. You probably don't want to write all those down. But if you do, you can find them in the book. Okay. And that's it. 
So that's another example of a transposition, a straightforward, simple example of a transposition. And then we've got this as a possibility. I'm not 100% certain that I can actually prove that there are morphologically inert transpositions, but my typology predicts that they should exist. I predict that you should be able to have something which is a transposition which doesn't involve any morphology, where the form is identical, the form stays constant, but it's still a transposition: you still change the syntax without changing the meaning. Now, what on earth would this look like? Well, here's a possibility: relative clauses in Japanese. This is not standard. This is not a kosher analysis of Japanese. So this doesn't go beyond these four, er, five, six walls. But one way of thinking of Japanese is this. When you do a relative clause in Japanese, it's dead easy. You just take your finite clause, such as, you know, reading or writing a letter in the library, whatever. And you take that clause and you put it in front of the noun. And that's it. You don't have any relative pronouns. You don't have any relative clause markers of any sort. You just take the clause and use it as a modifier of that noun, just as if it were an adjective or something of that sort. Now, one way of thinking of that, and this is very different from using a participle. If you use a participle, as most languages do, then what happens is that you transpose the verb to an adjectival form, and it agrees just like an adjective agrees with the head noun. But in Japanese, there's no agreement of any sort. So you just take the clause and stick it next to the noun, and it modifies that noun. So, one way of thinking of that, this is what it's supposed to look like, schematically. 
So one way of looking at that is to think, well, this present tense verb form, on the one hand it looks like a present tense verb form, but on the other hand it is not doing what a present tense verb form should do. It's not just establishing a predication; it's also modifying this noun as an attributive modifier. So it's a sort of schizophrenic or mixed kind of category. So one way of thinking of this is to say, well, this is a transposition; in a normal language, this would be a participle. It's just that there isn't any participle morphology, because there isn't any agreement morphology in Japanese. So this is a transposition which doesn't involve any morphological change; it only involves a syntactic change. You're using it in a position where a verb shouldn't really be, not on its own at any rate. So that could be an example. One way of analysing that is to say that Japanese permits transpositions of verb to adjective, but it doesn't do that morphologically, it does it syntactically. And Peter Sells has analysed Japanese morphosyntax in a way which is kind of like this, actually, but, as it were, from the syntactic side of this. Okay. So in the remaining 20 minutes, 20 minutes, good heavens, there might even be time for questions. So in the remaining 20 minutes minus question time, I'm going to move on. So that was the easy part of the talk. That was the simple part, because we're sort of familiar with transpositions like participles. And it's very good to keep the idea of a participle in your head. And if you can also keep the idea of prepositional as a relational adjective in your head, that's also going to be very useful. And so what you're keeping in your head is the idea that you can take a verb and just turn it into an adjective without adding any extra meaning. So in a sense it's still a verb, but it behaves like an adjective. And you can take a noun and turn it into an adjective, but it's still really a noun. 
And now what we're going to do is say, well, okay, let's have a look at types of relatedness where we change the meaning, and yet it's still the same lexeme. And here are four ways of doing it, the four logically possible ways of doing it. So they all involve a change in the semantics. None of them involve a change in the lexemic index; they're all the same lexeme. And either we don't change anything else at all, or we just change the form but not the syntax, or we just change the syntax but not the form, or we change both. And you might not believe that that can be true, but it is. And the first one is, well, we don't change the form or the syntax. We don't change anything, actually, except the meaning. So what we've got here is a single word with two meanings. All right, okay, well, that's bank, you know, the money bank as opposed to the river bank. That's homophony. But there's also polysemy, where we think the two meanings are related to each other. And this is much more interesting. So, systematic polysemy: this is where you systematically have a word which is allowed to have two meanings or two interpretations. So what would be an example of that? Well, one famous set of examples would be a container and its contents. So think of the word bottle. The word bottle is a word which denotes a container, and systematically, in English and many, many other languages, that means that it can denote either the container or the typical contents of that container. How do I know that? Well, of course, I can pick up a bottle. I can pick up this bottle. Or by the end of this talk, I might drink this entire bottle. Now, the bottle itself will remain; it's just the contents that get drunk. So bottle means either the bottle or the stuff in it. And you can invent a new name for a container and this will be true of that container as well. And there is an enormous industry looking at this kind of phenomenon. 
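The container/contents pattern can be phrased as a general sense rule, which is why it counts as systematic rather than accidental polysemy: one lexeme, and a rule that yields two predictable readings for any container noun. A toy sketch (the wording of the readings is my own):

```python
# Systematic container/contents polysemy: a single sense rule that
# generates both readings for any container noun, including a newly
# invented one. The reading strings are illustrative only.

def senses(container_noun):
    """Return the two systematic readings of a container noun."""
    return [f"the {container_noun} itself (container reading)",
            f"the typical contents of the {container_noun} (contents reading)"]

# 'I can pick up this bottle' vs 'I might drink this entire bottle'.
for reading in senses("bottle"):
    print(reading)
```

Because the rule applies to the whole class of container nouns at once, neither reading needs its own lexical entry: it is one lexemic index with two predictable interpretations.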
And in the sort of Western tradition, it's Pustejovsky's Generative Lexicon. In the East European tradition, it's the Meaning-Text model and Apresjan's work on lexical semantics. So that's one form of lexical relatedness. And the interesting thing about this is that languages differ as to what sort of systematic polysemy they allow. So in one language, for example, causative verbs and inchoative verbs, like to dry or to thicken or to widen, they might be systematically polysemous, whereas in other languages you might have different words for the causative and the inchoative. So that's part of grammar. It's part of knowing the language, knowing what is systematically polysemous. The next thing is inherent inflection, and this is like past tenses and plurals and other things. So here are some examples of inherent inflection, and if it's inflection, then of course it should preserve the meaning. But I would argue that these are good examples, by and large, of things which change the meaning. Did you get all those down? No. I'll go through a couple of them at random. Actually, are there any that you'd particularly love me to mention? I'm not going to talk about all of them. Proprietive and privative cases. Do you know what they are? No? They're great, they're wonderful. And again, if you want the full story on this, Irina has written grammars of several languages which have these things. So lots of languages have a case form which means owning or possessing or having N, for some noun N. So if you want to say, you know, a girl with a red dress or the man with a spear, then you say spear-with man or red-dress-with girl. Privative is just the opposite, not having. So that's like -less in English. You know, hairless person. Another one: causative alternations. There are lots of languages where any verb has a causative form. Now is that inflection or derivation?
Well, it depends who you talk to. It adds a meaning, arguably. It adds the meaning of cause: cause to read a book. So it looks as if it's meaning-bearing. It's not obviously derivation, it's not obviously inflection, it's really intermediate. But if it's completely regular, then you probably want to say that it's the causative form of this verb. Just like you want to say it's the past tense form of this verb or it's the passive form of this verb. So in many languages you would probably want to say that causative is kind of a meaning-bearing, inflection-like property of any verb. So that's where you change the form and you keep the same syntax, in the broadest sense. I mean, okay, a causative verb has different syntax from a non-causative verb, a passive verb has different syntax, but they're still verbs. You're not changing a verb into a noun. And there are other ways of changing the form and still keeping the same syntax, the same broad syntactic class, and adding some meaning. And this is evaluative morphology. Evaluative morphology means things like diminutives and augmentatives. And there are two sorts. You can have literal ones where it means a little one of these: table, small table. And there are others where the evaluative component is more important, so cute little thing, you know. And very often these things are applied to proper names and kin terms. Now if they're applied to a proper name or a kin term, then obviously you're not creating a new lexeme. Well, sort of obviously. I'm saying it's true of hypocoristics as well, actually: you don't really want to say that, I don't know, a name like Andy is different from the name Andrew. They're different forms of a name that one person has.
And in languages with a great deal of evaluative morphology, like say Russian, it's clear that you've got a single word which has various forms with different degrees of evaluation, of diminutiveness. It's not just nouns, by the way; in Russian some adjectives can have diminutive forms and actually some verbs can have diminutive forms. So this evaluative morphology is quite widespread. So that's the second type: systematic polysemy, and then the type where you change the form, but it's still a noun or it's still a verb. It's a form of a verb, a form of a noun. The next type, I call this the Angestellte(r) noun type. I suppose I could have thought of a more pronounceable name for this, but I like this. It's named after one of my favourite examples. So Angestellte(r) is a German word, and I hate doing this with native speakers in the audience, but I can't really get away without it. But this is a noun which means employee, a person who is employed. And you notice I've got these brackets around the R. And the reason for that is that German speakers are very acutely aware of sexual politics. So nowadays whenever you use a word which denotes a human being you've got to put it in the feminine and the masculine form. And that's the same thing here. Without the R it's a feminine employee and with the R it's a masculine employee. How do I know that? Well, because these are forms of the adjective; these are the adjective agreement forms you would get if this was an adjective. And the thing is that this is an adjective. It's actually a participle. It's a transposition as well as being an Angestellte(r) noun. It has all the forms of an adjective, but it's used as a noun. And in the syntax it's a noun basically, except that it takes all the endings you expect from an adjective, including the difference between definite and indefinite contexts, where you get different sets of endings depending on whether it's an employee or the employee.
This happens in other languages, not just German; Russian has it as well. So you just take an adjective that can denote a person and use it to mean that person. And it's really quite systematic. And so what's happening here is that you change the syntax but you don't change the form. It's still an adjective. It looks exactly like an adjective, but in the syntax it behaves exactly like a noun. In principle you might get this with other categories. Arguably you get this in these wonderful American Indian languages where you can basically take a clause and use it as the name of something. So Navajo is full of these things. If you just browse through any decent dictionary of Navajo you'll see hundreds of these sorts of things. My favourite example is the word for university scholarship. The word for university scholarship is actually two words and it translates literally as 'with it I pay for teaching'. That's my university scholarship. Your university scholarship would be 'with it you pay for teaching'. This is a noun meaning scholarship, stipend. Dances with Wolves, if any of you are old enough to remember the film, Kevin Costner. For many years I wondered about that one. When I first saw the title I thought, oh, dances with wolves. So there are wolves and there are various dances that are going to take place. I thought dances was a noun. Stupid of me; I should have understood this is Lakota, this is Sioux. It's a verb. So the Sioux Indians saw this chap dancing around the campfire at night when there were wolves watching him and they thought he was mad, which presumably was a very salient property. A salient property is what you use to name somebody in these cultures. So what did they call him? They called him He Dances with Wolves, which is a sentence. But that's not the name of the film, because Lakota is a pro-drop language, so you don't have a pronoun. You just have the agreement. So Dances is 'he dances', and then 'with wolves'.
So that's a whole sentence, but it's a noun. In fact, it's somebody's name. And so what you've got there is a verb being turned into a noun, but it's still a verb. It's in fact an entire sentence. Other types aren't so likely, so I won't bother talking about them. And we go on to meaningful transpositions. Now, at this point, you should be thinking, hang on a minute, for goodness' sake. It's bad enough dealing with Dances with Wolves. You have just explained to us what a transposition is. A transposition is where you change the form of the word from a verb to an adjective and you change its syntax. So it behaves like an attributive adjective, although it can also take a direct object and that sort of stuff. But its external syntax is that of an adjective, agreeing with a noun. And the crucial thing you said, not, you know, 27 minutes ago, the crucial thing about a transposition is that you don't add any extra meaning. That's the crucial thing. So how can you have a meaningful transposition, a transposition which adds semantic content? Well, I don't know. You just can. So this is a canonical transposition, no change in meaning. And this is a meaningful transposition. And here are some examples of them. They're going to come from our friend, Selkup. Now, remember, Selkup has relational adjectives, ordinary 'prepositional'-type relational adjectives. You take a noun, you add a suffix to it and you get pertaining to leader, pertaining to canoe, pertaining to whatever the noun is. But there are two other types of adjectives which are very similar. They end in the same sort of element, but they have slightly different suffixes. And one of them means similar to the noun. So canoe-similar: you know, a boat which looks like a canoe. So that would be a canoe-like boat. In fact, it's sort of like the word canoe-like in English, only with no hyphen. And then there's another type of suffix which means located at.
So the sort of in-the-canoe one, the one that is in the canoe. Now, there are two crucial things about these. One is that there's a change in meaning: similar to, located at. That's real meaning. The other thing is that, formally speaking, morphologically, these things behave exactly like the relational adjectives, because they're inflected for possessor agreement. Now, what does that mean? It means that you can say a boat which is similar to my canoe, or a boat which is similar to their canoe, or, I don't know, a table which is located in the house belonging to you, in your house. In other words, we've got this meaningful, meaning-bearing piece of morphology which creates an adjective from a noun. But it still inflects for possessor agreement. So it's still part of the possessive inflectional system. It's still a noun lexeme. It's just that it's changed its form, it's changed its syntax, and it's got some extra meaning. That just proves that it's true. And you can see all these different forms: they all end in the same sort of element, and these end in the one meaning 'like', and these end in, well, something quite complicated, but basically they end in something which is similar to a locative case ending. This is an inflectional paradigm. It's just that it's also a transposition. And here are a couple of examples. So this is just summarized Selkup. So this is clothing made out of bear skin: a pure relational adjective, pertaining to skin, skin clothing. And this is similar to a boat, a big boat. Notice we can modify boat with big. And this is located in a large forest. So this just proves I'm not making it up completely. Notice that you can say in a large forest or in a big house or whatever. So that means that the adjective big thinks that forest or house or canoe is still a noun. It thinks it's still a noun, as it were.
Once you get to the end, it suddenly turns into a meaning-bearing adjectival word, but it's still really a noun, which is why you can still modify it with an adjective. So that summarizes our transpositions. And the crucial thing is that you can have transpositions which involve an additional semantic predicate, but they're still transpositions. It's still a form of a single lexeme. And I like that because my very, very crude typology, where I ask, is it the same or different, and if it's different, is there overlap between one and the other, that basic crude typology predicts that these things should exist. So the fact that they do is quite nice. But also it shows that, well, actually, what it shows is that you don't really want to ask questions like, is this bit of morphology inflection or is it derivation? You don't want to ask that question, because most of the time with this typology you get a crazy answer or an incomprehensible answer. The real question you have to ask is not, is this inflection or is this derivation? The question you have to ask is, okay, given these four attributes, form, syntax, semantics, and the lexemic index, which of them get changed compared to, you know, the base form that you start out with or the base form you're comparing it to? And remember, we're only looking at the ones where the lexemic index is the same. So the answer is, any of them can be changed. This is the principle of representational independence. Any of them can be changed independently of the others. So the question you should ask is, what exactly has happened? What exactly is the difference? And when you specify that, you don't need to worry whether it's inflection or derivation. Inflection turns out to be at one end, and derivation is at the other end, where the meaning is changed and the lexemic index is changed.
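Just to make that crude typology concrete, here is a toy sketch in my own notation (not anything from the book itself): treat a lexical entry as the four attributes named above, and describe a relatedness relation by which attributes differ between base and derived entry. The entries and labels are purely illustrative.

```python
def describe_relatedness(base, derived):
    """Return (changed attributes, rough label) for a pair of lexical entries."""
    changed = {attr for attr in ("form", "syntax", "sem", "li")
               if base[attr] != derived[attr]}
    # Canonical inflection changes only the form; canonical derivation
    # changes the meaning and the lexemic index; everything else is in between.
    if changed == {"form"}:
        label = "canonical inflection"
    elif {"sem", "li"} <= changed:
        label = "canonical derivation"
    else:
        label = "intermediate: " + ", ".join(sorted(changed))
    return changed, label

# Hypothetical entries: FORM, broad SYNTAX, SEMantics, Lexemic Index (LI).
cat = {"form": "cat", "syntax": "N", "sem": "CAT", "li": "CAT"}
cats = {"form": "cats", "syntax": "N", "sem": "CAT", "li": "CAT"}
catty = {"form": "catty", "syntax": "A", "sem": "SPITEFUL", "li": "CATTY"}

assert describe_relatedness(cat, cats) == ({"form"}, "canonical inflection")
assert describe_relatedness(cat, catty)[1] == "canonical derivation"
```

The point of the sketch is simply that the intermediate cases fall out as the other combinations of changed attributes, with no need to force an inflection-or-derivation label on them.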
But you don't need to ask what the correct name is for all the other intermediate cases, whether it's really inflection or really derivation. You just need to specify what has actually happened. That's the idea. So this is what I've just been saying. And as I say, I reckon I've managed to identify examples of all of these different types of lexical relatedness. And I'd like to think that that helps us to solve what would otherwise be quite a difficult problem in morphology for people who want to distinguish inflection from derivation. You can do that, but there are a lot of other types of lexical relatedness to talk about as well. And rather than worrying about whether a transposition is really derivation or really inflection, for example, you don't do that. You just say what exactly the relationship is between the base word and the derived word you're comparing. Once you've done that, you don't need to answer any other questions. But I will answer any questions that you do have. Thank you. Thank you. Looks like this room is not needed, so we can stay here a bit longer, ask questions, and then we can go and have a drink at the Institute of Education, I guess. Are there any questions? Of course, Lutz has a question. He's got two questions. I have a number of questions, but I'll ask you one. I missed the beginning a bit, but since you're so keen to deconstruct inflection and derivation, I started wondering why one would want them. Do the people who believe in inflection and derivation have reasons for believing that the distinction exists? Or is it a hangover from traditional grammar? And is there something which you might be losing by deconstructing them? I was thinking maybe of things in the syntax, the lexical versus structural distinction, that sort of thing. And if there's something you would lose, how do you recover it? Yes, in a way this is the topic of another talk. But I'll try and be as concise as I can.
Basically, there are two ways of thinking of morphology, especially inflectional morphology. And one is the way that you're taught in sort of first-year linguistics, and I'm going to use this. I promised Irina that I would use some IPA, and I'm going to. So here's cat and here's the plural. And that's the plural morpheme. And it means plural, and you add it to cat and it means plural of cat. And everybody agrees with that. And it's obvious, isn't it? Except that people like me don't. We think that's a completely wrong way of looking at inflection. What we say is that there are two forms, cat and cats. And this is the result of applying a rule which expresses a feature value, plural number. And by default, if this is used in the syntax, then by default it expresses the value singular number. But at no point do you list this as a separate lexical item with the meaning plural. It is simply a form, just like men is a form which happens to be a suppletive way of realising this property. So we have this paradigm, a very simple paradigm, singular and plural. And to get the plural, there's a rule, which sometimes is overridden by some exception. And for the singular, it's the default. It's what happens if you don't do any morphology. And that's how we do it in inflection. And there are no morphemes in this model of morphology. This is a realisational approach to morphology. Why do we do this? Because when we look at really complex paradigms, what we find is that there are all sorts of relationships between the different cells in the paradigm that we want to define. And we can only do that simply if we use this sort of feature representation and just say that this is a form which systematically realises this property, possibly amongst other properties. And maybe there are other meanings that it would realise.
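That realisational picture can be sketched as a toy rule fragment (my own illustration, with made-up function and table names, not the speaker's formalism): inflected forms are the output of rules realising feature values, irregular cells override the rule, and singular is simply the default, the bare stem with no morphology applied.

```python
# Irregular cells of the paradigm override the general rule.
EXCEPTIONS = {("man", "pl"): "men"}

def realise(stem, number):
    """Realise a number feature value on a noun stem."""
    if (stem, number) in EXCEPTIONS:   # exceptions win over the rule
        return EXCEPTIONS[(stem, number)]
    if number == "pl":                 # rule realising [number: plural]
        return stem + "s"
    return stem                        # default: bare stem = singular

assert realise("cat", "sg") == "cat"
assert realise("cat", "pl") == "cats"
assert realise("man", "pl") == "men"
```

Note that nothing here is a listed "plural morpheme" with its own lexical entry; -s exists only as the output of a rule, which is the contrast with the morpheme-based picture being drawn.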
If you adopt the morpheme approach, then actually there's no problem, because there's no distinction between inflection and derivation. You've just got two different morphemes. You've got the -er morpheme added to drive to give you driver. You've got the -z morpheme added to cat to give you cats. And you've got the food morpheme added to cat to give you cat food. So all morphology is compounding if you believe in morphemes. And the problem with that is that when you look at complex inflectional systems, it tends not to look very much like compounding. Now, this is very controversial, because if you talk to a Distributed Morphologist, or minimalist syntacticians, they will say, oh, yes, it does. It looks exactly like compounding. That's how we do it. That's how Distributed Morphology and minimalism do inflectional morphology: you take morphemes and you put them on the trees, and the roots move up the trees and collect these other morphemes. And it's a form of compounding. And so there's this big controversy between the Distributed Morphologists and those who subscribe to similar models, and those who subscribe to realisational morphology. Now, that's the background. And if you believe in this realisational, paradigm-based model, well, the problem is that this paradigm is only true for the lexeme cat. So when you've got derivational morphology, you get a totally new lexeme with a totally new inflectional paradigm and this sort of thing. So if there's no systematic, foolproof way of distinguishing inflection from derivation, then we have problems, those of us who believe in this model of inflection, because we really do need to distinguish inflection from derivation. And so what my model is trying to do is to say, well, yes, you can do that, actually. You can have your cake and eat it, but only if you make certain slightly radical assumptions about derivational morphology. For that, you have to read the book. Because that is definitely another talk. Maybe Lutz has another question. Maybe, yes.
I'll ask one other one, then. And again, I missed the beginning. But with the lexemic index, it seems like you assume that there are nouns and verbs and adjectives and things. But you could play the same game you are playing with inflection and derivation with the lexical categories, and think of, you know, the old category squish in English, that sort of thing. Yeah, yeah. And John Anderson's work, where you deconstruct the lexical categories and say, well, there are primitive features, and they have different combinatorial properties. And that gives you the noun at one end and the verb, the fully inflected tensed verb, at the other, and then the definites or the names, and the in-between cases. And that would, in a sense, maybe give an alternative analysis of these in-between cases. Well, it's not quite clear, is it a noun, is it a verb, or is it not? Because for these guys, it's a lot easier. Because they say, well, it's three quarters P with one quarter N, or whatever it is. Yeah. And then you can get there. Is that compatible? Yeah. So, I mean, I have a story about feature decomposition, which basically says, well, whichever set of features you choose, if they're binary features, it's not going to work. Basically, because people choose binary features, plus or minus N, plus or minus V, because they want to be able to say, well, nouns and adjectives are kind of similar, so they're both plus N. And verbs and prepositions are kind of similar, so they're both minus N. But the trouble is that if I'm right that you can have transpositions from anything to anything else, then whatever works really well for verb to adjective, and makes verbs and adjectives look similar, will make verbs and nouns look very dissimilar, and vice versa. So with binary features, you can't get all the transpositions neatly. But in any case, I approach this category squish question in a slightly different way.
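The binary-feature point can be checked mechanically. The sketch below assumes the standard plus-or-minus N, plus-or-minus V decomposition (N = [+N,-V], A = [+N,+V], V = [-N,+V], P = [-N,-V]) and simply counts feature flips between categories; it is an illustration of the argument, not anything from the talk's slides.

```python
# Standard two-feature decomposition of the major lexical categories.
FEATURES = {
    "N": {"n": +1, "v": -1},
    "A": {"n": +1, "v": +1},
    "V": {"n": -1, "v": +1},
    "P": {"n": -1, "v": -1},
}

def distance(a, b):
    """Number of binary features flipped between two categories."""
    return sum(FEATURES[a][f] != FEATURES[b][f] for f in ("n", "v"))

# Verb-to-adjective transposition (participles) is a minimal one-feature flip...
assert distance("V", "A") == 1
# ...but verb-to-noun (the Lakota 'Dances with Wolves' type) flips both features,
# even though both kinds of transposition are attested.
assert distance("V", "N") == 2
```

Whatever assignment of two binary features you pick, some pair of categories ends up at maximal distance, so some attested transposition is predicted to be "hard", which is the objection being made.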
So the idea is, in many cases, you look at this and you say, well, is it really a verb or an adjective? Is it really a noun or a verb? And usually, the discussion is based on nominalizations of verbs. English has some lovely examples of this, but actually I gave you an example in German of a word, where are we, a word which, as far as the morphology is concerned, is definitely an adjective. And as far as the syntax is concerned, it's definitely a noun, though with some adjective properties. So this is another kind of squish, in effect. And in fact, almost all of those intermediate things are really squishes. And so what I'm claiming, in a way, is that my principle of representational independence is what guarantees a certain degree of squishiness. Because if you can relate two words in any way you like, keeping the form constant but changing the syntax, keeping the syntax constant but changing the form, and so on and so forth, then you're going to get these squishes, these intermediate categories. And so the question is, how do you get more subtle squishes? Well, you just factorize the lexicon even more. And so you have different sub-properties of the syntax attribute, for example: argument structure properties versus pure category properties. And different properties in the morphology as well, and so on and so forth. So that's basically how I get those sorts of intermediate categories. And again, I don't have to quantify this and I don't have to label any of them, because what I do is I say, well, you've got the property of form, and there's the property of inflectional class, and whether it's morphologically an adjective or a noun, and whether it's this, that and the other. And by specifying each of those separately, I can get these very subtle differences. For most ordinary words, of course, it's just done by default. So I say, it denotes a thing, therefore it's a noun in the syntax, therefore it's inflected like a noun in the morphology.
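That default chain (denotes a thing, therefore noun in the syntax, therefore noun morphology) can be sketched very loosely as default logic: unless told otherwise, each level follows from the previous one, and special cases like the German Angestellte(r) type override one link. The entry shapes and names here are my own illustration.

```python
# Default mapping from semantic type to syntactic category.
SYNTAX_FROM_SEM = {"thing": "N", "event": "V", "property": "A"}

def build_entry(sem_type, **overrides):
    """Build a lexical entry by defaults, letting special cases override links."""
    entry = {"sem": sem_type}
    # Default: denotes a thing -> noun in the syntax.
    entry["syntax"] = overrides.get("syntax", SYNTAX_FROM_SEM[sem_type])
    # Default: morphology follows the syntactic category.
    entry["morph"] = overrides.get("morph", entry["syntax"])
    return entry

# An ordinary word: everything by default.
table = build_entry("thing")
# An Angestellte(r)-type word: noun in the syntax, adjectival morphology.
employee = build_entry("thing", morph="A")

assert table == {"sem": "thing", "syntax": "N", "morph": "N"}
assert employee["syntax"] == "N" and employee["morph"] == "A"
```

Only the overridden link needs to be stated; everything else is inherited, which is the "assume the obvious unless told otherwise" idea mentioned next.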
And that's done by default. And this is another answer to one of Lutz's questions, because we use default logic, where we say, unless I tell you otherwise, assume the obvious. And that's another controversial theoretical device, the use of default logics. To some extent you get that with Distributed Morphology, but, for example, with HPSG and LFG models of syntax, it's quite difficult to get default logic into the system. So there's another orthogonal controversy that I'm relying on here, or that I'm taking a stand on. So I make a lot of use of defaults, and they're overridden by these special cases. So the intermediate cases or special cases are exceptional cases in a sense, maybe systematically exceptional, which are overriding the defaults. And the default cases are just canonical inflection and canonical derivation. I'm sure Lutz has more questions. Yeah, more questions. This is a small question about English nationality names. Oh. So it seems like you can say, let's say, I love the French, but not, I love a French. And then you can say, I love an American, but not, I love the American, in the generic sense. So what's going on there? What's going on there? Yeah, I just don't know. That's a very good question. I'm stalling slightly because I remember reading something about this not long ago. And I don't remember who it was who wrote it. It might have been an anonymous abstract; I seem to have spent most of my life recently reading anonymous abstracts and anonymous papers I'm reviewing. But that's an extremely interesting question. And it's another very good example of one of these sort of squishy categories. And in English, it's not just ethnonyms. So it's really obvious with words referring to people's ethnic affiliation, in the broadest sense: the French, the English and so on. So it tends to work with plurals, or actually it tends to work with generics. So if you say the French, you mean all of them.
Whereas if I've got a group of students here and I've got some Italian students and some French students and some Russian students, I could say, you know, well, the Russian drinks a litre of vodka a week. But usually that would be generic, or Russians drink a litre of vodka a week. But I couldn't use that to refer to those Russians that are sitting in the corner there. So it's not referential in that scenario. Whereas in other languages, I could do that very easily. I could use that to refer to those particular people I've been talking to. So that was the example you were giving. And this is more general, though, because you can take adjectives in English referring to people, and they have exactly the same property. So again, this talk seems to revolve around film titles, but you've got The Good, the Bad and the Ugly. Now, you might think that refers to three characters in the film, but actually, the way we use those terms, they're generic, like the rich and the poor. So if I divide you into those who have got more than five pounds in your pocket and those who have got less than five pounds in your pocket, I'm dividing you into the rich and the poor. But I can't refer to you that way; I can't say, oh, the poor can leave now. I mean, I'd have to say the poor ones or something of that sort. The poor is a generic term, and so it has exactly the same property as the French and the English. So I think it's part of a more general thing. I actually have a little note to myself to write something about this, because I would like to know the answer to your question, basically. And why English is so weird compared to any normal language. It's just a comment, really, but it seems like the French and the English are more like mass terms, whereas the American is more like a count term. Well, yes, yes, because French and English, I don't know, the reason is because French and English look more like adjectives, whereas American, for some reason, it is an adjective, but it's sort of treated as a noun.
And I don't know why that is, either. In languages with decent morphology, you can tell whether something is morphologically an adjective or a noun. So, you know, these Angestellte(r) nouns are morphologically clearly adjectives, even if they behave like nouns. Whereas in English, there's just not enough morphology, so it's really quite hard to tell. But it is very puzzling. Why should American, why doesn't American behave like French? Why can't you say, you know, the American are coming? Bizarre. Do all the English nouns behave like French? Well, things ending in -an tend to behave like American, and things ending in -ish tend to behave like French. So the English, the Danish, the Dutch. The Dutch, yes. And Dutch is a nice one. So French is a suppletive adjective. And it's clearly an adjective, because, you know, it's normally used as an adjective, isn't it? The French flag. So it's a suppletive adjective from France, which is then transposed or turned into a noun to mean a person from there, a person of that ethnicity. And it's the same for English and Danish and so forth. You've got all the -ese ones as well: Chinese. And the -ese ones too, Japanese, Chinese, yes, they behave the same way as French. So there's a collection of endings which tend to cluster in the same way. Dutch is very interesting because it's completely suppletive. Because the word for the Netherlands in English is Holland, and Holland and Dutch are completely unrelated to each other. But ethnonyms are always very, I mean, almost every language has strange things to say about these ethnonyms, but English is particularly strange because of that. And I think, as I say, it interacts with this thing where almost any adjective that can denote a person can be used generically to mean the people with that property. What about Scotland? Scotland? You've got the Scots and the Scottish. Yeah, that's a nice example.
Yes, that's what the language should do throughout. So there's one example that's actually regular. Yes, yeah, that's a nice example. Please do join us for a drink, I guess. And Lutz can ask his further questions there in a few minutes, right? Okay, thank you. Let's thank Andrew again.