So I'm very pleased to introduce Miriam Butt, who is with us today to speak in the seminar series, unusually on a Monday, but it's very good to have her here. She's come straight from the airport, a sign of true devotion; she hasn't even checked into her hotel yet. Miriam received her PhD from Stanford University in 1994, having written on complex predicates in Urdu, which was published as a book, which is very interesting. Since then she's worked on many areas, but often focusing on argument structure, complex predicates, and case. She's also written an influential book with Cambridge University Press called Theories of Case. And as she just told us, recently she's been working on the prosody of questions in Urdu. But today she's going to talk to us about complex predicate puzzles.

Okay, so as I just said, I did my dissertation on complex predicates, and I didn't think that I'd be doing them still 23 years on, but hey, here we are. So I guess it was a fruitful thing to look at back then. What I'll do today is recap a little bit of the old stuff, and then move on to some newer stuff. I don't work on complex predicates that much anymore, but it's in my surroundings, et cetera. And in this talk particularly, I'd like to say what I think we know, and then what there is still to know, or rather the many things still to know, so where one could go from there. So I'll stake out an empirical domain. I'll tell you the LFG approach; I assume none of you do LFG, otherwise I'd know. Or do you? You do? Very good, you can help explain. Then I'll talk about types of argument merger, which is more recent work that somebody pressed me to think about, then about event semantics as a key to understanding complex predicates, and then a little bit about the diachrony, where I think a lot more needs to be done and understood.

Okay, so in addition to working on theoretical linguistics, I also work a lot in computational linguistics, and in computational linguistics there's this thing called a multi-word expression. People love multi-word expressions because they're difficult for computers, well, they don't love them, they hate them, they're difficult for computers, and they also don't try to look beyond what a multi-word expression is. It could be something like New York, or it could be a complex predicate. And that frustrates me a lot, and it frustrates me in theoretical linguistics as well, because many things get called complex predicates that are just collocations or compounds. So I want to make the point that things can occur together fairly frequently and mean something in that combination, some compound or some collocation, or even something like 'a banker at UBS is being fired'; in certain times of our history you might find 'banker' and 'being fired' used together a lot. That's not going to be a complex predicate, right? It should not be a complex predicate just because you find them together a lot. But this is exactly what you see people doing in the world out there: if two things are found together a lot, they start calling them a complex predicate. I use this example because it's obviously not a complex predicate; there are several other examples which are harder to tease apart, but this is just the point: if you find things together, that's not necessarily a complex predicate.
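To make that concrete, here is a toy sketch, not from the talk, of the kind of frequency-based association score (pointwise mutual information) that multi-word expression extraction typically relies on; the corpus counts are invented. A high score flags a collocation and nothing more:

```python
import math
from collections import Counter

def pmi(bigram_counts, unigram_counts, total, pair):
    """Pointwise mutual information for a word pair: a high value means
    the words co-occur far more often than chance -- a collocation,
    but not necessarily a complex predicate."""
    w1, w2 = pair
    p_pair = bigram_counts[pair] / total
    p_w1 = unigram_counts[w1] / total
    p_w2 = unigram_counts[w2] / total
    return math.log2(p_pair / (p_w1 * p_w2))

# Invented toy counts: "banker ... fired" co-occurs often in some news corpus.
unigrams = Counter({"banker": 50, "fired": 40, "plant": 60, "cut": 55})
bigrams = Counter({("banker", "fired"): 30, ("plant", "cut"): 5})
total = 10_000

print(pmi(bigrams, unigrams, total, ("banker", "fired")))  # high score
# A high PMI flags a collocation; only syntactic tests (agreement,
# anaphora, NPI licensing) can show monoclausal co-predication.
```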
You need to figure out what exactly you mean by complex predication and how that differs from other things that appear together a lot. Okay, so what I tried to do back then in my dissertation is to establish formal properties of complex predicates so that you have a definition. You can say this is one, this is not one. Sometimes it's a bit difficult, but most of the time you can figure out these properties, and then look at this as a coherent empirical domain, because if you lump all kinds of stuff together and then try to explain them as one thing, you're not going to succeed, and this happens a lot in the work on complex predicates. And then I'll talk about the diachrony of complex predicates and a little bit about event semantics and predication. How many of you work on complex predicates, before I step on any toes? You do, yeah. Nobody else? So I'm not stepping on anybody's toes here. Okay.

Right, so the background assumptions I have are the groundwork in Butt (1995), which was my dissertation, and then there have been some further developments, the most recent of them around 2013, et cetera. And I look at several types of complex predicates: permissives, aspectual complex predicates, causatives, and noun-verb complex predicates. Those are four types, and Urdu/Hindi has them all, so it's quite a good language to study this. Generally, South Asian languages tend to have all of these types. So what I say here is not completely true for other languages, but it's mostly true for any South Asian language you find, I think including Tibetan, but that's what I'm here to find out.

Okay, so definitions of a complex predicate, which is what I did back then. Complex predicates are formed when you have two or more predicational elements, so elements that can take some kind of dependents, some kind of arguments, and they co-predicate. The way you can tell is that each one of those pieces adds arguments of its own into a monoclausal predication. So it's not two clauses, it's actually one clause, and it's unlike what happens with syntactic control and raising; I'll talk about that a little bit more. So the crucial thing is that you have two elements which each add to the overall predication, sometimes it's easy to see, sometimes it's harder to see, and that the co-predication is monoclausal. That means it functions just like a simple clause in terms of tense assignment, in terms of how case assignment works, and in some other things as well.

So how can you test for these? Well, the tests tend to be language specific. In Romance, these include clitic climbing and long passives. For Korean, Choi used NPIs; these turn out to work for Hindi/Urdu as well. And what I used as tests for Hindi/Urdu were agreement, control and anaphora. So even if you have two verbs, the verbal agreement system works the same as if you had exactly one verb, and anaphora works exactly the same, and also NPIs. I'll show some of those examples; I don't want to show them all because this is old stuff, but just to get us there. So it's very important to look at morphosyntactic clues, but that's not going to be enough: what you need to do is actually test for the underlying structure, and this happens, in my opinion, too little. So here's a nice minimal, not quite a minimal pair, but two sentences that are very similar. These are things I looked at in my dissertation and I have talked about them a lot. So one is the permissive: Nadya ne Yasin ko pauda katne diya tha.
So 'Nadya had let Yasin cut the plant'. And then you can contrast this with number two, which I call the instructive: Nadya ne Yasin ko pauda katne ko kaha tha. And the main difference is, well, the verbs are different, this one is 'say' and this one is 'give', and then this one gets a case marker here: the whole infinitive gets a case marker. If you want to ask me about that, that's another story; I analyze these basically as a verbal noun that looks like a verb as far as the inside is concerned and like a noun as far as the outside is concerned. So this is the only difference. But in fact, if you look at how they behave in terms of their underlying structure, they're very, very different. They behave very differently in terms of control, agreement, anaphora and NPIs.

I'll tell you something about agreement. I don't have the examples with me, but I'll just tell you. So this verb in the complex predicate will agree with the object. It'll only agree with something that is unmarked; this is a language where things only agree with unmarked things. So this is blocked, this is blocked, and it'll agree with the object. In the instructive, it will not agree with this; you get the default masculine singular. I'd have to give you another example where you can really see it, but this one will never agree with this object. Okay, so that's one difference: this one is really not available for agreement.

Another way you can test for complex predication is with a negative polarity item. This cannot be distributed across two different clauses. Some semanticists and syntacticians will go, wait a minute, but here we have a negative polarity item that's made out of a focus particle bhi, which means 'also', and the negation nahi. You take them, they need to be together, and then they mean 'not even', okay? So this sentence means 'not even a single boy let Sita read the book': ek bhi larke ne Sita ko kitab nahi parhne di. So: also one boy, Sita, book, not, read, give. So he didn't let her read a book. You can see that these are separated and it means exactly that. But if you try to do this with the instructive, it doesn't work. If you try to say ek bhi larke ne Sita ko kitab nahi parhne ko kaha, it doesn't get that reading, right? You cannot separate them out like this. If you try to get this reading, you would have to say ek bhi larke ne nahi..., you'd have to put the nahi somewhere up here in the matrix clause. Then it works. So if they're in the same clause, it works; if you separate them out, it doesn't work. So the instructive behaves like the verbs are in two different clauses. I've marked that here with these brackets, right? You have a matrix clause 'say' and you have an embedded verb 'read'. Whereas the permissive behaves as if these things are together in one clause. So the agreement works this way, the NPIs work this way, et cetera.

And actually you can see the agreement here too. This is feminine singular and it's agreeing with 'book'. And over here, this is still masculine singular; it's not agreeing with 'book'. It's in the masculine singular because there's nothing else it can agree with: it can't agree with this because it's overtly marked, it can't agree with this because it's overtly marked, and it can't agree with this because it's overtly marked. So it reverts to a default masculine singular.
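The agreement pattern just described can be stated as a small procedure; this is a schematic restatement of the facts (my sketch, not Butt's formalization):

```python
def verbal_agreement(arguments):
    """Sketch of Urdu/Hindi verbal agreement: the verb agrees with the
    first argument of its (mono)clause that bears no overt case marker;
    if everything is overtly marked, it falls back to a default.
    Each argument is (gender, number, overt_case_marker_or_None)."""
    for gender, number, case in arguments:
        if case is None:              # unmarked ("nominative") argument
            return gender, number     # verb copies its features
    return ("masc", "sg")             # default masculine singular

# Permissive (monoclausal): boy=ne, Sita=ko, kitab 'book' is unmarked,
# so the verb complex shows feminine singular agreement (parhne di).
print(verbal_agreement([("masc", "sg", "ne"),
                        ("fem", "sg", "ko"),
                        ("fem", "sg", None)]))   # -> ('fem', 'sg')

# Instructive matrix clause: every argument is overtly case-marked
# (the infinitival itself carries ko), so the verb defaults (kaha).
print(verbal_agreement([("masc", "sg", "ne"),
                        ("fem", "sg", "ko"),
                        ("fem", "sg", "ko")]))   # -> ('masc', 'sg')
```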
Okay, questions about this? [Audience: And before the 'give' also? I mean this one? Ek bhi larke ne Sita ko kitab...] You would think it does, but it doesn't. [Audience: How do you determine whether it is impossible? Did you ask natives, or is it corpus-based?] This type of work has been native speaker based, mainly, and these negative polarity judgments are by Rajesh Bhatt. Occasionally I've been looking at corpora, but not for this work, for other work. You wanted to follow up on the negation? [Audience: I want to ask a question about the 'give' and the agreement with 'book'. Does it matter that you can give a book, and that's why it agrees, or is that irrelevant?] That is partly irrelevant. So I've glossed it as 'give'; it really means 'let' in this context. But it is true that the arguments contributed by this, we'll see it in a bit, are 'book' and 'Sita', and, yes, there's also this one. So these are its natural arguments. No, what I said was wrong; we'll move on, it'll become clear when I get to the analysis. But basically the point here is that these, even though they don't look like it, are basically one verbal predication, and they fuse together; I'll show you how that works in a minute. And these ones are separate. So, I've basically said this already: the permissive is a monoclausal complex predicate, the instructive is biclausal.

So here are some frightening LFG structures. Here's a normal one: this is the instructive, which I analyze as a biclausal control structure, basically what I've said before, right? There are two verbs, one 'say', interpreted as 'tell', and one 'cut'. They each have their separate argument domains: 'cut' takes an agent and a patient, and 'tell', the way I analyze it, has an agent, somebody who gets told something, and then a theme, and I've said theme or event, because I treat those basically the same. Either you tell them a thing, or you tell them a whole predication itself; so 'to go cut the tree' would be an event. And the way this works out, it looks scary, but it's not that scary. You have two separate a-structures, argument structures, and they correspond to, this is the matrix predicate, 'tell'/'say', and it has a subject, an object, and an open complement. That means its subject is controlled by something else. Its subject is Nadya, right, Nadya told Yasin. Its object-goal, whoever got told to do something, is Yasin. You have some tense/aspect stuff, and then you have this open complement, which is the other verb, 'cut', which has a subject and an object. The subject is identical with the indirect object of the matrix clause, and the object is its own. Okay, so this is what a typical control construction looks like in LFG: you assume you have two verbs that are each giving you their own arguments, and then you link it up; you can think of this as a dependency structure, just in LFG terms. And you can see there are two domains, one here and one here. Okay, so this is standard stuff.

Here comes the analysis of the complex predicate, 'Nadya let Yasin cut the plant'. Again, you have the argument structure, you have two predicates, 'give'/'let' and 'cut', but you see, I've put them together here.
So I say, well, this one actually... can I write on this? Do I have something to write with? Magic, there's something here. Nope. Somebody must have written on it because you did wipe it away, right? Okay, maybe we'll keep going then. So what 'give'/'let' has, what I assume, oh, look at that, thank you. So as one would expect, 'give' has three arguments: an agent, a goal, and, like the 'tell', a theme, and again I would say it could have an event as well. I assume nobody would quarrel with me over this. So you have three arguments, agent, goal, and then this one, and the way it works in my system is that the argument structure for 'cut' gets substituted in here. So that's what I've done up there: 'give'/'let' has three arguments, one of which is actually another argument structure, another predicate. That gets substituted in, so that's what you see up there. You do the substitution, but then, the way these complex predicates work, you also have to identify two of the arguments. And I assume there are systematic rules about which ones you can identify; in this case, it has to be the highest one here and the lowest one there, and they get put together. So what ends up happening is that you have a combined three-place predicate. You have an agent, a goal that's also the agent of the other predicate, and a patient. And that translates to a complex predication, 'let-cut', with three things in it: subject, indirect object, and object. The subject is Nadya, that's the agent. The indirect object is Yasin, that's this one. And the object is 'plant'. And you basically get a monoclausal structure, right? You don't have any embedded predicates. So that's my analysis of these things. Questions? So far so good.

So this is what I did in my dissertation, and people said, well, can anything go? And I thought about it a bit more and I decided that no, anything cannot go; there are rules about this, and I'll show you those in a minute. But what I wanted to do first is give you another example. This is based on a dissertation that was done in Konstanz. Again, two things that look very similar on the surface. So (7a), Nina me bhay hai, 'Nina is fearful', literally 'there is fear in Nina', so you have Nina plus a locative, this is 'fear' and this is the copula; versus (7b), Nina ko bhay hai, 'Nina is afraid'. So the only difference is, again, a case marker, me versus ko; otherwise the word order is all the same, it looks all the same. So if you're just looking at the surface, you think, well, these are the same thing. But what Sulger did, he tried, I mean, this is a very small predication, so it's very difficult to do tests with anaphora and agreement and all this kind of stuff, but he did try, and he actually figured out from the tests that these behave differently. This one is more like a copula construction which has become metaphorical, and this one is a complex predicate. It turns out that in this one the 'be' and the 'fear' have combined to do a single predication, whereas this one really says 'there's fear at Nina', and then you can interpret that whichever way. These languages don't have a 'have' verb, right? So this is how you do 'have'.

So here are some tests for noun-verb complex predicates; the ones I told you about before work well for verb-verb complex predicates. You can look for the contribution of an extra argument by the noun, and for the determination of the case on arguments of the noun.
None of these will be hard and fast, but you can look at these. And then there's one that is due to Carnes, who wrote on complex predicates in English: you cannot substitute the noun with a pronoun or a wh-phrase. So if you ask of this one, 'Nina what is?', that won't work, or 'Nina it is', that won't work; the complex predicate reading goes away. Whereas if you ask of the other one, 'Nina what is?', you can say, well, what is going on with her, what is in Nina? Well, she might be afraid, so you can answer that, and you can pronominalize this as well, but you cannot do that here. So you get tests like that, and these tend to work quite well even when it's just a short predication like this.

Tests that I've found not to be reliable for complex predicates, which people do like, are linear adjacency, whether things are next to one another, and whether they can scramble. The complex predicates I've looked at can all scramble. The default tends to be that they're next to each other, but that doesn't have to be the case. Putting negation in places, I think that's what you were asking about, or other adverbial modification: these are all indicative, but they're not really good tests, because they seem to be more about surface structure, constituency and scope. They're not testing the underlying argument structure, what's going on. And if you look at morphological causatives, if you have a verb and a causative morpheme and you're trying to figure out whether this is biclausal or monoclausal, which it could be in principle, I guess, it's really hard to put anything in there at all, right? Negation or adverbs, because this is a morphological thing, so it's really hard to just put something there.

Okay, so here are some more noun-verb complex predicates. Here's one where you can see that the noun licenses an extra argument. So this is not 'fear' but 'love', but otherwise these are the same kind of thing. Let's start with this one. You can say Nina ko Yasin se pyar hai, so Nina, that's a typo, 'Nina loves Yasin'. Literally, 'to Nina is love', and if this were a real noun, then this should be a genitive, right? Because usually if you have a noun and it has dependents, they appear as genitives, so 'to Nina is love of Yasin'. But you're not saying it this way; you're putting an instrumental here, and this instrumental is not a nominal case. It's an indication that these things are actually combining and giving you a whole combined argument structure. You cannot do this with the copular construction with the locative; it just doesn't work. Again, we had native speaker judgments for this, not just one speaker but several. Questions about this? Okay, so in this case, these two are together. If you want, I can write the argument structures on the board, and in this case, this is the copula and it's placing this thing in relationship with that thing.

Right, so now we've had some examples of complex predicates and how to test for them. Now, what are the constraints on argument merger? Okay, so LFG has actually done a lot of work on complex predicates as opposed to other theories; I think we are really the most advanced with respect to this. I'm not the only one who has worked on complex predicates, I have done a lot, but there have been others.
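Before moving on, here is a minimal sketch, in plain Python dictionaries, of the two dependency structures contrasted above: the biclausal control analysis of the instructive versus the flat, monoclausal analysis of the permissive. The attribute names are simplified LFG conventions, not the exact AVMs on the slides:

```python
# Biclausal instructive: 'say/tell' embeds an open complement (XCOMP)
# whose SUBJ is controlled by the matrix OBJ-goal -- two PRED domains.
instructive = {
    "PRED": "say<SUBJ, OBJ-goal, XCOMP>",
    "SUBJ": {"PRED": "Nadya"},
    "OBJ-goal": {"PRED": "Yasin"},
    "XCOMP": {
        "PRED": "cut<SUBJ, OBJ>",
        "SUBJ": "=OBJ-goal",          # functional control
        "OBJ": {"PRED": "plant"},
    },
}

# Monoclausal permissive: one fused PRED, one flat clause --
# no embedded f-structure at all, just like a simple verb.
permissive = {
    "PRED": "let-cut<SUBJ, OBJ-goal, OBJ>",
    "SUBJ": {"PRED": "Nadya"},
    "OBJ-goal": {"PRED": "Yasin"},    # also the agent of 'cut' at a-structure
    "OBJ": {"PRED": "plant"},
}
```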
So the current state that everybody agrees on is that complex predicate formation involves a complex argument structure with embedding, that's what I've written up there, but it corresponds to a simple functional structure. That's the example I showed you, right? So there's a mismatch between the underlying argument structure and the actual syntactic dependency structure.

Now, how can you get complex predicates? All the ones I've shown you are periphrastic, so there are two independent syntactic items, which is difficult for syntactic theories in general, because most syntactic theories think there should be one main predicate and none of this combination business. So it was actually hard to do in LFG as well. So you can get these, but you can also get them via morphological means, and within LFG it actually doesn't matter much whether you have two syntactic items or a verb plus a morpheme, like a causative morpheme. Because what you do is you have the same type of thing: you have two argument structures, they get combined, and they link to a monoclausal functional structure. So it doesn't matter very much whether it's something like this or something like this. You have some kind of 'cause', some agent who makes somebody do something, some goal or causee, and then there's some thing they make them do, and that thing might be 'cut', agent-patient. So 'I make you cut the tree' would look exactly the same. In this case there might be two verbs, and in this case there might be a verb plus a causative morpheme. We treat both of these cases the same; we abstract away from morphology versus syntax. That's one of the things LFG does, and I've found it very useful actually, because it makes you think about the underlying structures rather than the surface. And treating these two the same actually goes back to Alsina, who argued for that for Bantu and Romance complex predicates.

Okay, so far so good. There also seem to be two types of argument merger. This is based on a dissertation by Rosen; she's not an LFG person, but she looked at argument merger in Romance. And I proposed, after thinking about this a bit, that there are basically only two types of argument structure merger, and that these actually mirror what we know from syntax: they mirror what has been identified as control versus raising in syntax. So that's what I propose. I'm not so sure about this proposal, I haven't tested it all the way, but this is what I propose. All the other stuff I've been telling you about has withstood the test of time, so that seems to work; this part is newer, but I think it makes sense. So I'd like to put it out there to see whether you have languages where this works or doesn't work.

Okay, so argument identification at the level of syntax, so f-structure in LFG terms, has been called control and raising, right? We all know control and raising, yes? 'It seems to be raining' would be raising; 'I want to go home', no, I don't want to go home at this point, 'I want to have a beer', is control, okay? But in each case, you have an embedded structure and there's some kind of identification. So I propose that argument identification similarly exists at the level of a-structure, and this leads to complex predication, or what people have called clause union or argument merger, et cetera. So here's the table. At syntax, if you have control, then you have an embedded subject that is controlled.
If you have raising, then people have talked about this as exceptional case marking. But in neither case is there going to be a complex predicate. It's only when you combine things at the level of argument structure. And if you have a controlled argument there, these two would be identified, I'll just call them both 'i', so these two will be the same; that's what I would call a control configuration, I have it in the slides as well. Or you could have basically just a unification of both argument predications, and that's what I would call raising. And in both cases, that would be a complex predicate. So I have some examples; I'll show them to you in a minute.

Before that, I wanted to draw a parallel to other theoretical frameworks. There's Gillian Ramchand's first phase syntax in minimalist terms, where she proposes, this is a fashionable thing right now in minimalism, this little vP. And little vP is basically where your argument structure happens. In Ramchand's terms, complex predication happens within the little vP, whereas control and raising would happen above that, so above the big VP. So if you happen to be familiar with this theory, that's the parallel there. If you are more into force-dynamic interpretations à la Talmy and Croft, et cetera, what you get, if you think about it in terms of sub-events, is that they merge into one complex event, but have a primary predication.
That's complex predicates, okay? So you wouldn't have secondary predication. And I'll talk about this a little bit more. Okay, so, moving on: now, different argument mergers. I'll go back to the permissive examples that we had before, because you can see this quite nicely there. And there's been some debate between me and some minimalist colleagues about how exactly to analyze these; I'll show you what I think. So this is argument fusion, which I think is analogous to syntactic control: maa ne baccon ko kitaben parhne din, the example you've seen before. 'Give' has three arguments: the mother; the children, which is the indirect object, the goal; and then this thing, reading books, that's the event. So then you have, instead of 'cut' over there, you'd have 'read', read something, an agent and a patient. And the agent of the reading is also the indirect object, the goal, of the allowing. So this is the one that has two roles at argument structure. Again, you can see the agreement going with 'books' here.
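A schematic of this fusion, on my reading of the analysis (the representation is invented for illustration): the embedded argument structure is substituted into the event slot of 'give'/'let', and the matrix goal is identified with the highest embedded argument:

```python
def fuse(matrix, embedded):
    """Control-type merger (argument fusion): substitute the embedded
    a-structure into the matrix event slot and identify the matrix goal
    with the highest embedded argument (here, its agent)."""
    assert matrix["args"][-1] == "event"       # slot for the co-predicate
    kept = matrix["args"][:-1]                 # agent, goal
    shared = embedded["args"][0]               # highest embedded argument
    inherited = embedded["args"][1:]           # e.g. the patient
    return {"pred": matrix["pred"] + "-" + embedded["pred"],
            "args": kept + inherited,
            "identified": ("goal", shared)}    # one argument, two roles

let = {"pred": "let", "args": ["agent", "goal", "event"]}
read = {"pred": "read", "args": ["agent", "patient"]}
print(fuse(let, read))
# {'pred': 'let-read', 'args': ['agent', 'goal', 'patient'],
#  'identified': ('goal', 'agent')}
# Three syntactic arguments: mother (SUBJ), children (OBJ-goal), books (OBJ).
```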
Okay, but then this construction also allows for another way of looking at it. If you look at (10), pita ne per katne diye, 'the father let the trees be cut', there's nobody there who's doing the actual cutting; that person is unsaid, understated. So how can one think about this? I think it's analogous to syntactic raising. The permissive, this one, was subject, like I said, to a lot of analysis by Alice Davison, and it was analyzed by her as syntactic raising, so not as a complex predicate but as syntactic raising. And Rajesh Bhatt gives an even more complex analysis as raising, but with restructuring in the sense of Wurmbrand, which is sort of a difficult concept. Anyway, I disagreed with both of those, and I showed that both of these types, this one as well as the other one, have to be analyzed as complex predicates. So these ones work exactly like the argument fusion ones; I wouldn't want to do it in syntactic terms, so not in terms of a biclausal analysis.

So here's what I think is going on. This is the picture you have on the board there, right? You have the allowed-to-do reading; this is the argument fusion: you substitute that in and then you say, okay, these two have to be identified. But there's also this other reading, this allowed-to-happen reading; this is due to Alice Davison, that's what she called it. Here, arguments from both predicates are taken together, but no argument fusion happens. So basically this thing gets raised to the combined domain, that is what you have there, and then you just get a subject and an object, like that: father, trees, 'cut-let'. But you don't have anybody in between. Questions about that?

[Audience: What is the case marker on the trees?] That's the nominative, which here is like an object case marker. I should say that there's a tradition now within Urdu/Hindi linguistics that you gloss the unmarked case as nominative, and it can be either on subjects, those that are not in the perfective, et cetera, or on objects. In this case it's the object of the complex predicate. But Urdu/Hindi also has differential object marking, so you could also put an accusative on here, pita ne per ko katne diya, and then it would be a specific tree. So that's just independent; it works just like a simple clause in that case as well.

Okay, so here's the correspondence. You have 'let', patient, and this one corresponds to the subject and this one corresponds to the object. So you have a monoclausal predication again at the level of syntax, but a biclausal one at the level of argument structure: so a complex predicate, and not syntactic raising. This would be syntactic raising: 'Yasin can cut the plant'; this is how Rajesh Bhatt and a bunch of others analyzed it. You have a 'can', which has an event argument and a non-thematic subject, and you have a cutting. So here's your 'can': it has a non-thematic subject and an xcomp, and this subject basically gets raised to the subject of the matrix event. So that's what that would look like, very different from a predicate here and a predicate there.
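By contrast, a sketch of the allowed-to-happen reading in the same invented notation: the embedded arguments are taken up into the combined predication wholesale, nothing is identified, and the embedded agent simply goes unexpressed. The two-place 'let' frame is my assumption about how to render Davison's reading:

```python
def raise_merge(matrix, embedded):
    """Raising-type merger: union the argument structures without
    identifying anything; the embedded agent stays unexpressed."""
    kept = matrix["args"][:-1]            # here just ['agent']
    inherited = embedded["args"][1:]      # skip the embedded agent
    return {"pred": matrix["pred"] + "-" + embedded["pred"],
            "args": kept + inherited,
            "identified": None}

# 'let' deployed without a goal argument (the allowed-to-happen frame):
let_happen = {"pred": "let", "args": ["agent", "event"]}
cut = {"pred": "cut", "args": ["agent", "patient"]}
print(raise_merge(let_happen, cut))
# {'pred': 'let-cut', 'args': ['agent', 'patient'], 'identified': None}
# Monoclausal again: father = SUBJ, trees = OBJ, and no cutter expressed.
```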
Okay, so now let's talk about verb-verb complex predicates, which are interesting as well. These have been described in a number of works all over the place; they're ubiquitous in Urdu/Hindi, so many people have worked on them, particularly Peter Hook. So: Nadya ne xat likh liya, 'Nadya wrote a letter (completely)'; Nadya ne makan bana diya, 'Nadya built a house (for somebody else)'; Ram ga utha, 'Ram sang out spontaneously'. What you get is two verbs in a row. The first one is in the stem form, there's no other morphology, but it turns out this is an old participle, and in other languages you will find old participle morphology still hanging about here. And this is the finite version. They are harder to separate than the permissive, but you can separate them; you can put stuff in between them. This is a case where many people said nothing can come between them, and then you start watching Bollywood movies and you think, wait, they just put something between those things. So in real life you can, for sure.

As with the permissive, one tends to call the thing that is the head of the complex predicate a light verb; I've sort of glossed over that. But in this case, the light verb seems to be lighter than what we saw in the permissive. The permissive was a straightforward thing, giving or letting somebody do something; that seemed quite clear. Here there's no literal taking involved, and here there's not really any giving involved, well, maybe only in a metaphorical sense; a lot of the time these are very, very light verbs. And they don't contribute an independent argument to the overall predication. And they all seem to be completive, so something aspectual is going on. And they also add semantics that people have written about a lot: suddenness, responsibility, benefaction, surprise, et cetera.

Okay, so we analyze these as instances of event modification, of event fusion, and I'll show you what that is about. This is a different type of complex predicate: there's no embedding of argument structures like before, it's different again. Let me show you some more things about these. So there's no extra argument that these light verbs contribute; there's even one missing here from 'give'. But it turns out that these light verbs determine the subject case marking. If you have an unaccusative light verb, the subject will be nominative, and it doesn't matter what's here; and if you have a transitive or ditransitive light verb, then the subject is ergative, okay? So even though you can't see an independent argument being contributed, it's very clear that this thing is playing a role. It's not just sitting there doing aspect or something. It is actually saying, hey, I have an agent argument, or a patient argument in the case of the unaccusative, and I'm telling you what the case marking of the subject is going to be. And this is very, very robust. Okay, so that needs to be accounted for.
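Stated as a rule of thumb (my schematic restatement of the generalization just described):

```python
def subject_case(light_verb_class, perfective=True):
    """The light verb, not the main verb, fixes subject case marking:
    a (di)transitive light verb yields an ergative subject (in the
    perfective), an unaccusative light verb a plain nominative one."""
    if light_verb_class in {"transitive", "ditransitive"} and perfective:
        return "ne (ergative)"
    return "unmarked (nominative)"

print(subject_case("transitive"))     # likh liya 'wrote completely' -> ergative
print(subject_case("unaccusative"))   # ga utha 'sang out' -> nominative
```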
Okay, other characteristics of light verbs: they are always form-identical with a main verb, it turns out, as far as I can tell. This is a proposal that's out there. I was in a project once run by Aditi Lahiri, and I was interested to see where light verbs come from, how they develop, grammaticalization, et cetera. And as far as I could look back, they were always there. Maybe not the same ones, but I couldn't identify a point in Indo-Aryan, and Indo-Aryan goes back 5,000 years, where they weren't there. So from what we knew from other languages as well, we proposed that the light verb and the main verb versions are actually the same thing: there's one underlying entry. You can ask me more about this and I can try to explain it, and I think one needs to go deeper here, but I think what's going on is that there's one entry, and you can deploy that thing, so 'take' or 'give' or whatever, either as a main verb or as a light verb, and it depends on what syntactic configuration you're in.

Now, grammaticalization does happen, but it turns out, from what I could see, that if you do get grammaticalization, like 'go' in Urdu/Hindi has become the new future marker, it tends to be from the main verb version. And you can ask, how do you know? Well, it doesn't come from the complex predicate version; it seems to come from a biclausal predication: 'I go to the market' ends up meaning 'I will go to the market'. This happens in a lot of languages, and it seems to be via the main verb. So this is something that I think needs to be looked into more, but as far as I can see, the light verbs are stable, they don't change. This goes against what people have said about grammaticalization, where there's a strong belief that you have a verb, and it bleaches to become a light verb, and that bleaches further to become an auxiliary or a modal or something like that. And what I could not find is this step: that doesn't seem to happen, so you tend to go the other way. I'm not going to talk about this very much here, but it's tricky figuring these things out, because again you have to try to look at the underlying structure, and if you're looking at historical corpora, that's not always easy. But this is what seems to be happening. So unlike what the grammaticalization literature thinks, with its stages of successive bleaching, what seems to be the case is that you can either use this verbal thing as a main verb or as a light verb.

Okay. So the open questions, which I do get asked and which are fair: how are the light verb versions related to this underlying lexical semantic representation, and what is it exactly? What should it be? As far as I have thought about it, what should be in this underlying representation, so in your lexicon, is some information about valency, how many argument slots you have and what type you like; some lexical semantic information, so you can figure out the case markings, so you know this thing is going to be marked as an agent, this thing is going to be marked as an experiencer; and you will have to have some information somehow about Aktionsart, so, what kind of a verb am I, am I an activity-type verb or not, that seems to be important; also, am I unaccusative or unergative, but that has to do with agency. And most importantly, I think what needs to be in there is information about the event semantics. I think the main difference between a light verb and a full verb is that the full verb says, I'm an event, I'm a whole event, and the light verb just says, I'm going to modify an event. So the light verb is not deploying its event semantics, it's leaving that out; what it is deploying is some of this other stuff, but not its event semantics. Okay. So now I get into slightly more hairy waters.
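One way to schematize the single-entry idea as I read it (the field names and the sample entry are mine): the entry carries valency, case-relevant lexical semantics, Aktionsart, and an event; deployed as a main verb it contributes everything, deployed as a light verb it withholds its event and contributes modification instead:

```python
from dataclasses import dataclass

@dataclass
class VerbEntry:
    form: str
    valency: list       # argument slots, e.g. ['agent', 'goal', 'theme']
    aktionsart: str     # activity, achievement, ... (matters for combination)
    event: str          # the event the verb denotes as a full verb

    def as_main_verb(self):
        # Full deployment: the entry predicates its own event.
        return {"event": self.event, "args": self.valency}

    def as_light_verb(self):
        # Same entry, event withheld: it only modifies the co-predicate's
        # event (completion, benefaction, suddenness...), while still
        # imposing things like subject case via its argument slots.
        return {"event": None, "modifies": "co-predicate's event",
                "args": self.valency}

de = VerbEntry("de 'give'", ["agent", "goal", "theme"], "achievement", "giving")
print(de.as_main_verb())    # ordinary ditransitive 'give'
print(de.as_light_verb())   # benefactive light verb, as in bana diya
```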
Theoretical linguists tend to think about event semantics, but they don't look at interesting languages; and the other way around, event semantics is not something that has made it into the typological-functional literature, which I think is a shame, and I think that needs to happen. The way I've been thinking about it is to think about events also in terms of sub-events. I worked on these in my dissertation, where I used lexical conceptual structures based on Jackendoff, but that system is too unconstrained, and there have been some ideas about how to do it better. How much time do I have? About 10 minutes, are you sure? But then I won't have questions, right? You may go 10 minutes, yeah. Okay, then I'll do the events and then I think I'll stop.

Okay, so here's some stuff. If you thought things were bad, we can look at Murrinh-Patha, which is an Australian language, and it contains complex predicates that make my head swim. The Australian languages do it somewhat the other way around. They have a lexical stem and a classifier, which is the light verb, and the combinatorial possibilities are different from what we've seen so far. So what you need to know is: you have lexical stems, which can't occur alone; classifier stems can't occur alone either; and the classifier stems are used to classify the kind of event described. That's where they're similar to the Urdu ones, where I think what the light verbs in Urdu are doing is giving you a bit more information, doing some event-type modification, saying what kind of thing it is: this one was done completely, this one is for the benefit of somebody else. So it's modifying; this is the main event, making, writing, singing, and then these are there to say, there's a little extra thing I want to tell you about the event. And that is what the Murrinh-Patha ones are doing as well. So the classifier stem says what kind of an event it is, like involving a long pointy object or a flat object; they have this way of glossing these classifiers. So bash(14), poke(19) and slash(23) can all be used in predications of 'cause contact', but poke implies long pointy objects, bash flat objects, and slash the long side of a stick. Now, this is data found in Street's dictionary. So you can do things; so this one is always 'numb', right, parang, however you say this. And Murrinh-Patha has a hugely complex morphological system, which I won't try to get into, but what you can see is you always have this classifier here at the beginning, and it comes with person/number morphology. So if you have poke, it means 'I'll numb your hand by injection' or 'I make him numb with a stone spear'; if you have bash or slash, 'I'll numb your foot with a stick'. Okay, so you numb them in all sorts of ways. And this is apparently quite productive; Rachel Nordlinger has done work on this, and people can do it, and they can make up new things if you ask them to.

So Melanie Seiss, again a dissertation at Konstanz, was trying to figure out what the restrictions are on how to combine these things. Can anything just combine with anything? What are the argument structure restrictions, what's going on? And she ended up saying, well, there's some kind of blueprint for the predication, and the individual pieces of a joint predication slot into the overall blueprint. And that explains, for this language, why they can't appear in isolation.
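As a preview of the blueprint idea, which she spells out next, here is a toy rendering of it as slot-filling over a predication template; the slot names are invented:

```python
# Blueprint: CAUSE(alpha, BE(beta, RESULT)), plus a non-agentive instrument.
BLUEPRINT_SLOTS = {"cause", "result", "instrument"}

def combine(stems):
    """Each stem contributes partial material for named slots; the
    combination is licensed only if every contribution fits an open
    slot of the blueprint -- which is also why neither kind of stem
    can occur on its own: alone, it leaves the blueprint unexpressed."""
    filled = {}
    for stem in stems:
        for slot, value in stem["fills"].items():
            if slot not in BLUEPRINT_SLOTS or slot in filled:
                return None                # doesn't slot in: no predication
            filled[slot] = value
    return filled

lexical_stem = {"form": "mel", "fills": {"result": "flat"}}
classifier = {"form": "slash(23)", "fills": {"instrument": "long side of a stick"}}
print(combine([lexical_stem, classifier]))
# {'result': 'flat', 'instrument': 'long side of a stick'}
#  -> 'flatten with the long side of a stick'
```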
So here's what she means by this. She's saying, okay, there'll be some kind of a blueprint: if I say something, there was an event and it caused a change of state. This is not Jackendoff-type stuff; what you have is a CAUSE, where something, alpha, causes something else, this beta thing, to be with some kind of result, okay? So something is causing something else to have some kind of result; that's the change of state. And then you can say by what this thing is causing it, some kind of instrument, which is non-agentive, is what this means. So this is the semantics you're trying to express. Then you say, okay, what are the bits that I need? I have mel, which means the result is going to be 'flat', and I have slash, which says 'with the long side of a stick'. So you slot in the 'flat' here and the 'long side of a stick' there, and then you get things like 'flatten with the long side of a stick'. So the idea is that this is given, this is the semantics you're trying to express, and then the language is trying to gather together its bits to express that predication, and it can't do it all at once. So that was kind of fun. It's not about two bits and how do I combine the two bits; it's more about, this is my thing, and what are the bits that I can find to fit in there? And if you get things that do not fit into the blueprint, then the combination does not happen.

Okay. Where I want to go with that is, again, Ramchand's first phase syntax. She has some strange trees, but what she has as a really good insight, I think, is that a basic event, this little vP, is always decomposed into an initiator, a process and a result. This is very different from what you see in the literature generally, which is something like Jackendoff: there's a cause and there's a result, two things. What Ramchand argues for is that there are three things: an initiation, a process and a result. And I think that's quite interesting, because those give you different slots where you can slot different things in. So you can think of a complex predicate as doing exactly that: each part of the complex predicate is instantiating a part of that event-semantic blueprint. Gillian Ramchand and I actually did an analysis together once where we said, okay, so what about these things, what's happening here? And we looked at them, and we took very seriously that this 'write' is actually an old participle form, and this 'take'. Native speakers will think that the resultative, completive part is coming from here. But if you look at the structure of it, what this 'write' must be doing is giving you the process, and, because it's an old participle form, also the result of the event. And this one, remember, tells you how the subject should be marked, so it's probably marking the initiation part of the predication, the cause part. So that's how we analyzed these. I'll do this in the Jackendoff-style notation. What we would have is some kind of event that has an initiation, a process and a result, and here's what likh liya 'write take' would look like. You have the initiation, that's what this one contributes, and 'write' contributes these two things. But since 'write' contributes these two sub-events and the argument involved in them is, in this case, the same, what you end up with is just two arguments: Nadya, which is affecting this letter; that's what this 'aff+' means.
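In the same toy style, here's the init-proc-res decomposition of likh liya as just described: the light verb instantiates the initiation, the old-participle main verb the process and the result, and because the main verb's two sub-events share their participants, only two arguments surface. This is my schematic of the analysis, not the published formalism:

```python
def likh_liya_event():
    """Ramchand-style decomposition of likh liya 'wrote (completely)':
    three sub-event slots, each instantiated by a piece of the CP."""
    return {
        "init": {"by": "le 'take' (light verb)", "participant": "Nadya"},
        "proc": {"by": "likh 'write'", "participant": "Nadya",
                 "undergoer": "letter"},
        "res":  {"by": "likh (old participle)", "state": "letter written"},
    }

# Nadya figures in init and proc, the letter in proc and res, so the
# combined predication has exactly two arguments, Nadya bearing aff+.
print(likh_liya_event())
```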
Okay, so again, one can start thinking about it in this way: we have verbal event structures, event structures that have three things in them, and what a language needs to do is fill in these bits and pieces, in this case instantiate the initiation, the process or the result. If you have an activity with no result, then you don't instantiate the result. If you have a result, then you need to instantiate it. If you have something that doesn't get initiated, like an unaccusative verb, then you don't have the initiation. So you have different versions of that.
So, okay, now this last point, and I'll finish with this. If you think about event semantics, in whichever way you want to model it, you get a clear distinction between auxiliaries and light verbs, which is also a distinction that needs to be made. Light verbs contribute to an independently existing event predication: they instantiate some sub-event of it, they do some event modification. Auxiliaries situate an event in time. They don't slot in anywhere here; they take this whole thing and say you were in the past, or you're going to be in the future, or you're right now, but they're not going to slot into any one of these sub-events. And that's an important distinction to think about. And the modals situate an event with respect to possible worlds: this may happen, that may not happen. Again, they're not going to slot into any one of these sub-events; they take the whole thing, situate it somewhere, and say this whole thing may or may not happen. So I think this is an important way to distinguish auxiliaries and modals from primary event predication. They never form complex predicates, on my worldview, and they will be subject to diachronic reanalysis, unlike light verbs, which sit inside this basic event predication.
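Again as a rough aside, the light verb versus auxiliary versus modal contrast could be sketched like this; the function names and encoding are hypothetical:

```python
# A minimal sketch of the distinction just drawn (my own toy encoding):
# light verbs contribute inside the event skeleton, while auxiliaries and
# modals take the finished event and situate it externally.

def light_verb(skeleton: dict, slot: str, contribution: str) -> dict:
    """A light verb fills or modifies one sub-event slot: init, proc or res."""
    assert slot in ("init", "proc", "res")
    return {**skeleton, slot: contribution}

def auxiliary(event: dict, tense: str) -> dict:
    """An auxiliary wraps the whole event and locates it in time."""
    return {"tense": tense, "event": event}

def modal(event: dict, force: str) -> dict:
    """A modal wraps the whole event and situates it in possible worlds."""
    return {"modality": force, "event": event}

# The light verb lands inside the skeleton...
ev = light_verb({"proc": "write", "res": "written"}, "init", "take")
# ...whereas tense and modality only ever wrap the event as a whole:
print(modal(auxiliary(ev, "past"), "may"))
```

The point the sketch encodes is that tense and modality never occupy init, proc or res, which is why, on this view, auxiliaries and modals never form complex predicates.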
Okay, I think I'll stop there. No, one more thing: what remains problematic are serial verbs, these sort of super-events, things like 'I climbed the tree looking for insects' versus 'I climbed the tree and saw stars'. The latter is bad as a serial verb; climbing the tree looking for insects is good. These are clearly two separate events, but there's something about the first pair of events that is natural and the second pair isn't, so I can do the first as a serial verb and not the other. And I have no idea how anybody can model that in a well-formed way so far. So that's an open question as well. Okay, I'll stop there and take questions. The last bit was a bit fast on the event semantics, but I'm happy to go through that again.
So, about light verbs affecting the case marking pattern, and that being a diagnostic: there was one example where, well, okay, auxiliaries, I guess, don't meet that criterion. But I was wondering, sometimes auxiliaries seem like maybe they affect the case. Split ergativity might be related to tense marking, for instance, which is maybe an auxiliary thing, and maybe there are other examples. As I said, I'm just thinking about ways to tell, from looking at a sentence, that it's a complex predicate, as opposed to trying to understand the event semantics, which is more tricky. So I like using case as a diagnostic, but I just wonder about auxiliaries.
Right, that's a good question, because the split ergativity is based on past tense or perfective forms, et cetera. Well, it depends. I mean, often it is the verb form that triggers it, right, because it's some kind of old participle. But my instinct is that that should be a separate sort of thing, because that's about the whole structural organization of the case marking rather than about individual verbs doing what they're doing. But yeah, that's tricky to look at.
Another case marking question: there was the 'love' example, loving somebody, and in the complex predicate it was marked instrumental. Why is it instrumental? Because the instrumental is also used as a comitative: I talk with somebody, or, here, I love with somebody. In principle you could also use ko, which is the dative-accusative, but there is a strong constraint against two similar case markers in this language; they really don't like that.
You mentioned that it's very difficult to do this kind of analysis with historical material. So may I ask, what kind of methodology did you use? How did you determine what is possible and what is not, based on historical material, where you don't have access to speakers? Right. For the historical stuff, what we tried to see was whether we found examples that match those criteria: is this actually a separately controlled argument in there or not? And you did find examples like that, where you could see it was actually part of the same predication. And if you had tried to analyze it as two separate predicates, you would have expected certain things to be there, certain arguments or certain case marking possibilities, that weren't there. So it was hard, but it could be done.
May I ask the size of the historical corpus you used for that? I didn't use a corpus for this, but all the secondary descriptions: there's a very good book by Bertil Tikkanen on these kinds of predications, and the dictionaries, et cetera. So what we were looking at is: is there a stage where you cannot find light verbs? And that was not the case, right? So it wasn't about what doesn't work; it was about when it starts becoming possible, and it turned out it was always possible. Now, what has changed over time is that these verb-verb complex predicates have become more frequent. This is something that Hook has looked at, and Hook and Pardeshi, and he interpreted it as grammaticalization, so that's something that still needs to be understood. And what has also happened since Sanskrit is that verb particles have been lost. You used to be able to have particles on the verb, and that has gone. And particles often mark telicity in some way, right? The result state, et cetera. So there seems to have been a shift from particle verbs to more complex predicates. There's clearly a connection, but nobody has really understood it. That doesn't mean that light verbs suddenly appeared; it seems to be more that the functional load has shifted onto the light verbs as the particles weakened and disappeared.
By particles, you mean things like 'call up'? In Sanskrit it would be like 'up-call', but as a pre-verb? Yeah, the pre-verb, yes, the pre-verbs.
Yeah, so I think there's a lot to be done here, because one does know that serial verbs change over time: these verbs become prepositions and complementizers, et cetera. Adjective-verb and noun-verb complex predicates tend to lexicalize, at least in these Australian languages. And you do get more frequency here, so there's clearly some kind of trade-off.
You mentioned that the light verb determines case marking on the subject. And I wondered whether that has to do with the event semantics, the type of event that the light verb describes, which is maybe incompatible with, say, a very agentive subject; or whether it's transitivity; or how is it determined? Because it looks like transitivity, but at the same time it might be the event semantics. Yes, so the question is, how do you understand transitivity? What does that mean exactly? If you think about it in terms of event semantics and say, okay, the light verb is giving me an initiator, then it's giving you transitivity; it's giving you some kind of agency. So if the light verb is doing that part, then it has something to say about the subject. Whereas with an unaccusative you don't get that, so the case marking works differently. Does that make sense? Yes.
You had a question as well? Yeah, sorry, I'm working on serial verb constructions, so you'll see where I'm coming from. I was wondering if you could explain: you said that the lexical conceptual structures are not constrained enough, and that was the main reason you wanted to shift over to this kind of three-place approach. Coming from serial verb constructions, I feel like the lexical conceptual structures might be more flexible for that purpose. So I was wondering, what kind of constraints do you need, to make the complex predication work, that you get from that system and that you wouldn't get otherwise?
So I'm not sure that I said the lexical conceptual structures are not constrained enough. Or maybe I did. The thing is, hang on. Yeah, I didn't say this; this is what these colleagues said. They have a paper saying, okay, if you're trying to make predictions about what can go with what, and you're using LCSs, and if you work with LCSs you know this is true, you can invent all kinds of stuff. So they really wanted something more formally constrained, and this colleague, Patrick Caudal, is a formal, mathematically minded linguist, so he really wanted something constrained, and that's what this blueprints idea was. So they're thinking of a type hierarchy: this can be one kind of thing and that can be another kind of thing. What they're thinking of, in terms of LCSs, and I didn't present this properly, is: we're going to have a certain type, and it can inherit from other types, or it can have parts, but not just anything goes. So you can have subparts of a type, but it's not a free-for-all. They're trying to build a type logic of what there can be. That's what they want to do.
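One possible toy rendering of such a type logic, hedged in the same way; the slot names and types here are illustrative, not the actual Caudal and Seiss system:

```python
# A toy rendering of the 'type logic' idea described above (my own encoding):
# predication types form a hierarchy, each blueprint licenses a fixed set of
# slots, and a combination is rejected unless every piece fits a slot.

class Predication:
    pass

class ChangeOfState(Predication):  # CAUSE ... BECOME ... RESULT
    pass

class Activity(Predication):       # a process with no result
    pass

# Which pieces each blueprint licenses.
BLUEPRINTS = {
    ChangeOfState: {"cause", "result", "instrument"},
    Activity: {"cause", "manner"},
}

def combine(blueprint, pieces):
    """License a combination only if every piece fits the blueprint's slots."""
    return issubclass(blueprint, Predication) and set(pieces) <= BLUEPRINTS[blueprint]

print(combine(ChangeOfState, {"result", "instrument"}))  # True: 'flatten with ...'
print(combine(Activity, {"result"}))                     # False: no result slot
```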
Me, I'm not so sure about that. What I find interesting is to think about it in terms of sub-event semantics, so that you have more of a constraint about which parts of the sub-event structure the pieces are trying to plug into. But I don't think that will work for serial verbs; with serial verbs, I have no idea how to do it.
So what's your idea? I don't know, because I feel like some of the things that are being described as serial verbs, and I work on Oceanic languages, some of them are maybe more like complex predicates. That is often the case, yeah. So the ones that are described from a Role and Reference Grammar perspective as nuclear junctures, there's often some kind of compounding going on there. Yeah, those tend to be complex predicates, I think. In that Role and Reference Grammar distinction, the nuclear ones are the ones I've been talking about. And the other ones are called what? The core layer. Yeah, okay. And those are quite different. Yeah, I think there's a lot going on in there. So I'm not sure; I think there are different types of structures in there, but they seem to be very flexible. Okay, good.