So welcome to our linguistics research seminar. Our guest is Matthew Carroll, who has a PhD from the ANU, the Australian National University. And he worked, and still works, in one of the most interesting and challenging field areas, in southern New Guinea. He worked on a previously undescribed language called Ngkolmpu. The Ngkolmpu language is in the Yam family, spoken in West Papua. That's right, yeah. Well, no, only 15 kilometres; my field site is 15 kilometres from the border, though, so... Yeah, sorry, close enough. But you're dealing with the Indonesian authorities. Yeah, that's right, yeah. And he is currently a Newton International Fellow at the Surrey Morphology Group at the University of Surrey, and his project is on distributed exponence and inflectional redundancy from both a typological and a formal perspective. And here he is. All right, fantastic. Thank you, that was very nice. And thanks, everyone, for coming; I hope you're going to enjoy the talk. So, as Radosh says, I'm going to be talking about some work I've been doing for the past two years, examining the problem of redundancy in morphology, specifically within a domain called multiple exponence. Just to clarify, multiple exponence is a morphological phenomenon where you get a single grammatical meaning, or a cluster of grammatical meanings, the kind of things you mark in a word, marked multiple times in a single word, but without any corresponding multiplication of that meaning. So, for instance, in this example from Batsbi, a Nakh-Daghestanian language spoken in the Caucasus in northeast Georgia, verbs agree with the absolutive argument in both gender and number. You can see here, this is gender two, so we have these three gender markers here, agreeing in gender and number. The interesting thing here is that there are these three markers, right?
By virtue of having three, this doesn't make it in any way more gender two or more singular; it's just a grammatical fact of Batsbi that you mark it multiple times, right? This is just redundant marking. It's not the only type of multiple marking, though. In a related but distinct phenomenon, you get something I'm calling distributed exponence. This is where you have multiple marking of a single category, like we saw for multiple exponence, except in this case you need all of the elements to work out what's going on. So this is data from Nen, a Yam family language closely related to the Ngkolmpu language we heard about before. Verbs in Nen are marked for number across a four-way system between singular, dual, plural and large plural, and it's marked with both a prefix and a suffix. As you can see here, this is clearly marked with a prefix, clearly marked with a suffix. But if I was to ask you, well, what's the exponent of dual here? What's marking dual in this one? What would you say? Well, you'd say, well, here, it contrasts with singular, but if you just have the prefix on its own, it's not enough, because it's syncretic with the plural. The suffix, however, well, it's also marking dual; it's different from singular and from plural, but since it's also used in the large plural, it's not enough to tell you either. But the combination of the two unambiguously tells you that this is dual. So you have a level of structural redundancy here. You're marking a single category, dual, twice in a single word, but the hearer needs both elements in order to recover it. So there's clearly a level of redundancy here, but there's no individual piece that we want to just point at and say, oh, this is just redundant, right? So this raises a whole bunch of questions, some of them longer-scale than others.
The ones that I've been working on for the past two years, and the ones that hopefully I'll convince you I now have the answer to, are these three. What does it mean to say that some piece of a word, some morphological formative, is redundant? That is, how do we define redundancy? What does it mean to say that one piece is maybe more redundant than another piece? That is, how do we measure redundancy? And is there any inherent structure to the logic of redundancy that can create a possibility space, and how does the empirical domain map onto that? That is, how do we classify redundancy, what's the typology of redundancy? So that's what I'm going to go through today in the time I've got; it's a lot, and I'm going to do it in reverse order. I'm going to start with a very traditional, canonical-typology approach to describing the empirical phenomena here, just a typology of multiple exponence with regard to redundancy. Then I'm going to show how, whilst that is very insightful, it doesn't give us very much, and I'm going to recast this typology in explicitly model-theoretic, so formal, terms, borrowing from set theory. I will walk you through it, though; it's not hard stuff. And then I'm going to show that, using this model, we can provide extremely explicit definitions of the typological space. We can say exactly what it means to be redundant given a set of assumptions, and I'm going to show how we can develop some quantitative measures, so that we can begin to ask much bigger questions, such as: how does redundancy evolve? How stable is such a system over time? Do we find any evidence that these systems are selected for in some way? So let's begin with a typology of redundancy. To start, if we're typologising over languages and trying to classify data, we need a basic, functional definition just to get us going.
We need to know what we're talking about, right? So here are some definitions of redundancy. We might define redundancy as something like: functional redundancy refers to the situation where one part of a system can completely or partially compensate for the loss of another. If we lose a bit of the system and it still works, then the bit we lost was redundant somehow, right? So we can maybe think of this as a diagnostic. We can turn this into a linguistic diagnostic: if we cover up some morph, some formative, and we can still work out what the word means, then that morphological formative was redundant. That's from Wang and Zhang, but in this paper by Kafri they give a bit more detail and talk about overlapping functions, and that will come in later when we start talking about set theory. Notice that these are very general definitions of redundancy. They're both from papers on genetic redundancy, and there are actually a lot of very compelling parallels between linguistic redundancy and genetic redundancy. So what I'm going to argue over the next ten minutes is that when we apply this metric, remove the element, there are two different ways we can think about redundancy. The first I'm going to call contributional redundancy, which we might define, in linguistic terms, as the interaction between multiple exponence and cumulation. There's a paper by Gabriela Caballero and Alice Harris from 2012 where they look at multiple exponence more broadly and talk about redundancy, and this basically matches theirs. The new stuff, which is mine, is what I'm going to call specificational redundancy, and we'll go through it. So there are two parameters in the typology, each with three logical levels, which together create six different types of redundant exponence. The first is called full contributional redundancy.
We'll start with the parameter of contributional redundancy. Full contributional redundancy is simply when every marker contributes the same information to the word. So here we have our Batsbi example. If we were to remove any of these, if we were to remove, let's see if I can do this, we'll remove this one. Well, we still know that the verb is agreeing with gender two and singular. Even if I remove this one, it's still gender two and singular, because we have the others. Nothing is lost by removing any of them; they're all contributing the same thing. Seems fairly trivial, right? But it's not the only possibility. You can also get a more asymmetric relation, which I'm calling partial contributional redundancy: one marker makes some additional feature contribution. So for certain verbs in Turkish, in the past tense, you get multiple marking of the past. You have the stem, you have this dedicated past marker, the du, which occurs everywhere, but then you get this special first person plural agreement marker, the k, that only occurs in the past tense. So we know that it also means past tense, and here we have multiple exponence of past tense. But if we were to lose the du, well, we would still know it's past tense, so it's redundant. If we lose the k, though, well, we lose the agreement marking, right? So that's a different sort of redundancy: one marker is more redundant than the other in some way. Cool. And then finally we get overlapping exponence. This term goes all the way back to Matthews 1974, I think, for the keen morphologists in the room. This is where we have multiple marking of a single category, and there's a wonderful example from a Totonacan language: you get second person marked on the stem, in the first suffix and in the second suffix, but each suffix also marks some additional grammatical meaning, one progressive, one singular. So if we were to lose this pa, well, it would still be second singular.
We wouldn't lose the second person information, but we would lose the progressive information. So these have an overlapping distribution. In this way, we can see that there's redundancy: you're marking the same thing three times, that's redundant, why do it? But none of the individual pieces is redundant. Okay, parallel to this is what I'm calling specificational redundancy. This one's slightly harder to see. Contributional redundancy is about what you're bringing to the table; specificational redundancy is about how certain you are about what you're bringing to the table. So in the Batsbi example, these y's here are gender two singular, and we know that because we have this other element that is unambiguously gender two. However, the Batsbi gender paradigm is, well, a challenge, and this y could be gender two singular, could be gender three of either number, or it could be gender seven or eight in the plural. So if we didn't have that unambiguous element, there would be a huge amount of ambiguity; this could be all sorts of things. But regardless, removing one y doesn't change the uncertainty of the pattern; it's the same regardless. Each y only partially specifies gender two. So this is about certainty, and the markers can be asymmetrical. That might be a bit hard to see in this context, so let me give you a different example that might be clearer. Here's an example from Wipi, which is an Eastern Trans-Fly language spoken in the swampy Fly delta in southern New Guinea. It's unrelated to Nen, which we saw before, but has a similar sort of system. In this case, if you want to say, oh, I left a typo in, this should say 'three houses', plural, if you want to say 'I'm building three houses', you get the same kind of system, where you have number marked by both a prefix and a suffix.
The prefix here means non-singular, that is, either dual or plural, whilst the suffix means only plural. So if we were to lose the prefix, we're not losing any information: we still know it's exactly plural. So it's redundant. If we were to lose the suffix, though, we would lose most of the information about number, but not all of it, because the prefix means non-singular. We still have some information, just not all of it. So there's redundancy in terms of how much is specified: this one is entirely redundant, whilst this one is partially redundant. And then finally, we have distributed exponence. This was our example from Nen at the beginning, and I've got it here again in Wipi. If we want to say 'we're building two houses', arangin, we use this a prefix, which means non-singular, and we use this en suffix, which means non-plural. So if it's not singular, that is, it's more than one, and it's not plural, that is, it's less than three, well, it has to be two. So we have dual marking. If we were to lose the prefix, we lose some of the information; if we lose the suffix, we lose some of the information; but together they give us the whole picture. So in this case you've got some redundancy: if you lose a piece, you're not losing all the information, but none of the individual pieces is entirely redundant. So this gives us a beautiful typological space of possibilities, right? A way of taking the examples we come across and classifying them according to the way they work. I've treated these here as independent variables, but they intersect and cross-cut, and I've got examples of every possible combination you could imagine, specificational redundancy interacting with contributional redundancy. It seems fairly unrestricted in that regard. So, a quick interim summary. All you need to remember is that there are two ways of being redundant: one in terms of contribution, one in terms of specification.
Each of these parameters divides into three logical levels, with parallels across them. And finally, at least according to these traditional typological methods of creating nice matrices of interacting phenomena, it seems completely unrestricted. So that's about as far as we can get with traditional typological methods, which is why I'm now going to shift gears and take you into the world of model-theoretic linguistics. Okay, so what I'm going to do is restate that typology using a model-theoretic model of inflectional paradigms, a model that describes paradigms at all levels of structure simultaneously. The next three minutes, maybe five to be fair, are going to be a bit more hard going, but then things will lighten up again. The thing about model-theoretic approaches is that they're not generative; you're not taking a bunch of inputs and generating a set of outputs. All it is is an explicit description: just a list of statements which, given some set of assumptions, either match the data or don't match the data. And it's going to be grounded in the logic and calculus of set theory, rather than making up some clever new linguistic formalism, which seems to happen every day of the week. So I have to start with a list of assumptions in order to make everything explicit. My first assumption is that we can break words into bits, right? Identifiable formatives; morphemes, if you want to use that word, but it's theoretically loaded, so I avoid it. This is done on distributional grounds by linguists, and linguists know how to do this. We can't teach machines to do it yet, though people are working on it, but linguists do it somehow and agree with each other. And we're going to model this in terms of a position and a formative. So for instance we might say: it's a suffix, and it's y.
These correspond to morphological loci, and formatives which are mutually exclusive are assumed to be in the same position; that is, they're in competition. Syntactic and semantic contexts, which describe the distribution of formatives, are represented by feature values of the type feature:value; we might say number:singular. Again, values of the same feature are assumed to be mutually exclusive: a word is either singular or dual, not both, right? And then here's where things get a bit trickier. The basic assumption is that words are really just pairings of forms and features. That's all a word needs to be; that's all this means. So this just says some word omega is a pairing of a set of formatives, a whole bunch of formatives listed together as a set, which we represent as capital W, associated with a set of feature values sigma. The feature values you might think of as a cell in a paradigm. If you're used to drawing paradigms, that's all it is: just a cell, a set of cross-cutting features. And we say that these features are associated with the word. This is not lexical meaning; it's not like LFG or Minimalism, where we say, well, this morpheme means this. We just want to say that forms are associated with a set of feature values which describe their distributions. A concrete example will help you see exactly what I'm getting at here. So here is our Batsbi verb. It's just the list of all the formatives in the verb, and we say this word is associated with this set of features. So it stands in an A relation to gender two, singular, present tense, and an evidentiality value. We can then define any subset of W as also being associated with a set of features, with respect to that word.
So, for instance, in this word here, which is this whole pairing, we might say that one of the given y's is associated with gender two and singular, based on the distributions that linguists establish using traditional linguistic methods. Not too complicated, right? These are the kinds of things we're talking about. From this basic set of definitions, we can define more complex relationships. So we can talk about lexemes. A lexeme is just a set of words which share a certain property. Doesn't that seem reasonable? So a lexeme lambda is simply all words with some property i. Maybe they all share an infinitive; maybe that's how we decide what a lexeme is. The jury's still out on exactly what a lexeme is, but we can work with it. The reason we need lexemes is that we all know a given formative might have different features associated with it in different words of a lexeme. For instance, a suffix might be genitive or it might be dative, or accusative and genitive, say. So a single formative might be associated with multiple sets of feature values across different words. We can define this formally, with what I'll call the A-lambda relation, the association at the lexeme level. So the A-lambda relation of the Batsbi affix y that we saw before, remember how I showed you it was syncretic across all those values, is the set containing the set {gender two, singular}, the set {gender three}, the set {gender seven, plural} and the set {gender eight, plural}, right? So this is the other sort of relationship it stands in: all of its meanings across all the words in which it occurs. All right, that's it for assumptions. We can move on now and get back to the fun stuff. When we're doing our typology, we need our typological base.
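To make the model concrete, here is a minimal Python sketch of these assumptions. This is my own illustration, not the speaker's implementation; the feature labels and formative positions are simplified stand-ins for whatever is on the slides.

```python
# A word omega is a pairing (W, sigma): a set of formatives and a set
# of feature values. Formatives are (position, form) pairs so that the
# three identical Batsbi y markers remain distinct set members.
# Positions here are invented purely for illustration.
W = {("pre1", "y"), ("pre2", "y"), ("suf1", "y"), ("stem", "STEM")}
sigma = {"gender:2", "number:sg", "tense:present"}

# A_omega: the word-level association relation, mapping each formative
# to the feature values it is associated with in this word.
A_omega = {
    ("pre1", "y"): {"gender:2", "number:sg"},
    ("pre2", "y"): {"gender:2", "number:sg"},
    ("suf1", "y"): {"gender:2", "number:sg"},
    ("stem", "STEM"): {"tense:present"},
}

# Sanity check: every subset of W is associated with a subset of sigma.
assert all(feats <= sigma for feats in A_omega.values())

# A_lambda: the lexeme-level association, i.e. all the feature sets a
# formative takes across the words in which it occurs (the syncretism
# of the Batsbi y affix described above).
A_lambda_y = {
    frozenset({"gender:2", "number:sg"}),
    frozenset({"gender:3"}),
    frozenset({"gender:7", "number:pl"}),
    frozenset({"gender:8", "number:pl"}),
}
print(len(A_lambda_y))  # 4 distinct feature sets
```

Representing formatives as (position, form) pairs is one way to honour the assumption that mutually exclusive formatives share a position while keeping identical forms apart in a set.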
The first thing we need to establish is what counts as included in the typology or not, right? If we're comparing a whole bunch of languages, you need to know where to stop, what to include and what not to include. And with this very basic model, which took me all of five minutes to run through, we can already start to make some incredibly precise typological statements. So here's a definition of multiple exponence; everything that we include in our typology needs to match this definition. All this says is that the expression of some feature value sigma in some word omega is an instance of multiple exponence if and only if, and I'll take you through this bit by bit, there exist two formatives within the word form W, such that (that sign just means 'such that') the feature value occurs within I, where I is simply the intersection of the two sets of features associated with each of those formatives in the word. So basically, all this says is that for something to be multiple exponence, you need two formatives which are both associated with it, right? Seems fairly trivial. And we can now go through and provide proofs that any given example is or isn't a case of multiple exponence. So here's our Batsbi example. I'm going to go through this tediously, it's going to be real tedious, but hopefully you'll see the power of it. So here we have our definition, and our definition needs to match our example here. Here is our word, which is a set of forms and a set of features. We might say, well, let's just take any three formatives at random from this word. Here are the three I've chosen 'at random', the ones I know work, right? Each one of those is associated, at the word level, with the set of features gender two and singular.
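The definition of multiple exponence can be sketched as a small Python predicate over the model from before. This is my own rendering, with invented labels (y1, y2, y3, stem) for the Batsbi formatives:

```python
def is_multiple_exponence(value, word, A_omega):
    """True iff two distinct formatives a, b in the word both have
    `value` in their associated feature sets, i.e. the value occurs
    in the intersection A_omega(a) & A_omega(b)."""
    return any(
        a != b and value in (A_omega[a] & A_omega[b])
        for a in word
        for b in word
    )

# Batsbi: three y agreement markers plus a stem (simplified labels).
batsbi_word = {"y1", "y2", "y3", "stem"}
A_batsbi = {
    "y1": {"gender:2", "number:sg"},
    "y2": {"gender:2", "number:sg"},
    "y3": {"gender:2", "number:sg"},
    "stem": {"tense:present"},
}
print(is_multiple_exponence("gender:2", batsbi_word, A_batsbi))       # True
print(is_multiple_exponence("tense:present", batsbi_word, A_batsbi))  # False
```

Tense is only carried by one formative here, so it correctly fails the test while gender two passes.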
Since gender two and singular are in the intersection of all of these elements, we know that both gender two and singular are multiply exponed here, right? So we can prove whether or not a particular example counts. Okay, so this is our base. Like I said, all of our examples now need to be minimally multiple exponence. And from this, we can create a very careful typology, where every type in the typology involves a single logical alteration to this simple definition. So let's run through each of them, starting with contributional redundancy. Full contributional redundancy: the expression of some category sigma is an example of full contributional redundancy if and only if it's multiple exponence, right? This first bit is just our definition of multiple exponence; just remember it has to be multiple exponence. And the two sets of feature values have to be identical. That's all this says: that A-omega of a is equal to A-omega of b, and that's just this image. And we can go through and prove it. So here's our Batsbi example again. We've already proved that it's multiple exponence. Not only that, but, oh, something's missing there, sorry: A-omega of y1 is equal to A-omega of y2, which is equal to A-omega of y3. Therefore, this is full contributional redundancy. [Audience: Formal identity?] Exactly, no, not formal identity here: the sets of features associated with them are identical, not the forms. I mean, this example is beautiful for so many reasons, but it would be nice if they weren't all the same form. That's a historical accident. Well, maybe not an accident, but a product of the grammaticalisation. The point is that the sets of features associated with them are all identical, right? Because it's the A relation, not the actual form. So that's full contributional redundancy.
Partial contributional redundancy is simply this: the expression of some category sigma in some word omega is an example of partial contributional redundancy if and only if it's multiple exponence, with the additional clause that the set of features associated with the first formative is a proper subset of the set of features associated with the other, right? So let's have a look at that. Here's our definition, and we can visualise it, nice and simple. Here's our word. We need two formatives, this one's a, this one's b, both occurring within the word. Okay, that's simple enough. Not only that, they need to have an overlap to be multiple exponence. Here's their overlap. Not only that, but one is a subset of the other. That's how we define partial contributional redundancy, and the visualisation, I think, helps show exactly what's going on. But we can go through the proof, right? Back to our Turkish example. Here's our word, and we'll take the two suffixes, the du and the k. The du suffix is associated just with the set {past}, whilst k is {first, plural, past}. The intersection of these two sets of features is past tense, right? So this is multiple exponence of past tense, given our definition of multiple exponence. Cool, right? It counts in our typology. But not only that: {past} is a subset of {first, plural, past}. That's how we know this is partial contributional redundancy. And really, we don't even have to do all this tedious proving; we can just map it out like this. Here's our word, here are our two formatives, we write in the feature values, and they sit in this nice little Venn diagram, one inside the other. Simple. And then finally, overlapping exponence. Hopefully you can see where this is going.
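The proper-subset clause translates directly into Python's `<` operator on sets. As before, this is my own sketch with simplified labels for the Turkish word gel-di-k:

```python
def is_partial_contributional(value, word, A_omega):
    """Multiple exponence where one formative's word-level feature
    set is a proper subset of the other's."""
    return any(
        a != b
        and value in (A_omega[a] & A_omega[b])
        and A_omega[a] < A_omega[b]   # proper subset
        for a in word
        for b in word
    )

# Turkish: {past} is a proper subset of {past, 1, pl}.
turk_word = {"gel", "di", "k"}
A_turk = {
    "gel": {"lex:come"},
    "di": {"tense:past"},
    "k": {"tense:past", "person:1", "number:pl"},
}
print(is_partial_contributional("tense:past", turk_word, A_turk))  # True

# Batsbi-style full redundancy fails: identical sets are not
# PROPER subsets of each other.
batsbi_word = {"y1", "y2"}
A_batsbi = {f: {"gender:2", "number:sg"} for f in batsbi_word}
print(is_partial_contributional("gender:2", batsbi_word, A_batsbi))  # False
```

Using the strict `<` rather than `<=` is what keeps the full and partial types disjoint.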
And from the name, you'll know that this is of course the following: the expression of sigma in omega, the expression of some feature value in some word, is an example of overlapping exponence if it is multiple exponence, right? The first two lines, the crucial part. And the intersection of the sets of features associated with those two formatives is a proper subset of both of them. That is, they have an intersection within the word, but each of them is also associated with some other feature value. And we can take any example that we find and, if it's appropriately coded, check whether or not it counts as overlapping exponence, sorry. So here we have our Totonac example. We have our word here, tan-pa-t, which is associated with second person singular progressive. Of the individual formatives, pa is associated with second person and progressive, t with second person and singular. The intersection of these two is second person. Therefore, according to our definition of multiple exponence, the expression of second person here is multiple exponence, so it counts. Not only that, but second person is a proper subset of both of these sets, right? Neither one is a subset of the other. This, therefore, is overlapping exponence. And it looks like this, which is why it's so naturally called overlapping exponence: the two intersect here. Okay, now let me move on to specificational redundancy, and I'm about to get tedious again. To handle specificational redundancy, remember, contributional redundancy was about what we're adding, whereas specificational redundancy is about how sure we are about what we're adding. And so we're going to need a new type of relation in order to get at that, which I'm going to call the A-sigma relation. We might define it formally as follows; informally, A-sigma of a is simply all the values of a given feature across the entire set of a's A-lambda relation.
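The overlapping case, where the intersection is a proper subset of both feature sets, can be sketched like this. The Totonac glosses are simplified from the talk, and I have left the stem out of the word for clarity (the talk notes it marks second person too):

```python
def is_overlapping(value, word, A_omega):
    """Multiple exponence where the intersection of the two feature
    sets is a proper subset of BOTH: each marker also marks
    something the other does not."""
    for a in word:
        for b in word:
            if a == b:
                continue
            I = A_omega[a] & A_omega[b]
            if value in I and I < A_omega[a] and I < A_omega[b]:
                return True
    return False

# Totonac tan-pa-t (simplified): pa = 2nd + progressive,
# t = 2nd + singular; they overlap only in 2nd person.
totonac = {"pa", "t"}
A_tot = {
    "pa": {"person:2", "aspect:prog"},
    "t": {"person:2", "number:sg"},
}
print(is_overlapping("person:2", totonac, A_tot))  # True

# The Turkish partial case fails: {past} is not a PROPER subset
# of itself, so the intersection is not inside both sets.
turk = {"di", "k"}
A_turk = {"di": {"tense:past"}, "k": {"tense:past", "person:1", "number:pl"}}
print(is_overlapping("tense:past", turk, A_turk))  # False
```

The two strict subset checks are what distinguish this from the partial type: here neither formative's contribution is contained in the other's.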
So remember, the A-lambda relation of the Batsbi y affix is the set containing {gender two, singular}, {gender three}, {gender seven, plural} and {gender eight, plural}. From this we can derive two A-sigma relations. The first is the A-sigma-number relation, right? Which is how certain we are about what number value this formative has. And we're not very certain here: it could be singular or it could be plural; we don't know. So that's one set. And we have an A-sigma-gender relation, which is either gender two, gender three, gender seven or gender eight. Each of these will now participate in a different kind of A relationship. So you should be able to see where I'm going with this. Now we're talking about specificational redundancy. The expression of some category sigma in some word omega is an example of full specificational redundancy if and only if it is multiple exponence, right? So first we need to check that it's multiple exponence. And then next, that the A-sigma relations are identical: the feature sets representing how certain we are are also identical. So let's go through our Batsbi example. We've already demonstrated that this is multiple exponence, so we don't need to do that again. But let's look at what kind of specificational redundancy is going on here. The A-lambda relation of this y we've already established, so in terms of the A-sigma-gender association, for each one of the y's, this is it here, right? Since they're all equal, this is fully specificationally redundant in terms of gender. Similarly in terms of number. This will be clearer once we see some more asymmetric cases, such as partial specificational redundancy. In this case, partial specificational redundancy is simply multiple exponence in which the A-sigma relation of one formative is a proper subset of that of another.
That's all that says, and we might visualise it like this, right? Here we have our word, with two formatives whose feature sets intersect, so it's multiple exponence. However, one of those formatives, in terms of its A-sigma relation, could mean some other value which is not relevant for this word, whilst the other's A-sigma relation is just a subset of that. So remember our Wipi example, where the prefix, we said, meant non-singular and the suffix meant plural. Well, the A-lambda relations for these: the prefix means either third dual or third plural, whilst the suffix can only mean plural. We can generate our A-sigma relations from them. So in terms of number, the prefix is either dual or plural, whilst the number for the suffix is always plural. I won't go through proving that this is multiple exponence, but you should be able to see that anyway. And crucially, in terms of the A-sigma relation, one is a subset of the other, right? We can visualise that like this: here's our word; the prefix can mean either dual or plural, whilst the suffix means plural. And then finally, we have distributed exponence, which is multiple exponence except that this time it's the intersection of the A-sigma relations that is a proper subset of both. So it looks like this. And we can go through the proof for our Wipi 'two houses' example. Remember, we had a prefix whose A-lambda is either third dual or third plural, and a suffix whose A-lambda is dual or singular. We generate the A-sigmas: the prefix means either dual or plural; the suffix means either dual or singular. These then intersect at dual, whereas both of them also have an A-sigma value outside the intersection, right? We can visualise that like this.
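The A-sigma relation and the three specificational types can be sketched together. This is my own rendering; the feature strings are invented, and I classify pairs of A-sigma sets directly rather than whole words:

```python
def A_sig(A_lambda, feature):
    """A-sigma: all values of `feature` attested across a formative's
    lexeme-level (A-lambda) feature sets, i.e. its uncertainty set."""
    prefix = feature + ":"
    return {v for s in A_lambda for v in s if v.startswith(prefix)}

def classify_spec(sa, sb):
    """Classify a pair of A-sigma sets for a single feature."""
    if sa == sb:
        return "full"
    if sa < sb or sb < sa:
        return "partial"
    if (sa & sb) and (sa & sb) < sa and (sa & sb) < sb:
        return "distributed"
    return "none"

# Batsbi y: four-way gender syncretism, identical for every y marker.
y_lambda = [{"gender:2", "number:sg"}, {"gender:3"},
            {"gender:7", "number:pl"}, {"gender:8", "number:pl"}]
g = A_sig(y_lambda, "gender")  # {gender:2, gender:3, gender:7, gender:8}
print(classify_spec(g, g))     # full

# Wipi 'three houses': prefix non-singular {du, pl}, suffix {pl}.
print(classify_spec({"number:du", "number:pl"}, {"number:pl"}))  # partial

# Wipi 'two houses': prefix {du, pl}, suffix non-plural {sg, du};
# they intersect only at dual.
print(classify_spec({"number:du", "number:pl"},
                    {"number:sg", "number:du"}))                 # distributed
```

The three branches mirror the three contributional types exactly, just computed over uncertainty sets instead of word-level contributions.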
So hopefully you can see the parallels between these two types of redundancy, and how they are similar yet different. What we've done is taken our traditional typological terms and given ourselves a set of assumptions that people can then agree or disagree with, and we can argue about those. They come with a clear, very precise terminology. We've defined our morphological base. We can see that any given formative participates in an A relation; that for every feature value that's part of its A-omega relation there is a corresponding A-sigma relation; and that these show distinct logical types which correspond to basic logic and set theory, and which classify the entire typological space. So this is not in any way different from what we saw at the beginning, but it comes with a much more precise understanding of how it all works. So hopefully you can see the value in that. To summarise: formally, we can think of redundancy as operating over two distinct relations, A-omega and A-sigma. Contribution and uncertainty. A given formative participates in all of these relations at all times: what it tells us, and how certain we are about that. And these two are clearly linked, I hope you can see, but they're modelled here as independent variables. Okay, here's where we're going with this, right? Now that we have an explicit definition of the typological space, we can start to, well, hopefully you can see how far we've come. At the beginning, we simply said, oh well, redundancy is the situation where one part of the system can compensate for the loss of another, and I had this fun diagnostic where I covered up bits on the board. That's good. I mean, that gets us a long way, right? It gets us into the data. But it's not very precise.
Now we can be extremely precise, and we can define exactly what I mean when I say redundancy. So here is my definition of contributional redundancy. A formative A in a word is contributionally redundant if the set of features associated with A, given that word, is a subset of the union of the feature sets of all the other formatives in that word. So take all of the other formatives in the word, and all the features associated with them: if this one's features are a subset of that, then it's the redundant one, right? Whichever is a subset is what's redundant. And then we can very intuitively turn this into a quantitative measure. We simply take a ratio: the ratio of the redundant feature values of a formative to the actual feature values it has, so what's covered elsewhere over what it contributes. And we can go through this. In the case of our Batsby example, each marker had two redundant feature values out of two feature values, right? So each one is two out of two: each one is fully redundant. We already knew that, we established that; now we can measure it. In the case of our partial redundancy, well, the one that's the subset has one redundant feature value out of one: its single function is covered by the other, so it's one out of one redundant. Whilst the suffix that marked past, first person and plural, well, only one third of it is redundant: one out of its three functions is covered elsewhere. That seems fairly intuitively simple. And again, with our overlapping exponence, each one is half redundant: half of it is covered by some other element, but the other half is not. Seems fairly trivial. Things get a bit more complicated when we're talking about specificational redundancy, which is in many ways the exact opposite.
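That ratio can be sketched in a few lines of code. Again, this is my own illustrative encoding, with placeholder feature labels for the examples in the talk, not the speaker's implementation:

```python
def contributional_redundancy(formative: set, others: list) -> float:
    """Share of a formative's feature values that are also contributed
    by some other formative in the same word (1.0 = fully redundant)."""
    covered_elsewhere = formative & set().union(*others)
    return len(covered_elsewhere) / len(formative)

# Batsby-style agreement: three identical gender/number markers,
# so both of each marker's feature values are covered elsewhere
marker = {"gender2", "singular"}
print(contributional_redundancy(marker, [marker, marker]))  # 1.0

# Partial redundancy: a suffix marking past, first person and plural,
# where only "plural" is also marked by another formative
print(contributional_redundancy({"past", "1", "plural"}, [{"plural"}]))  # approx. 0.33
```

The overlapping case works the same way: a formative sharing one of its two values with another marker comes out at 0.5, half redundant.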
I think this is where it gets interesting, because you'll see that for specificational redundancy, a formative A is considered redundant if the set of its A-sigma relations, its uncertainty set of feature values, is a superset of the intersection of all the other formatives' feature values in the word, right? So in this case, it's the big one that's redundant: the one that's unsure between dual and plural, and occurs alongside the plural marker, is the redundant one, or the more redundant one. So we model this as what we know after we've included everything, over what we knew before. I'll run through how this works. If we wanted to measure the specificational redundancy of our Batsby y- affix with regard to gender, we might apply this formula. The A-sigma relation for gender for the y- affix is this set here, which we can then plug into our formula. Now we need to know what the intersection of all the other formatives is in terms of gender. Well, the intersection of this one and this one, because they're identical, remember, is just the same set again. So we put it in there, we solve this simple intersection of two identical sets, which is the same set again, and it's four over four if we just count them up. That's how we know that this affix is specificationally fully redundant: we've proved it. Now I'm going to skip ahead and go straight to distributed exponence, where things are slightly more complicated. Remember, this is with our non-singular and our non-plural markers; here are our two sets of A-sigma relations for those. So if we're talking about the specificational redundancy of the prefix, we put the prefix in there, and we now need to work out what the intersection of all the other formatives in the word is, and there's only one other one that's relevant.
It just goes in there, and then that goes in there, and now we need to solve it. What's the intersection of dual-and-singular and dual-and-plural? Well, it's just dual. So this prefix, again, is half redundant. The measure comes out with the same value, but the definition is inverted in this case, right? That's really the crucial point. And now we can go through and provide detailed quantitative measures of any given formative in any example, once it's coded up properly. So, I mean, the traditional qualitative typology of saying, well, this language is a little bit different from that one, that's useful. But now we can say not only that it's a little bit different, but how much different, right? I'm getting ahead of myself, so let me give a quick summary. We've seen, hopefully, that structural redundancy is clearly definitional here: we have these phenomena which just mark the same category multiple times for no good reason. But there are two different ways in which they display gradience, in terms of what they contribute and how certain we can be about them. That's interesting on its own. Formal modelling shows that these two types of redundancy operate on distinct but simultaneous relations: they occur at the same time. And the ways we measure redundancy on these two parameters look equivalent: the two formulae seem basically the same on the surface, but conceptually they're exactly the opposite of each other. So I think these kinds of statements really allow us to show that we understand what's going on here. And for me, this is really exciting, because my big question coming into this was: why the hell would anybody do this?
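Here is one plausible reading of the specificational formula just walked through, in my own illustrative encoding: the candidates left after including the formative, over the candidates the other formatives had already left, so 1.0 means the formative narrows nothing down at all.

```python
def specificational_redundancy(formative_sigma: set, others: list) -> float:
    """What we know after including the formative, over what we knew
    before, i.e. from the intersection of the other formatives'
    A-sigma sets (1.0 = the formative resolves no uncertainty)."""
    before = set.intersection(*others)  # candidates left by the others
    after = formative_sigma & before    # candidates once A is included
    return len(after) / len(before)

# Batsby gender: the markers share the same four-way A-sigma set,
# so the calculation comes out "four over four"
genders = {"g1", "g2", "g3", "g4"}
print(specificational_redundancy(genders, [genders, genders]))  # 1.0

# Nen distributed exponence: {dual, plural} against {dual, singular}
# intersects only at dual, so the prefix is half redundant
print(specificational_redundancy({"dual", "plural"}, [{"dual", "singular"}]))  # 0.5
```

Note how the subset test runs in the opposite direction from the contributional measure, which is exactly the inversion the talk points out.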
If you were designing a language, why would you do this? Say I want to talk about two houses: it's as if I had to say, I want to build more than one but fewer than three houses. It's just not a very efficient way to speak, right? And most languages don't do it that way. So how does such a system evolve? How do we get there? Well, I don't have the answer to that; I've spent two years just working this out. But now that we have these as independent variables that we can operationalize, we can look at correlations between grammar and cognition, you know? Are these actually independent things? Do we see stability of this over time? Do we see a reduction of it? Do we see redundancy correlated with rates of change, or amenability to change? Every linguistic theory you read says something about economy of expression, or monotonicity, or markedness, or, you know, the more you use it, the simpler it gets. Well, what does that mean? In genetics, redundancy typically means that if something's not performing a function, it's not kept around; it's easy to change, free to change, right? Which is not how linguists think of it. But now we can actually go in and measure these things. And maybe it has some other benefit: maybe it helps with memory or cognition or other things. So these are the things I'm going to do next, really. Hopefully I can get some money to do that. So, here are some boring references, and thanks, guys, and thanks to my funders. Thank you all for listening. Yeah, that was a whirlwind. Yeah. Yeah, I mean, that's a huge question too, right? Because agreement is essentially redundancy when it's obligatory. I thought I would start small and work my way up. The nice thing about morphology is that it's typically obligatory, so optionality isn't playing a role there. Because often, you know, what's called agreement is really not agreement.
The strictest examples of agreement are where you've got, say, an adjective agreeing with a noun, and you have to have both of them. But typically, when you have verb agreement marking arguments, the arguments themselves are often optional, and their presence often also serves some other function, like information structure. So the presence or absence of those other elements is a whole other variable. The other question could be about long-range dependencies, right? I'm not sure if you're familiar with conditional entropy: if you've got a level of uncertainty somewhere, maybe some other piece of information is resolving it for you, and so you can handle a bit of uncertainty in the system. One question I'd really love to ask eventually is: is that a thing? Are there long-range dependencies, and how big a clause can you get before that collapses? So the answer to the question is no, not yet, but it'd be cool to do eventually. And it's a really unresolved question. Particularly with this Batsby example: you get the marker three times in the verb, but then, I don't know if you know much about Nakh-Daghestanian languages, every word in the sentence marks agreement with the absolutive argument, which is, yeah. Yeah. Oh, yeah, in the Batsby example, yeah. Yeah, they are identical here. In my typology, you could have an additional parameter for whether or not the forms are identical, but I don't think identity of form makes a difference for redundancy, right? Though it is relevant for multiple exponence, I suppose. Yeah. Okay, you asked whether you can leave one off. You can't, no. You can't leave any off; it's just ungrammatical.
So in this case, what you've got here, I think, is that the evidentiality marker comes with its own agreement marker, so normally you would just have, I think, something like that, maybe. Yeah, sure, but you would still have the two y's, right? So that would be just like 'she is ripping the dress and I saw her'. So practically, what's going on with this thing? Historically, that's exactly right: there were auxiliaries which marked this, and they all marked agreement, and then they've all been crammed together. That's how you get this system, and that's where this full redundancy comes from. Okay, clear. Yeah, exactly. This is from Alice Harris's book, and she has a whole chapter on how you end up with this. To be honest, these are the ones, I mean, I've had this conversation with Alice about. Yeah, exactly, yeah. Just like, why bother, yeah. That would be perfect. Yeah. In Batsby. In Batsby. Yeah, in Batsby. Basically. In Batsby. Yeah, or, if it's not the evidential marker, it's like that. But yeah, it's similar in Andi and Archi. No, I think that's as big as it gets, yeah. I don't know, though, I'm not sure. But yeah, I mean, it's such a beautiful example, because on top of that you've got all of this complex syncretism going on and this crazy gender system, yeah. But it's partial redundancy of a sort. Yeah, that's right. Because the idea is that all of them are redundant on some level, and that's the general assumption. So, I mean, I guess the motivation for really laying this out is that, apart from the current book, the typical assumption when morphologists talk about multiple exponence is that it's a bit like agreement: they just think, oh, you're doing the same thing again and again and again. But actually, in practice, it's quite scalar and gradient. And I think that's the way in, right?
This is how we'll get to the solution for understanding how this is stable over time. Yeah. Yeah, well, exactly. I mean, that's still an unanswered question. But we know these systems are quite stable, right? Agreement systems in European languages go back all the way. But at least the psycholinguistic evidence within the word has shown no benefit to memory, recall or processing from having multiple exponence; in fact, it makes things worse, and what matters more is whether a marker is a suffix or a prefix. Whereas psycholinguistic experiments on agreement, I think, have shown that it makes things quite a bit faster for the hearer. So maybe there's an interesting divide there between within the word and beyond the word level. But, yeah. That's understandable. Yeah, so. It's not the word, it's the syntax. Yeah, yeah, yeah. Yeah, totally. So if you would like to read my PhD thesis, I have a whole section on this. In Ngkolmpu, for instance, pronouns mark person and agreement marks number. So it's not really the same sort of thing; that's a slight simplification, but it's a rather interesting split. You typically expect agreement and pronouns to be doing the same sort of work, but in this case you've got this split across the system. So, yeah, sure. There is an extra syntactic marker, and this extra bit is what seems to be redundant. Exactly, yeah. It seems to be a combination of morphology and syntax. Yeah, no doubt. And it's actually quite complex: if you want to say 'Michael saw the house' or 'he saw the house', you get 'he saw the house'. Yeah. Sure, yeah. Whereas if you want to say 'John saw Michael in a house', you get 'John saw him in a shop, Michael'. Yeah. So it's a bit like the Turkish example in some ways.
One in which you have a certain amount of overlap, but also a certain amount of distinction. And in this case it's conditional: in Turkish it's conditional on the particular verb, not all verbs do this; here it's conditional on animacy. And so you actually find a whole bunch of other conditions on these sorts of systems: distinctions of animacy, distinctions of type of relative versus non-relative. Yeah, yeah, exactly. So there are just conditions upon conditions sometimes. Yeah. But there's no reason why, fundamentally, the methods I've demonstrated here today can't be used to untangle these exact same complications, right? With some more subsystematic distinctions. Yeah, that's right, yeah. So the first step is learning how to be really precise about describing it, because it's easy to just say, well, there's some redundancy here, and that's what's typically happened in the literature on this sort of stuff. Yeah, absolutely. But people haven't, exactly, yeah. And I think it's a real problem. I mean, yeah, how do you explain it? It seems, I always say, it seems dumb. I unfortunately don't, but I'm sorry. But you can go on my website and get stuff if you like, which is matthewjcaroll.com. There you go, yeah, cool. Yeah, nothing more. Thanks for coming.