Yeah, so I was actually on my way here, and I had worked so hard on this Beamer presentation. In fact, I even got the aspect ratio right. But it was like in Star Wars, when Luke is going down the trench and he turns off his targeting computer, and everyone's saying, what are you doing? What are you doing? I don't know, I just had the feeling I shouldn't do a PowerPoint. So I'm just going to talk about what I'm doing. If you want the charts, if you want any of the specifics, I'll send them to you if you email me, but I think it might actually make more sense if I just explain it to you this way.

Okay, so quantifier scope is all fun and games. What do I mean by that? In case you don't know what quantifier scope is, or you've already forgotten, there are things in languages called quantifiers. There are universal quantifiers: every, each, all. There are existential quantifiers: a, some, one, or something like that. There are many, few, lots of different ones. They have specific semantic properties, but the interesting thing is that when you have multiple quantifiers in the same sentence, sometimes you get ambiguity. For example, if you have a sentence like "every arrow hit a target," every and a are both quantifiers, and that sentence has two interpretations, right? You can have a situation where we're all firing arrows, each one of us has an individual target, and all of our arrows are hitting. That's one reading, and it's acceptable. But "every arrow hit a target" could also mean everything is hitting one particular target, that one right there. So that, of course, is quantifier scope ambiguity.

There are a couple of interesting things about that. In English, a sentence like "every arrow hit a target" can be ambiguous. But if you passivize it to make "a target was hit by every arrow," that really forces you into the interpretation where there's only one target and every arrow is hitting it. So that's the English situation, and people have debated it for decades. The other fact you need to know is that there are languages called scrambling languages, which basically have free word order. The interesting thing about scrambling languages is that they tend to allow surface scope and nothing else. So in a scrambling language, "every arrow hit a target" forces the interpretation where it's not necessarily one target. The single-target situation is obviously logically consistent with that reading, but it's not an interpretation you get forced into. So those are the two facts: you have this disparity in English, and you have a forced reading in a scrambling language.

Now, my approach. There are a couple of problems with using traditional syntactic tools for this. First off, quantifier scope is very sensitive to linear order. If any of you were at Noam's talk yesterday, or basically any of Noam's talks ever, he talked about the fact that language is not supposed to be sensitive to linear order. It's sensitive to hierarchical structure; linear order is not something we expect the narrow language faculty to be paying attention to. But quantifier scope is sensitive to it. So whatever is going on with quantifier scope might be something different from the narrow language faculty. In the same way, if you look at any quantifier scope sentence for long enough, you will get every single reading. It just happens. It happens in every syntax class. Everyone knows this. If you stare at it, you'll get the reading.
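Just to pin down the two readings of the arrow example, here they are as standard first-order formulas; the notation is an editorial addition, not from the talk:

Surface scope, every over a: \forall x\,[\mathit{arrow}(x) \rightarrow \exists y\,[\mathit{target}(y) \wedge \mathit{hit}(x,y)]], possibly a different target for each arrow.
Inverse scope, a over every: \exists y\,[\mathit{target}(y) \wedge \forall x\,[\mathit{arrow}(x) \rightarrow \mathit{hit}(x,y)]], one target that all the arrows hit.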
So what I originally planned to do with this basic account is recast quantifier scope in terms of, I don't like the word pragmatics, but it's basically that: pragmatics. The tool I used for this is what's called game theory. It's common in economics, it's common in decision theory, and it's actually exactly what it sounds like: it was originally developed for analyzing games, poker, things like that. You have players in the game, each player has different strategies or decisions they can choose from, and everyone gets a payoff based on which strategies everyone chose. So you can take something like rock-paper-scissors or tic-tac-toe; all of those can be analyzed game-theoretically. You can give them all payoffs: win, lose, draw, something like that.

So basically I model language, or quantifier scope, as the same kind of thing. You have players: a speaker and a hearer. The goal of the game is basically to communicate the intended scope interpretation. Each player has strategies. The speaker, depending on what language he speaks, can use active sentences, passive sentences, clefts, scrambled structures; you can move things around differently in different languages, so different languages give you different strategies. The hearer hears what the speaker says and tries to determine the intended meaning, so he can try to interpret it with surface scope or inverse scope, et cetera, et cetera.

Long story short, my model treats passivization, clefting, all the things that are traditionally called transformations, as being costly in some way, because they're more difficult to process, or they're more marked in some general sense. We can debate the specifics, but that's all the model actually needs as input, some kind of cost. To put it in intuitive terms: why should a passive sentence in English have only one interpretation? Well, if passivization is marked, then when hearers hear it, they infer that the speaker is doing this for a particular reason. And that reason is usually to reorder the constituents into the order that lets the hearer use surface scope, which is a lot easier and more intuitive to interpret. So that's the intuition behind English.

Now, if you work through the game theory, and again, you can email me for the charts; there are a lot of them, and they look really smart, but they're not really complicated, one of the things you get is that in languages like English, which are syntactically rigid, ambiguity arises because speakers don't have a perfect choice for everything. Some sentences are going to be ambiguous because we can't just move things around, so you have to allow a particular sentence to have multiple interpretations. Now, in a language like Persian or Chinese or Japanese, where you can move constituents around more easily, the convention gradually arises that you should just interpret everything with surface scope: the speaker puts the constituents in the order they intend them to be interpreted in, and hearers always just settle on surface scope. In game-theoretic terms, that's called a Schelling point.
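Here is a minimal sketch of the kind of game being described, just to make the two regimes concrete. The framing and every number in it, the reordering cost, the inverse-scope cost, the reward of 1 for successful communication, are illustrative assumptions, not values from the talk:

# A sketch of the scope-signaling game: two intended readings, two quantifier
# orders, a cost for reordering and a cost for inverse-scope interpretation.
# All cost values are illustrative, not from the talk.
from itertools import product

MEANINGS = ["every>a", "a>every"]          # intended scope readings
FORMS = ["every-first", "a-first"]         # surface order of the two quantifiers
SURFACE = {"every-first": "every>a", "a-first": "a>every"}  # scope matching each order

def avg_payoff(speaker, hearer, reorder_cost, inverse_cost):
    """Joint payoff of a strategy pair, averaged over intended readings.

    speaker: dict reading -> form (which order to utter for each intended reading)
    hearer:  dict form -> reading (how to interpret each order)
    """
    total = 0.0
    for reading in MEANINGS:
        form = speaker[reading]
        guess = hearer[form]
        total += 1.0 if guess == reading else 0.0   # reward for successful communication
        if form == "a-first":
            total -= reorder_cost                   # marked / reordered construction
        if guess != SURFACE[form]:
            total -= inverse_cost                   # inverse scope is harder to compute
    return total / len(MEANINGS)

def best_strategy_pairs(reorder_cost, inverse_cost):
    speakers = [dict(zip(MEANINGS, f)) for f in product(FORMS, repeat=2)]
    hearers = [dict(zip(FORMS, m)) for m in product(MEANINGS, repeat=2)]
    scored = [((s, h), avg_payoff(s, h, reorder_cost, inverse_cost))
              for s, h in product(speakers, hearers)]
    top = max(score for _, score in scored)
    return [pair for pair, score in scored if abs(score - top) < 1e-9]

# Flexible ("scrambling") construction: reordering is cheap, so the winning
# convention pairs each reading with the order whose surface scope matches it.
for s, h in best_strategy_pairs(reorder_cost=0.05, inverse_cost=0.3):
    print("flexible:", s, h)

# Rigid construction: reordering costs more than disambiguation is worth, so the
# speaker uses one order for both readings and the string stays ambiguous.
for s, h in best_strategy_pairs(reorder_cost=1.5, inverse_cost=0.3):
    print("rigid:   ", s, h)

The only thing doing real work here is how the reordering cost compares with the value of disambiguating; the tied "rigid" winners differ only in how the hearer would treat the order that never gets used.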
But basically, in languages where you can move things around, you interpret everything with surface scope. The interesting thing about this is that it's not a parametric account of differences between languages; really, it's construction-specific. What I mean by that is that there are some constructions in English that are syntactically rigid, and there are some that are flexible. Same thing in a language like Persian: normal word order might be flexible, but there are some things, like negation, that aren't flexible; it basically always has to be in the same place. So the idea is that it's not that different languages have different parameter settings. What the model predicts is that in specific constructions, scope depends on whether you can move things around or not. It's relatively simple, and that's basically what happens, right? For example, in Persian you always get surface scope, unless we're talking about something like negation scoping with respect to an object, where the negation always has to be in one place and you can't move it around. In that situation you actually do get ambiguity, and for a reason.

So the takeaway, the empirical correlation, is just that syntactic rigidity causes ambiguity. And this has actually been noticed in the literature before. Ryan sent me an article, it was Bobaljik, right? Noticing the same thing. Of course, he's coming from a more traditional generative perspective, and he has a totally different account to deal with it. But the empirical generalization, I think, holds pretty strongly: specific constructions show ambiguity when they're rigid, and when they're flexible, they don't.

So right now, myself, Ryan, GMP, I saw him a second ago, there you are, and Robert are working on a project that has two parts. The first part is going to be experimental: we want to actually test the empirical predictions, like, is it actually the case that you always have ambiguity where you have rigidity? It's probably going to run on Mechanical Turk, and we're going to have five different languages: English, Persian, one Romance language we haven't decided on yet, Japanese, and, let's see, the other one, Chinese. Yeah, I thought I said that already. But yeah, five languages. The design is going to be multi-factorial: there are going to be different languages and different constructions, but we don't want either of those, per se, to be what has the effect. Our prediction is that whether the specific construction is rigid or not is the important factor; it doesn't differ by language or by construction per se, but by whether the construction is rigid. So that's the empirical side, and hopefully we'll find good results.

The second part is an expansion of the game-theoretic account. I've basically been using a formalism of game theory that's 30 or 40 years old, so we're going to throw some Bayesian stuff on it, partly just to make it look good. We're hoping to move to a more, I don't know, advanced analysis of it.
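The talk doesn't spell out what the Bayesian version will look like, so the following is only a guess at the general direction, with made-up numbers: replace the explicit markedness deductions with a prior over readings plus a likelihood for the speaker's choice of word order, and let the hearer infer the reading from those.

# A guess at a "Bayesian hearer" for the same game. The prior, the likelihood
# tables, and all the numbers are invented for illustration only.

PRIOR = {"every>a": 0.7, "a>every": 0.3}            # assumed prior over readings

# P(form | intended reading): a rigid construction forces one order,
# a flexible one lets the speaker pick the scope-transparent order.
LIKELIHOOD_RIGID = {
    "every>a": {"every-first": 1.0, "a-first": 0.0},
    "a>every": {"every-first": 1.0, "a-first": 0.0},
}
LIKELIHOOD_FLEXIBLE = {
    "every>a": {"every-first": 0.9, "a-first": 0.1},
    "a>every": {"every-first": 0.1, "a-first": 0.9},
}

def hearer_posterior(form, likelihood):
    """P(reading | form) by Bayes' rule: prior times likelihood, renormalized."""
    scores = {r: PRIOR[r] * likelihood[r][form] for r in PRIOR}
    z = sum(scores.values())
    return {r: s / z for r, s in scores.items()}

# Rigid construction: hearing "every-first" adds nothing beyond the prior,
# so both readings stay live -- that's the ambiguity.
print(hearer_posterior("every-first", LIKELIHOOD_RIGID))
# Flexible construction: the same string now strongly signals surface scope.
print(hearer_posterior("every-first", LIKELIHOOD_FLEXIBLE))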
So those are our goals, I guess. Did I miss anything? Should I say something else? I think that was most of it. Really? I have that much time? We're going to sign it, we're going to do a speech. Oh, well, see, I'm just thinking about French, because French is more consistently rigid, and we only have a bit of English, isn't it? Robert speaks English, so. Hello. That doesn't, well, just because it's easier doesn't, I don't know. I'll come back to this later. So, I don't know, does anyone have any comments? I guess that's the end of the presentation. We have about five or so minutes for questions. Okay, great. Yeah.

Well, why not parameters? It seems to me, you know, you have parameters there. I mean, with all these parameters, why are you rejecting them? We don't need parameters. Well, it depends on your definition of parameters; I'll just put it this way. There are two alternatives. Language-specific parameters wouldn't work, because what we're arguing is that specific constructions are the things that condition the different scope interpretations, and that's actually how it is. So if you have parameters, you basically have to have a parameter for each specific construction, and that would be very theoretically cumbersome, whatever. So I think it's easier just to, well, our analysis has the advantage that it actually motivates what's going on, and it does account for all the differences, and we don't need to say, oh, these forty different distinctions are forty different parameters. So that's the advantage. Yes.

Also, as a reply to Massimo: you mentioned Mandarin, but I don't believe Mandarin is a scrambling language. It does have that rigid scope thing you're talking about, and it does have topicalization, but that's not the same thing as scrambling. So, yeah, that's a good point to bring up. I use the word scrambling just to mean, well, what the paper means is flexible word order, because for our account you could have a different story of what actually motivates the movement; we're not trying to make syntactic claims about it. All that matters is that you can move things around. You might call it discourse configurationality. It doesn't actually matter for our account. So yeah, you're right in the technical sense; it's just that I'm lazy, so I use the word scrambling. No, but that makes sense, because then, when you're talking about specific constructions, it actually captures that generalization better, at least in that sense. So, just to be clear, oh, and the other thing I should say is that the theory is not necessarily a theory of syntax, in that I'm not saying why languages have particular constructions. I'm just saying, given that languages have particular constructions, how do they cause particular scope interpretations to come about? So that's it. Yeah.

I have another question. We discussed this online earlier. Of course. What about payoffs? How do you introduce payoffs into this? So, the payoffs are pretty simple. Both the speaker and the hearer get a big payoff if the intended scope is correctly communicated; that's the point. The only other part of the payoffs is deductions if you use a marked construction or if you have to use inverse scope. So that's it; that's all you need, and that derives basically everything I just mentioned. And as I mentioned, we're trying to do things in a more Bayesian way, where we might just have initial probabilities and not have to make reference to, you know, the decrements for passivization or something. Strategies? What's that? Yeah, yeah, yeah. Something like that. Possibly even something evolutionary, but that's just because we want to. Sounds smart. Any other comments, sir? Yeah.
Have you ever noticed that you have the same name as Luke Skywalker? I have noticed that. No, no relation. Any other questions for Luke? Alright, so let's thank him.