Christmas time, and somehow I ended up with two by mistake, so I have one at the office and one at home. It actually works out pretty well. So my talk is titled "Language as Synesthesia." I guess it's a little more theoretical and a lot more linguistic than some of the other things, so I've tried to make it accessible to those of you who don't have a huge formal background; if anyone is interested in more detail you can ask. Shannon has heard some of this before because it's in my prelim, but it sounds better now. Synesthesia, everyone loves that. So, language as synesthesia. First, a brief construal of the field of linguistics up to this point. Classic generative grammar began as a formal analysis of language, working with phrase structure rules that were essentially a Post system: you have a formal system that generates syntactic structures, and that's where the generativity of language comes from. So there's a modular view of language, and the implicit goal is to understand what universal grammar is, universal grammar meaning whatever traits exist in language that are not built from other cognitive faculties. More recently there's an often-quoted article by Hauser, Chomsky, and Fitch on language evolution, which coined the terms "faculty of language in the broad sense," meaning everything about cognitive life that plays into language, as opposed to the "faculty of language in the narrow sense," which is whatever universal grammar actually is, whatever little piece makes human language distinct. For them it's Merge, but we'll talk about that a little later.
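As an aside for those without the formal background: the "Post system" flavor of early generative grammar can be sketched in a few lines of code. This is a toy illustration with an invented mini-grammar and vocabulary, not any analysis from the talk; the point is just that a small set of rewrite rules, applied recursively, generates an unbounded set of structured strings.

```python
import random

# A toy phrase-structure grammar: rewrite rules applied top-down until
# only terminal words remain. Rules and vocabulary are invented.
RULES = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"]],
    "VP":  [["V", "NP"], ["V"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"]],
    "N":   [["dog"], ["park"]],
    "V":   [["saw"], ["ran"]],
    "P":   [["in"]],
}

def generate(symbol="S", depth=0, max_depth=6):
    """Recursively expand a symbol into a list of words."""
    if symbol not in RULES:                      # terminal: an actual word
        return [symbol]
    options = RULES[symbol]
    # near the depth bound, take the shortest expansion so recursion ends
    rule = min(options, key=len) if depth >= max_depth else random.choice(options)
    words = []
    for sym in rule:
        words.extend(generate(sym, depth + 1, max_depth))
    return words

print(" ".join(generate()))  # e.g. "the dog saw the park"
```

Because NP can contain a PP, which contains another NP, the same finite rule set yields arbitrarily long sentences; that recursion is the "generativity" at issue.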
So during the 70s and 80s, of course, the field saw a proliferation of constraints and stipulations; syntax discovered all these little things, presumably in universal grammar, that made syntactic theorizing a little messy, because you had all these different stipulations, things that weren't necessarily based on any other factor. During the 90s the minimalist program arose, and the idea behind it was to take the syntactic traits of language and explain them in terms of external factors, external meaning the interfaces, things like prosody or phonology or semantics, or what Chomsky (2005) calls the third factor. So there's nature, there's nurture, and there's the third factor, which is supposed to be things like laws of form, well-formedness in general, or sometimes complex interactions between the other two factors. So UG in this picture should be minimal; it should be something highly restricted, as opposed to just coining a bunch of formal traits you think are in UG.
Now, as a parallel to that, phonology developed in a somewhat different way. Classically phonology was rule-based too; you had The Sound Pattern of English and all these attempts at more or less rule-based analyses of sound. But in the early 1990s this changed. Paul Smolensky came to the field in an indirect way, from a connectionist background, and his general idea was that you don't need these kinds of rules. To account for phonological variation you can just have constraints, and the constraints take care of themselves in a neural net; you don't actually have to posit any of this UG machinery, you don't need different rules, you can just have optimality. That gave rise to Optimality Theory, which is pretty standard in phonology now. Originally the idea was that the constraints that define phonology sit in universal grammar in some way, but more recently people have tried to word those constraints explicitly in terms of external effects. For example, there are phonological constraints along the lines of: it's hard to say voiceless sounds and voiced sounds in sequence, because it's hard to modulate your voice back and forth, voicing, not voicing, voicing, not voicing. These things fall out from some physical fact about how the mouth is constructed, so presumably such constraints aren't necessarily in universal grammar; they fall out from the actual reality of speech. And the goal is that we don't want phonology to exist as a thing in itself; all of the traits of phonology should actually come from these external factors.
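The evaluation logic of Optimality Theory can be sketched concretely. This is a toy illustration, with schematic constraint names and invented violation counts rather than a worked analysis of any real language: candidates for realizing an input compete, and the winner is the one with the lexicographically best violation profile under the language's constraint ranking.

```python
from typing import Dict, List

# Candidate outputs for a hypothetical input /bed/, with violation counts
# per constraint (counts are invented for illustration).
CANDIDATES: Dict[str, Dict[str, int]] = {
    "bed": {"*VoicedCoda": 1, "Max": 0, "Ident(voice)": 0},  # fully faithful
    "bet": {"*VoicedCoda": 0, "Max": 0, "Ident(voice)": 1},  # final devoicing
    "be":  {"*VoicedCoda": 0, "Max": 1, "Ident(voice)": 0},  # deletion
}

def optimal(candidates: Dict[str, Dict[str, int]], ranking: List[str]) -> str:
    """Pick the candidate whose violation profile is lexicographically
    smallest when constraints are read from highest- to lowest-ranked."""
    return min(candidates, key=lambda c: tuple(candidates[c][k] for k in ranking))

# A German-like ranking devoices the final consonant...
print(optimal(CANDIDATES, ["*VoicedCoda", "Max", "Ident(voice)"]))  # -> bet
# ...while an English-like ranking tolerates the voiced coda.
print(optimal(CANDIDATES, ["Ident(voice)", "Max", "*VoicedCoda"]))  # -> bed
```

Reranking the same constraints flips the winner, which is exactly how OT models cross-linguistic variation: the constraints are shared, only the priorities differ.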
Now, for some reason this minimalism hasn't been as possible in traditional generative grammar. Minimalism is the goal, but there are some holdovers from an earlier time. Traditionally the idea is that the syntactic engine produces strings, or structures, and they're interpreted at two interfaces, the phonological interface and the semantic interface. This is a problem, because the syntactic engine is in a way blind to these interfaces; they come after syntax does its thing. So if you want Merge, or syntax, to build truly minimal structures that only work in terms of external constraints, Merge has to have the answers written on the back of its hand: it has to know I can put this here, I can put that there. The usual way of implementing this is to put all the answers in the lexicon, so you say, for example, English is a VO language because there's some abstract head feature that always wants an object on a particular side, or something like that. Frankly, I think that's kind of cheating, but anyway, I'll show you my alternative. The minimalist ideal in syntax, regardless, is a theory where the narrow language faculty is something small, maybe even nothing, maybe something epiphenomenal; where the language faculty can see all interface constraints simultaneously, so it can look at phonology, it can look at semantics, and we'll talk about that in a bit; and where the actual mechanism of the narrow language faculty is at home in general cognitive architecture. If you want a truly minimal system, you want the complexities of language to be dealt with by other cognitive apparatuses, or whatever the plural of that word is. And of course it has to be interesting, because otherwise
I wouldn't be interested. So, synesthesia. I assume most people know what synesthesia is, but in case you don't, it's a kind of mental condition where you might see a smell, or smell a shape; it's a crossing of different media in your sensory-motor system. A lot of people see colors on numbers: whenever they see a number, it appears as some particular color. In some extreme cases, whole mental faculties seem to merge with each other. Take the example of Daniel Tammet. He's an autistic savant who became famous when the BBC did a documentary about him, maybe a decade ago. He's capable of doing these crazy calculations: you give him numbers and he'll divide them, multiply them, whatever you want, and he can recite pi to something like ten thousand digits. He actually reports this as being a kind of synesthesia. Most autistic savants can't report on their experience, but he's very high-functioning, so he can, and he says that when he's doing math he sees each number as a kind of object with a color and a shape, and when he does math with those numbers, they physically move together and combine to make the new number. So what seems to happen, and I put too many words on the slide so you can't see them, is that his spatial reasoning, the human ability for spatial reasoning, which by all accounts is pretty good, has been co-opted to do this other thing, numeracy. That's what I'm getting at here. When I say language is synesthesia, I'm saying that language results not from a new operation, so not something like Merge or set formation as Hauser, Chomsky, and Fitch proposed, but from a kind of synesthesia between two interfaces: the sensory-motor system, the thing you pronounce things with, which has all the
prosodic and phonological constraints because you're externalizing through it, and the conceptual-intentional system, which really just means semantics, your semantic processing. In other animals these are distinct cognitive systems that evolved to do their own things, but humans have this distinct ability to apply the categories of one system onto the other and back. So while most of the conceptual-intentional system is non-conscious, if it's mixed with the motor system in this way, what effectively happens is that through language, through this kind of synesthesia, we can externalize these interior thoughts, not just to other people but to ourselves: we make them available to our conscious mind. So the claim, in general, is that humans are basically high-functioning autistic-savant apes. Instead of smelling colors, our motor system spills out our ideas, and our ideas can then be processed in our sensory-motor system. And like synesthesia, you don't have a choice. I think we were talking about this a couple weeks ago: you can't not interpret English when you hear it, you can't not interpret whatever language you hear, it just has to happen. Same with people who have synesthesia: if you perceive fives as red, you're going to see the five as red. Five seems like a very red number to me, and I don't even have synesthesia. So there's no need for a formal narrow language faculty per se, no extra operation. It's not so much minimalism as nihilism: you don't have a new operation, you just have different faculties merging. But of course with this synesthesia your brain has a hurdle to overcome, which is that the motor system and the conceptual-intentional system are formally different, so their differences have to be mediated, presumably by general cognitive architecture, similar to Optimality
Theory in phonology; I'll get into that. So, in general, when you look at semantic structure, or at the structures generated by Merge, and I don't necessarily lend much credence to that, the structure of semantics is more or less binary and can go infinitely deep. You can have sentences inside sentences inside sentences, and that's totally fine. Even when you look at distributed morphology or cartography, this is the kind of structure you find: very deep structures with different meanings. So semantic structure tends to be binary and very deep. But when you actually externalize things, it has to be a little different: the formal structure has to be bushy. You can't go infinitely deep, but you can go very wide. You have, for example, U for utterance, IP for intonational phrase, Phi for phonological phrase; these are all different units of actual speech, and you have to arrange them not in an infinitely embedded configuration but right next to each other, as equals. So what the language faculty has to do is take the two formal structures these different systems have and optimize given the constraints of both, and a lot of what you see is a kind of matching or grafting. A common example is extraposition in different languages; this one is German. The sentence is "Ich will wissen, was Liebe ist," "I want to know what love is." The structure on the left is consistent with normal German word order: you usually put the object right before the verb, so here, with "was Liebe ist" as the object, you'd expect it to look like the left. But in reality, when the object is something phonologically really heavy, some kind of CP, some kind of
phrase, then it actually has to appear at the end of the sentence. And if you think about this in terms of phonological constraints, it makes a lot of sense: you're just trying to graft particular syntactic phrases onto particular phonological phrases, and sometimes that requires you to move things around from the order you might think is normal. It happens in English as well. We usually put adjectives before the noun, "the excited man," but if you want to say "excited for the future," which is its own little phrase, you can't say "the excited for the future man"; that doesn't sound right. You say "the man excited for the future," and those are two distinct phonological phrases. This happens all the time, and you don't even notice it. Now, if language is doing this, just optimizing between two different structures, then consider that in phonology, when you look at language differences, the differences lie in which constraints each language prioritizes. Presumably, if that's true in phonology, it should be true of syntactic variation too, because ultimately we're motivating syntax in terms of phonology. Traditionally children were thought of as learning syntactic parameters: you learn whether your language has verbs before objects or objects before verbs, something like that. But if the traits of syntax fall out from prosody, or some kind of interface constraint ranking, then the syntactic parameters shouldn't really be syntactic parameters; you should have prosodic parameters, and those interact to cause the syntactic patterns. There's nothing per se that you learn about the syntax of the language; you learn the external constraints, like prosody, and the syntax falls out. So what do I mean by that? One of the articles I sent around, and I don't know if anyone read it because it was only last week and it was like 40 pages, but what it was on was, you
know, the wh-parameter. Traditionally you have languages like English that move question words to the beginning of the sentence, and you also have languages like Japanese that keep question words where they semantically belong, wherever they sit in the sentence; they don't have to be at the beginning. Richards, in the article I sent out, notices that Japanese and English are actually doing the same thing. If you look at a Japanese question in a spectrogram, at how it's actually pronounced, you'll see that when you get to the question word there's a pitch compression that lasts until the end of the sentence. What's significant about that is that in Japanese the end is where you have complementizers, where you have the C, the scope of the sentence; in English the C is on the left, but in Japanese it's on the right. [Audience: In this sentence the question particle ka is left off, "do you know if," and that's totally fine. But if the question particle ka is left on at the end of the sentence, does the same thing happen?] I don't know; I don't think his data talked about that, and I don't know enough about Japanese to say. But the idea is that when you get to the question word in Japanese, what Japanese speakers do is compress everything from the question word to the C, on the right, into one phonological phrase. And Richards' idea is basically that all languages try to do this: they try to minimize the prosodic distance between the question word and where it takes scope, in C. Now, in English, C is on the left. English and Japanese, I should say, are the same in that they project phonological phrase boundaries on the left side. So Japanese can project from the phonological boundary at the question word and make one big phonological phrase joining all the rest of the sentence, so it's all in one phonological phrase. English can't do that, because while it also forms boundaries on
the left, the C in English is also on the left, so what English does is move the word out. Richards' account, and I'm not doing it justice, you should take a look at it if you're interested, basically builds a whole typology of languages that vary on these two parameters: whether you have C on the right or the left, and whether you build phonological phrases on the right or the left. If you look at languages across the world, you find that in each situation, whether you have a wh-fronting or a wh-in-situ language simply falls out from the prosodic factors of each language and nothing else, so you don't actually need a wh-parameter in the classical sense. He can even account for some data of wh-words moving to the right, which happens in Basque but is hard to handle in traditional syntax. Word order parameters are sort of the same. Kahnemuyipour notices that in traditional accounts of stress, you build syntactic structure, and as you build it, the phonological rules of the language assign where stress will be based on the structure being built up. Now, the problem with that, as Kahnemuyipour notices, is that it might be true, but this kind of theory overgenerates relative to the languages you actually find in real life, because theoretically you could have the stress on any particular word, and that doesn't actually happen. What actually happens is that languages universally stress objects over subjects and subjects over verbs: sentential stress will fall on the object, if not the subject, and so on. So the classical theories overgenerate. But if you take the perspective that what's really happening is simply an optimization, then instead of,
for example, syntax feeding phonology a string and phonology putting stress in places, you instead say: there are prosodic parameters in different languages, and each language will place its words such that they receive the appropriate level of stress. So that's the idea here. And this, it went off the side of the one slide where I have my own stuff, is an implementation of this in Optimality Theory; if you don't know what this is, it's fun. On the left are all the different potential word orders and phonological phrase parsings, and up here you have the different constraints: there are processing constraints and there are phonological constraints, so for example topics should tend to come before non-topics, like subjects before objects, you should have iambic stress, stuff like that. This is part of my qualifying paper. The idea is that if you feed a neural net, implemented in Optimality Theory, these kinds of constraints, you can actually get the word orders for free; you don't have to have separate rules saying that English is an SVO language or something like that. So, as a reorientation, and that was a lot of linguistics thrown at you: on this idea, linguistic alternations should be, one, motivatable by external interface constraints, like phonology and semantics, and two, accounted for with general, non-UG-specific cognitive architecture. Ideally this approach delivers both, because you're motivating things by these constraints, and you're also dealing with them in something modeled as a neural net, which is presumably similar to what the brain may or may not do, so you're getting this kind of stuff for free, is the idea. So the data in the previous slides are
examples of both, and again, this is a very minimalist interpretation of both. So, I've described language as synesthesia, but what does this actually mean for human cognitive life outside of linguistics? On this picture, language is making the non-conscious aspects of the conceptual-intentional system conscious by spilling them into your sensory-motor system. All of a sudden you become aware of all these different heuristics your mind had that weren't accessible beforehand. You can introspect; you can look at the sentences you're saying and ask whether they actually make sense; you can hold multiple sentences in your short-term conscious memory; you can dwell on a thought; you can memorize things more easily; stuff like this. We may have multiple levels of cognitive processes, some more conscious, some less, but this kind of synesthesia brings them all to the same level, so you can use them toward one end or another. So here are some whys that this may or may not answer. One: why is our conscious perceptive thought in language? In traditional theories of grammar you have some kind of language of thought; somehow it gets out, and eventually you enunciate it according to the parameters of your language, which on this view are mostly based on these interface constraints. But a good question to ask is: when we're actually thinking in our heads, why do we think in English? Why don't we think in some other form of logic? In a theory like this, the answer is that what's happening is not that language-like thought is bubbling up from the non-conscious mind into our consciousness and out
our mouths, or something like that. What's actually happening is that our conscious perceptive language is coming from the sensory-motor system, because that is the interface: you are aware only of the form of language which you externalize, and nothing else. Two: why is human cognition distinct from that of our non-human relatives? We have similar abilities, but a large portion of our general reasoning abilities are malleable by higher-level cognition. As I said on the last slide, we can second-guess things, we can evaluate things; although animals might have heuristics and biases similar to ours, we can dwell on them, use them for thinking, or of course exchange them. Three: why do humans acquire language so easily? Firstly, they acquire the syntax of the language as early as they hear the prosodic constraints. In this theory, when you are learning the prosody of a language, you are learning the syntax: when you learn where stress goes in a sentence, you're also learning where objects and subjects and so on go. And by all accounts prosodic learning starts extremely early, possibly even in utero, so babies still unborn might already know whether their languages are wh-fronting or not; that's just a possibility. So that's one thing speeding up acquisition: there is really no syntactic acquisition, just prosodic acquisition. Secondly, languages end up being local maxima of prosodic optimization. What I mean by that is, well, I can actually draw this; I don't have a whiteboard, so I'll just do it here. Imagine an n-dimensional space. This is two dimensions, but imagine it's n-dimensional. There's dimension one, there's
dimension two, and these two dimensions are the rank, the importance, of particular prosodic constraints. So these are two; imagine an infinity of them, or however many there are. Over this space we have a sort of topology with different lumps, and these are lumps of prosodic well-formedness: the higher the lump, the prosodically better that particular setting of constraints is. In a theory like this, since you have all of these different prosodic constraints working together, they converge on an optimal solution, and Richards gets at this: English's wh-fronting comes from the other parameters of the language. Basically every language is a Pareto-optimal solution to different constraints: if you change any one of them, things get worse, so all languages sit at local maxima. Every peak here is a language that could actually exist in real life, out of all the languages you could theoretically have. So if you're acquiring a language and you hear data consistent with, let's say, a point here, you now know that to get to the local maximum you should move up here: you say, okay, I've heard data consistent with this language, but the nearby language at the local maximum is probably what I'm actually hearing. With this kind of algorithm for children to acquire prosodically well-formed languages, you actually ease acquisition, and of course this would happen over a gajillion parameters, not just two. Another thing I was thinking about at the last minute is how this compares to what Tom has been mentioning about Natasha's work, about how we can hear partial sentences and not understand anything. I think there might be an answer even
for that here. If you think about synesthesia: let's say you have synesthesia and you see a five, and let's say fives feel green to you, green just being the color I picked. Now, if you see half of a five, it's not like you're going to see half green; it has to be totally categorical. So what's actually happening when we're processing language is not so much that our conscious mind is hearing things bit by bit and interpreting them; rather, the sensory-motor system, which is figuring these things out for us, will either send us a full five, the percept of greenness, or here the percept of meaning, and if it sends us half of that, it doesn't mean anything, because it wasn't interpreted in the sensory-motor system. Does that make sense? Right, it's supposed to be categorical: if the sensory-motor system sends you partial data, that doesn't mean anything, because you're not doing the processing in your conscious mind; it happens in this optimization between the sensory-motor system and the conceptual-intentional system. [Audience: The synesthesia idea is really cool because it actually matches the anatomy, the stuff I was talking about, that Jarvis paper: there being literally deeper connections between the auditory and sensory-motor regions of the brain. That's one of the genetic precursors of language, as opposed to other animals; it's common only to vocal-learning birds and humans. It's online.] Did you have a handout that you gave out? [No, but the name is Erich Jarvis.] Okay, I'll check that out. So, some conclusions, or restatements. The idea is that language
arose in humans not as a new mental operation like Merge, which is the claim in Hauser et al., but really from a merger of two different systems, something likenable to synesthesia; that's consistent, point one, with minimalist principles. Linguistic variation does happen at the interfaces, not necessarily in the traditional way; what's effectively happening is that you don't learn the syntax of a language, you learn the constraints, what is important in your language, in prosody and things like this, and the syntax falls out from that. The cognitive good of language is that you are made aware of all these lower-level processes in your brain and you can think about them; I listed them out again: you can introspect, second-guess, memorize more easily, stuff like that. And the idea of Merge and a narrow language faculty isn't a necessity; you don't have to posit one. Evolution, in a lot of ways, progresses by repurposing and changing old machinery, not necessarily by making new operations. It also lets you accommodate the other article I sent out, the Gallistel one, on how some of the cognitive properties attributed to Merge or the language faculty are actually present in animals. This is consistent with that data: you can say that animals do have numeracy and deixis and some of the other things that article talks about; it's just that they can't employ them at the level we can, because they don't have this synesthesia that brings them to consciousness. So that's it: questions, comments? [Audience: It seems that you need to address certain kinds of phenomena that are not obviously related to the phrase-level things you talk about in terms of intonation. You might map that out, or somebody might, but how, in this way of thinking?] Well, I think
there are different ways of dealing with it. One is the possibility that in the conceptual system something like an anaphor and something like a pronominal are just different things, and they're reflected in language as such; that's sort of a cop-out. But I also don't lend too much credence to a lot of the binding principles, just because a lot of them are sensitive to things like linear order in certain situations, and a lot of them show variation at the edges. Now, that said, my perspective is that there are some things I can't explain, including, for example, the difference between raising and control. There are some things that traditional generative grammar does really well, and I'm not taking those things on; I'm leaving them aside and focusing on the things I want. So I'm basically saying I don't have a good answer, if that's what you're looking for. [Audience: But if you're viewing this as a project, then these are the kinds of things you need to integrate.] Yeah. [Audience: Does every language have an intonation pattern that satisfies the criteria that you need?] Well, their intonational patterns are different, but what are you getting at? I don't know, because I don't know about all the different languages. Are you asking if they're all local maxima on that thing I was drawing? [Audience: Well, that's one question.] I would, without sufficient evidence, probably assert that that's the case: that no language could be prosodically improved in general by changing one parameter. Now, another question would be, since there's been a lot of research on artificial languages and whether people can learn languages that violate UG in traditional terms, can you teach, in a laboratory setting, someone a
prosodically poorly formed language it's a question might be a way of pursuing this kind of stuff but I think in general implicit here is I would claim that yes I think that all languages in some sense all languages that actually just exist in the world aren't since some you know prosodically optimal for their you know local maximum different ways of assessing them I just don't know what about ASL like what's the intonation pattern or stress pattern there I'm just ignorant of it well I mean I can't tell you specifically about it but I guess my theory would sort of predict that in the same way that there are you know constraints the spoken languages there are constraints to how you contort your hands things like this and presumably languages that are built on different media might not necessarily share exactly the same kind of you know you know in different modalities you might have different constraints and you might expect languages to look totally differently in formal terms now I don't know enough about signed languages to opine on that but that certainly something I sort of want to know constraints well I honestly I frankly don't know enough about sign languages to even know how to validate them so I don't I'm not really sure I mean I guess well if I make some sort of drastic claim about why something happens in spoken language for I mean first WH fronting right so Richards claim about you know minimizing prosodic differences is there some corollary of that in sign languages if there's not maybe you know it's something modality dependent if it is maybe either the theory is wrong or it is there's something deeper than even the I mean first one one example I was thinking about is constituency right so you might want to put things and let me zoom back so you might want to put yeah you might want to put you know phrases in this kind of constituency because oh it's prosodically well-formed and it's nice to have phonological phrase phonological phrase phonological 
phrase and nothing else but it could also be something like just general processing right so it it's easier to process things in chunks and the phenology is just that the phenomenal the fact that we put them in prosodic phrases or something so I guess there are a lot of different ways to interpret it and I I don't know within the case of sign language where that would lead you go back to your chart the optimality chart right what was that so you've heard Dave Medeiros yeah that's right well I will say so I have a typology this this chart is not actually complete I ran I should have mentioned this but I ran you know certain constraints in the grant like and ran them through a program that gives you a typology of what languages can occur and my answer my theory is actually more restrictive than Dave so I actually rule out I think the OSV which is the one he rules out and also I think OVS one of the other object initial language I think it's so I have all the subject before but object languages and I think also VOS is possible now I could rejigger the constraints and get things differently but as my theory right here predicts you know for particular particular word orders and that's something like 99% of languages so I just sort of left it there so with OSV just be blocked by it's a myriad of constraints and that's what yeah basically and it's weird how this actually works out so one of the constraints that you know at least counts against it is topic first which of course one subject before objects that actually doesn't although if you get rid of that you actually get basically the same grammar it's really strange how it works what actually happens you get more I think VSO grammars it's something really weird but yeah it's really just a conspiracy of constraints none of them say of course you know star 213 but they sort of incidentally rule that out and similar word orders so that so the brackets are supposed to be like phonological phrases so for example 
phonological phrase so for example a that's like the subject is in one phonological phrase V O is in another phonological phrase as opposed to like D where the subject in the verb would be in one phonological phrase that's sorry if you're not clear from that before so I'm pretty hot this is my qualifying paper on sort of or this chart is sort of part of that so I'm happy with results there at least and yeah all right thanks okay yep
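The factorial-typology computation described above — running a constraint set through a program that reports which word orders can ever win — can be sketched in a few lines of Python. The three constraints below are hypothetical stand-ins for illustration, not the actual constraints from the talk, so this toy typology will not match the speaker's chart:

```python
from itertools import permutations

# Toy constraints (hypothetical stand-ins, NOT the speaker's actual set):
# each maps a word order, e.g. ('S', 'V', 'O'), to a violation count.
def topic_first(order):
    # violated when the subject (standing in for the topic) is not initial
    return 0 if order[0] == 'S' else 1

def head_early(order):
    # one violation per argument preceding the verb
    return order.index('V')

def object_adjacent(order):
    # violated when the object is not linearly adjacent to the verb
    return 0 if abs(order.index('O') - order.index('V')) == 1 else 1

CONSTRAINTS = [topic_first, head_early, object_adjacent]

def optimal(ranking, candidates):
    """OT-style strict domination: filter candidates constraint by constraint."""
    survivors = list(candidates)
    for constraint in ranking:
        best = min(constraint(c) for c in survivors)
        survivors = [c for c in survivors if constraint(c) == best]
    return survivors

# Factorial typology: collect the winner(s) under every total ranking.
candidates = list(permutations(['S', 'V', 'O']))
typology = set()
for ranking in permutations(CONSTRAINTS):
    for winner in optimal(ranking, candidates):
        typology.add(''.join(winner))

print(sorted(typology))  # → ['SVO', 'VOS'] with these toy constraints
```

Note that no single constraint here bans OSV outright; it simply loses under every possible ranking — the same "conspiracy of constraints" effect the talk describes.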