This is my "Processing the Future" presentation of Pickering and Garrod's 2007 article. I'm calling it "processing the future" because Pickering and Garrod propose a model in which a listener's brain tries to emulate the upcoming language while they're listening.

The first thing is that everybody has to get on the same page, and Pickering and Garrod frame this by pointing out that participants synchronize a number of things when they're in conversation with each other. That can include the syllable rate of running speech. It can include imitating each other's grammatical structure: if I say something like "oh, my son got a banana for a snack today," you might come back with "what kind of snack did he get?" whereas if I had said "a snack was taken by my son," you might come back with "what kind of snack was taken?" In the first example I'm using an active voice and in the second a passive voice, and speakers will synchronize that to a greater or lesser degree. Certain aspects of meaning on the semantic level can be synchronized too. And so can the phonetic realization of repeated words: if I happen to say something in a specific way, particularly a word we're using over and over again (say I'm talking about my fantasy football league), you might raise and tense the "a" vowel the way I did as we synchronize. The amazing thing is that some aspects of this are seen even in children as young as four or five years old.

My suspicion as to what's going on here is that mirror neurons are at work. If you happen to have the Menn psycholinguistics textbook, you can read a fuller explanation there, but essentially they're a specific kind of neuron involved in social interaction: when you watch what someone does, you simulate it in your head with these mirror neurons, which helps you do those things yourself at some future date. The difference here is that the future date is very near in the future, and it pertains to language.

So what Pickering and Garrod are suggesting in their emulator model is that listeners aren't just sitting there passively processing language; they're trying to predict what's coming next. Listeners can do this when the context is strong, and they do it on a phonological level, so that each incoming bit of a word narrows the prediction as they go.
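If it helps to make that narrowing concrete, here's a small toy sketch of my own; it's not code from the article, and the mini-lexicon is invented, with letters standing in for phonemes:

```python
# Toy sketch of phoneme-by-phoneme narrowing: as each sound arrives,
# the set of word candidates consistent with the input shrinks.
# The lexicon is invented for illustration; letters stand in for phonemes.

LEXICON = ["snack", "snake", "snail", "son", "banana", "book"]

def narrow(heard_so_far, lexicon=LEXICON):
    """Return every word still consistent with what has been heard."""
    return [w for w in lexicon if w.startswith(heard_so_far)]

heard = ""
for phoneme in "snac":
    heard += phoneme
    print(f"heard {heard!r} -> candidates: {narrow(heard)}")
# heard 's' -> candidates: ['snack', 'snake', 'snail', 'son']
# heard 'sn' -> candidates: ['snack', 'snake', 'snail']
# heard 'sna' -> candidates: ['snack', 'snake', 'snail']
# heard 'snac' -> candidates: ['snack']
```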
They do it on a syntactic level, trying to figure out what grammar could be coming next. And they do it on a semantic level: what are the meanings of the words I'm hearing now, what sorts of words are going to come next, and how does the meaning of what I'm hearing now affect the meaning of what's coming?

Essentially, the emulator shows up as an activation of Broca's area, a part of the brain involved in speech production rather than interpretation. What researchers have found is that the muscles of the tongue and lips activate to some degree while people are listening to speech, and it's not just any old activation: it appears to be a mirroring (here are the mirror neurons again, maybe) of what they're hearing from the other speaker. We also see increased activity in Broca's area while listening, despite the fact that Broca's area is involved with speech production. This is really interesting: while people are listening, they're activating areas related to producing speech.

So what's causing this? I think this is a quote worth pulling out: "language comprehension is highly incremental, with readers and listeners extracting the meaning of utterances as they encounter them." That means we're trying to interpret language as it comes at us, even down to the level of individual phonemes, with prediction narrowing down what word is coming. It's a process-as-you-go thing, and I think that's really important, because if you're processing as you go, this prediction doesn't seem so wild: as you try to predict what's happening in the moment, each thing you hear narrows down the options.

So suppose we're in a library and the first thing I say is "b...". Your brain is already actively interpreting and trying to predict: what's going on, what could this possibly be? We're in a library and he just said "b," so your brain is trying to get ahead and narrow things down. Maybe I'm talking about my friend Bob, maybe I'm talking about books, maybe I'm talking about bananas, but books seem likely in a library, unless we've just seen my friend Bob, and bananas don't seem likely at all. Next I say "boo...", and all of a sudden your brain zeroes in on predicting what's next: "I'll bet 'book' is coming." And when we're right, there's a payoff in your brain from that prediction, which lets you keep the processing speed of language high. (I'll come back to this example with a little sketch in a moment.) As I'm sure you can tell from listening to me, I speak very quickly, so you're predicting in order to keep up with my quick rate of speech.

So is it just language that human cognition uses this sort of predictive pathway for? The answer seems to be a resounding no. Here's another predictive system: people are shown a sequence of pictures of someone performing an action in various poses. When the poses are reasonably in order, people make their decision about the sequence more quickly than when the poses are out of order, because they're making predictions as they see each image. What's going on in their heads is something like, "based on past experience, what would I personally be likely to do next?" and when the prediction of what comes next lines up with what actually appears in the series of pictures, the decision happens much, much faster.
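Going back to the library example for a second, here's another toy sketch of my own (the words and the weights are invented, not from the article) showing how a context prior and the sound-by-sound narrowing combine:

```python
# Toy sketch of context-weighted prediction: the context sets a prior
# over likely upcoming words, and each incoming sound renormalizes it
# over whichever candidates are still consistent. Weights are invented.

CONTEXT_PRIOR = {
    "library": {"book": 0.7, "bob": 0.2, "banana": 0.1},
}

def predict(context, heard_so_far, priors=CONTEXT_PRIOR):
    """Rank words consistent with the input, weighted by the context."""
    weights = priors.get(context, {})
    consistent = {w: p for w, p in weights.items()
                  if w.startswith(heard_so_far)}
    total = sum(consistent.values()) or 1.0  # guard against no matches
    return {w: round(p / total, 2) for w, p in consistent.items()}

print(predict("library", "b"))   # {'book': 0.7, 'bob': 0.2, 'banana': 0.1}
print(predict("library", "bo"))  # {'book': 0.78, 'bob': 0.22}
```

The point of the toy is just the shape of the computation: the context sets up expectations before the word even starts, and each incoming sound redistributes the probability over whatever candidates remain.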
This prediction seems to apply to language too, according to Pickering and Garrod, and I don't see any reason to doubt it: if we see people doing this in other areas, why wouldn't we expect cognition to do the same thing with linguistic sequences that it does with, say, motor sequences? It doesn't seem like a large jump; we've already got the cognitive machinery in place to make this sort of prediction.

You'll notice this in your own casual use of language: when someone says something weird, it pops right out at you, and this effect of implausibility, of the unexpected, comes out most strongly in highly predictable contexts. Say we've got a conversation going on a specific topic; you're making these predictions, and then all of a sudden someone throws in something that makes you go, "well, where did that come from? We were talking about getting snacks and all of a sudden they're talking about baseball? What? I don't get it." I think that has a lot to do with why non sequitur utterances throw people so much in the moment.

This predictive ability also allows people to bridge gaps in the signal. Say you're in a noisy restaurant and you're talking and there's a crash, the crash of a plate being dropped, say. You can still figure out what people are saying, because your brain is making these predictions, and the predictions bridge you over until the crash is over and you can resume hearing them clearly again. (There's one last little sketch of this idea at the very end.) And it's really socially aided, but then everything about language is social: with this give-and-take, where your brain activates the production system to basically pretend to be the other person and try to predict what they're saying, you can gloss over noisy backgrounds, you can gloss over a fast rate of speech, and I think it makes for a faster way to process language, because you're actively participating in what the other person is saying.

So I think this emulator model is pretty robust in its description of what's going on, and as someone who uses language, even in a casual fashion, it seems pretty plausible that this sort of thing is happening. What I didn't know was that comprehension invokes production, as opposed to the brain just making some straight-up guesses. Your brain doesn't seem to be making just any old kind of guess: it's activating the areas you talk with, Broca's area coming up with vocabulary, the tongue and lip muscles maybe making tiny, tiny motions in prediction of what the person is going to say next. I think this is pretty robust given, again from a casual language user's point of view, that you can communicate quickly even in a noisy environment, and I think this model explains some of what we're seeing. Thank you.
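One last toy sketch of my own for the noisy-restaurant idea (the sentence and the priors are invented, not from the article): when a stretch of input is masked by noise, the context's strongest prediction fills the slot.

```python
# Toy sketch of bridging a noisy gap: a word masked by noise is
# filled in with the context's best prediction. The sentence and
# the priors are invented for illustration.

CONTEXT_PRIOR = {"restaurant": {"plate": 0.6, "glass": 0.3, "fork": 0.1}}

def bridge_gap(words, context, priors=CONTEXT_PRIOR):
    """Replace any word lost to noise ('<NOISE>') with the best guess."""
    best_guess = max(priors[context], key=priors[context].get)
    return [best_guess if w == "<NOISE>" else w for w in words]

heard = ["someone", "dropped", "a", "<NOISE>", "in", "the", "kitchen"]
print(bridge_gap(heard, "restaurant"))
# ['someone', 'dropped', 'a', 'plate', 'in', 'the', 'kitchen']
```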