Thank you. I sincerely thank the organizers for inviting me, and thank you all for staying to the very end. It's a real honor to be closing the session. It has been a very diverse session, with talks from different fields, and the topic I'm presenting is a very cognitive one. In line with the theme of the conference, I will present evidence, or examples, of how my cognitive studies have been helped by neuroinformatic analysis of brain data.

So, object concepts: what do I mean by object concepts? I take a very simple definition. Suppose we see a very simple stimulus. There are many things we know about it that are not contained in the external stimulus: what this thing looks like from the back, what sounds it produces, where it can be found, something as abstract as its function for us as humans, and of course the words associated with it. All this knowledge is represented in the brain. Where, and how?

The dominant view of the past two decades, developed with the help of neuroimaging, is really governed by the philosophical idea of Locke: that we represent knowledge through experience. It is a rather reductionist view: all this rich knowledge is reduced to the sensorimotor experience we have with the object. For the example I just gave, what it looks like is represented in high-level visual cortex, what sound it produces in auditory cortex, and how it moves in the motion area MT. Depending on the theory, we may also need a binding site, in the ATL or other cortical areas, to bind these multimodal, experience-based representations. This is a very nice framework, and the supporting evidence came mainly from the early fMRI studies, such as this pioneering study.
Subjects listened to a word or looked at a picture of the same item: retrieving what color it has activated visual areas, and asking what kind of action it involves activated frontal areas. There are many other lines of evidence: if you compare food pictures to control stimuli, you activate the insula, the taste region, more strongly, and that activation is actually modulated by how hungry you are, by blood sugar. So it is a very nice framework, and it has dominated our interpretation of all the semantic activation areas. But are we happy with it?

I am going to present the two main questions we have been thinking about in our lab concerning how satisfying this framework is as an explanation of semantic representation. The first is: what is actually represented at these so-called sensory sites? I will use visual shape, object shape knowledge in the ventral visual pathway, as an example, and ask whether it is a simple memory trace of the visual experience we have with the object. The second example is really data-driven: how these widely distributed regions are connected, and how the connectivity pattern drives our understanding of how we represent semantic knowledge.

The first part, then, zooms in on the ventral visual pathway, supposedly representing our knowledge about the shape of objects. As we go into high-level visual cortex, the signature is that we go from very simple visual features to clusters that are more responsive to faces, scenes, objects, or animals. Where do these clusters come from?
Most of the explanation comes from the visual domain: how these different things look different. We look at faces with the fovea; we look at scenes with peripheral vision; animals have more curved features, while artifacts have more straight lines. But there is a recent wave of findings of very similar object-category selectivity in congenitally blind subjects. For instance, people have seen scene activation in the PPA in blind subjects; people have found the visual word form area when blind people read Braille, or when they receive letter shapes through sounds; and in some work from my own lab, we found that blind people also activate the PPA when they hear the names of big objects.

Here is a simple example of how these experiments are usually done. This is the parahippocampal place area, which is highly activated by large objects or scenes in comparison to other objects, and surprisingly we also find it when congenitally blind people, who have never seen these things, hear their names. So this wave of findings raises the question of how visual these patches really are. But these were independent studies looking at different clusters, each specifically looking for these results.
So, in an attempt to understand more comprehensively how visually based this patch of visual cortex is for object knowledge, we did a more comprehensive study characterizing the functional fingerprint and the connectional fingerprint of each voxel, in both blind and sighted subjects. Both groups performed fMRI experiments in which they heard the names of many, many different categories, so for each voxel we obtained a response profile, the functional fingerprint, for both blind and sighted. We also obtained the resting-state connectivity pattern of each voxel in ventral visual cortex. By comparing blind and sighted, we got two maps showing how similar the two groups are in their functional and connectivity fingerprints. Red indicates voxels where blind and sighted are extremely similar, where we really cannot distinguish one group from the other; the blue ones are those that are very different.

It is not so surprising that as you go more anterior, things become more abstract in a way. But there are many patches we did not know about. For instance, the posterior lateral fusiform is more different between blind and sighted, while the medial anterior part is more similar. So it is not that everything is visual, or that everything is supramodal. The same pattern holds for the functional fingerprints, and of course the two maps look quite similar: if a region is affected by visual experience in its functional fingerprint, it is also affected in its connectional fingerprint. These two go hand in hand, which fits very well with the ideas the other speakers have talked about.

So what are these regions? We looked at three extreme clusters by category.
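The voxelwise fingerprint comparison just described can be sketched roughly as follows: for each voxel, correlate the blind and sighted groups' category-response profiles to obtain a similarity map. Everything below is synthetic illustration; the voxel and category counts are made up, and the real analysis used measured fMRI response profiles (and, analogously, resting-state connectivity profiles).

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_categories = 500, 12

# Hypothetical group-averaged functional fingerprints:
# one category-response profile per voxel, per group.
sighted = rng.normal(size=(n_voxels, n_categories))
blind = sighted + rng.normal(scale=0.5, size=(n_voxels, n_categories))

def fingerprint_similarity(a, b):
    """Pearson r between the two groups' response profiles, voxel by voxel."""
    a = a - a.mean(axis=1, keepdims=True)
    b = b - b.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.sqrt((a ** 2).sum(axis=1) * (b ** 2).sum(axis=1))
    return num / den

sim_map = fingerprint_similarity(sighted, blind)  # one r value per voxel
```

Projected back onto the cortical surface, such a map would show the red (similar) versus blue (different) pattern described in the talk.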
First, the extreme where blind and sighted are super similar: this is really the cluster where you cannot distinguish blind from sighted in either the functional or the connectional fingerprint. This is the parahippocampal place area, and whether the sighted look at pictures or the blind listen to words, it is selective for, more strongly activated by, scenes and furniture, big things, than by other objects. Another such cluster is in lateral occipitotemporal cortex: again, both blind and sighted prefer things with body-related functions, like tools and body parts.

But if you look at the lateral fusiform area, the area that usually holds faces and animals, when the sighted look at pictures they really prefer these animate categories: faces, mammals, reptiles, birds, fish, bugs. When the stimuli become words, however, for the blind this area shows no selectivity at all, and the connectional fingerprints are also quite different. This distinction between medial and lateral fusiform was recently highlighted in a Nature Reviews Neuroscience paper by Grill-Spector, describing how these two patches differ by cell type, by connection, and by function. Our study added another dimension on which they differ: the lateral area is very much shaped by whether the input is visual, but the medial area is not.

Why? We were puzzled for quite a few years after we got those results, and we think it may really relate to what kind of things these areas process. For animals and fish, in evolution there is not much we can do about them: we co-evolved with animals, and we perceive them. But for artifacts, things are very different: artifacts exist because of people. We make the tools, we make complex tools for our purposes. The fact that something is shaped flat is because we made it flat so that we can sit on it, so that we can put things on it.
We make it long so that we can hold it. So our intuition, a proposal put forward in a recent paper we wrote, is that these animacy differences not only capture how things look, but really reflect evolutionarily driven differences in how we parse these objects: for artifacts, shape indicates other aspects of the same object, but for animals, not so much. Our speculation is therefore that for artifacts the representation here is less visual for this reason.

Do we have explicit evidence about the artifact representation here? We did explicit experiments looking just at shape knowledge in sighted and blind subjects. For these common objects, the blind, although they cannot see them, can tell you very well what they feel like, from other modalities: it is square, it is round, it has a tail, and so on. So here are the behavioral ratings: independent ratings by college students of how similar two things are in shape. This is the rating matrix from a control group of sighted subjects, and this is the rating matrix from a group of congenitally blind subjects. The fact that the behavioral ratings are highly similar indicates that we do not need vision to arrive at the same shape knowledge.

The next question is where this shape knowledge sits, for the blind and the sighted, and our results indicate that for both groups it is in IT. We did representational similarity analysis on the neural responses in IT: this is the neural RDM, the similarity matrix, for sighted IT, and this is the neural RDM for blind IT, and both of them correlated significantly with the behavioral shape ratings. We did many control analyses, making sure this is not semantic similarity, not tactile similarity. The shape representation here really does not care whether you learned the knowledge visually, tactilely, or verbally. That is the message of the first question.
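The representational similarity analysis used here follows a standard recipe: build a neural representational dissimilarity matrix (RDM) from multivoxel patterns, build a behavioral RDM from the shape ratings, and rank-correlate their off-diagonal entries. A minimal sketch with made-up data follows; the item and voxel counts are arbitrary, not those of the study.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_items, n_voxels = 20, 100

# Hypothetical multivoxel response patterns in IT, one row per object.
patterns = rng.normal(size=(n_items, n_voxels))
neural_rdm = squareform(pdist(patterns, metric="correlation"))

# Hypothetical behavioral shape-dissimilarity matrix; in the study these
# came from pairwise similarity ratings by sighted and blind raters.
behav_rdm = squareform(pdist(rng.normal(size=(n_items, 5))))

# Compare only the lower triangles, so each item pair is counted once.
tril = np.tril_indices(n_items, k=-1)
rho, p = spearmanr(neural_rdm[tril], behav_rdm[tril])
```

In the real analysis, a significant positive rho for both sighted and blind IT, surviving controls for semantic and tactile similarity, is what supports the claim that this shape representation is input-modality independent.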
We hope to adjust how we think and talk about grounding: the idea that knowledge is grounded in sensory experience. We hope to have illustrated that it is not that simple. These are not purely visual-format representations, but they are not non-visual, super-abstract knowledge either. It differs by domain: animals are represented more visually here, but artifacts more supramodally, maybe because our system has learned, through evolution, that certain shapes indicate other types of function.

The second question we asked was really prompted by the puzzle of how these distributed systems are bound together. We saw many lines in these diagrams, but the lines are actually hypothetical: we assume the regions have to be bound, so we draw lines, and this is supposed to be a hub where everything is linked. But what is the reality of how things are linked? We did not really have strong predictions; if you believe these are sensorimotor regions, everything should be linked together.

Here is the reality. This is a very good, widely accepted meta-analysis of semantic processing, done by Binder et al. (2009). They summarized all the well-conducted studies comparing semantic processing to non-semantic processing, and these are the activation peaks they got. It is really a lot. Would you interpret all of these as some kind of sensorimotor representation, and how are they integrated?
That is the question we asked. Since we did not know, we just looked at how the brain connects them. We did some very simple connectivity analyses, for example of the major white-matter pathways, and I like this one because it reflects what I learned from neuroinformatics. We took all these nodes, the masks from Binder et al., who shared their meta-analysis result masks with us, and measured the resting-state connectivity among them in healthy college students, just at rest. We got a lot of connections, but not everything is connected to everything. We then did a simple modularity analysis, asking whether the nodes form some kind of community structure, and surprisingly the answer is three very stable modules. We ran a large sparsity range, from very sparse to fully connected; the results are super stable across many different preprocessing techniques, and we replicated them in an independent subject group. The nodes very stably segregate into three networks.

What are they? We did not really know, but just by eyeballing, they look like three very familiar networks. We already know about the DMN, the default mode network, from the previous talk; the frontoparietal control network is very similar to this one; and this is a very classical language network on the left. By doing this kind of modularity analysis, we were also able to identify which nodes are important in linking these systems, and again, quite to our surprise, we got the anterior temporal lobe, posterior MTG, angular gyrus, and MFG. Throughout the literature, these regions have been proposed as semantic hubs on many different kinds of evidence: semantic dementia evidence, fMRI evidence, lesion evidence. And indeed, in this network community structure, they are the connector hubs. But there are different types of connector hubs: this one connects these two networks, and the pink dot connects those two networks. So this converges with the previous knowledge but adds new information about what each hub is connecting.

So what are these three networks? We went back to the literature, to their anatomical properties. This is a widely accepted high-level linguistic processing mask, a localizer from hundreds of subjects performing language versus non-language tasks. This is the DMN. We usually think of the default mode in terms of self-simulation, thinking about the internal self, but another aspect of the default mode, in a different context, is that these are regions where multiple modalities of information converge: this is from a meta-analysis where they contrasted people thinking about audition, vision, and touch and looked at the overlapping regions, and these are the default mode core regions. And this is the semantic control network, from independent studies showing high versus low semantic control contrasts. We think these align quite well with the networks we identified.

So we tentatively proposed that semantic processing is really the orchestration of these three different systems. What are they? To be very honest, we are careful in interpreting their functions, because we are really identifying them from brain patterns and then asking what functions they serve. In combination with cognitive theories of semantic processing, our guess is that these reflect two formats of representing semantic knowledge: one through different kinds of experience, the other through symbolic processing.
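The modularity analysis described above (meta-analysis nodes, resting-state correlations, community detection over a sparsity range) can be sketched in miniature. Everything below is synthetic: the planted three-block time series, the node counts, and the single 20% edge threshold are invented, and community detection uses networkx's greedy modularity algorithm, which may differ from the method the lab actually used.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(2)
n_per_block, n_blocks, n_tp = 10, 3, 300

# Hypothetical node time series: nodes in the same block share a common
# signal, standing in for real resting-state data over the ROI set.
signals = rng.normal(size=(n_blocks, n_tp))
ts = np.vstack([signals[b] + 0.8 * rng.normal(size=(n_per_block, n_tp))
                for b in range(n_blocks)])

corr = np.corrcoef(ts)
np.fill_diagonal(corr, 0.0)

# Keep only the strongest 20% of edges (one point on a sparsity range).
upper = corr[np.triu_indices_from(corr, k=1)]
adj = (corr > np.quantile(upper, 0.80)).astype(int)

G = nx.from_numpy_array(adj)
modules = greedy_modularity_communities(G)  # list of frozensets of node ids
```

With cleanly planted blocks, the detected communities respect the three blocks; on real data one would sweep the sparsity threshold and check the stability of the partition, as described in the talk.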
That is, how we learn so much knowledge through words, through language, and how meaning gets built that way; and of course we need the control system to act upon all these representations.

And here is another tentative, preliminary result. Before, when we thought about semantic space, we thought about how things look, about semantic features; here we took another approach. This is a language-corpus-based distance matrix. We took 360 words and measured how closely related they are using something like word2vec, purely driven by language corpus co-occurrence, using the kind of neural network model developed by Google. We can build this similarity space, capturing how closely and how often words occur together in language contexts, through statistical learning. We then also collected BOLD responses for each of the 360 words and correlated the neural patterns with this language distance space. I want to highlight that across the three modules, it is really the green module whose response pattern correlates with the language space, but not the other modules. I take this as an indication that it indeed represents meaning derived from the language environment.

A final note: when we look at structural connectivity, we are sort of converging on finding three modules as well, but I will skip this for the sake of time.

So this is the summary of the second part, on how things are connected. Did we find connections reflecting those sensorimotor experiences?
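To make the corpus-based similarity space concrete, here is a toy version of the underlying idea: represent each word by its co-occurrence counts over a small window and take cosine similarities between those vectors. The actual analysis used word2vec-style vectors trained on a large corpus for 360 words; the four-sentence corpus and window size below are invented purely for illustration.

```python
import numpy as np

# Toy corpus standing in for a large language corpus.
corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the hammer hit the nail . the saw cut the wood .").split()
vocab = sorted(set(corpus) - {".", "the"})
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a +/-2 token window.
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    if w not in idx:
        continue
    lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
    for j in range(lo, hi):
        if j != i and corpus[j] in idx:
            counts[idx[w], idx[corpus[j]]] += 1

# Cosine similarity between co-occurrence vectors: a tiny analogue of the
# language-driven similarity space that was correlated with BOLD patterns.
unit = counts / np.linalg.norm(counts, axis=1, keepdims=True)
sim = unit @ unit.T
```

Even in this toy space, words from shared contexts (cat, dog) end up more similar than words from different contexts (cat, nail), which is the property the talk exploits when correlating the language space with each module's neural patterns.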
The answer is no: we were actually surprised by the three modules we found, and we think these three modules help us understand how different formats of semantics can be found in the brain.

I started out by asking two big questions about the accepted view of semantic processing, and I want to conclude by saying that thinking of all semantics as just our experience with the world, plus a way of combining it, is not the whole story. First, the experience-based representation is more abstract and more complex than that; for one thing, it is affected very much by the domain of knowledge. Second, I wish to bring language back into the semantic representation of humans. We learn so much knowledge through words; we learn the meaning of a new word from explanations in other words, and that kind of space is not captured by sensorimotor experience. So this is something I am proposing and starting to think about. Of course, many new questions arise if we think of mental representation this way, but that is future work.

I want to thank my lab at BNU, Beijing Normal University; it is a very nice institute and I welcome you to visit. I also thank my collaborators on the blind studies, Alfonso Caramazza at Harvard and Marius Peelen, and my local collaborators who helped me with the network analysis. Thank you.

Questions for Yanchao? Yes.

Q: About the word-vector analysis: you argue that the symbolic space goes with the symbolic network, but those word vectors are defined by the semantic relations between words. So why do you think that analysis addresses the symbolic network rather than the semantic network as a whole?

A: It really depends on how one defines semantics. If we think about the brain's semantic network, this whole thing is the brain's semantic network, so the green module is part of it.
It is just a different format. Going back to the three-network diagram, the control network would be the blue one.

Q: Why would that network control the function of the other networks? Do you see it as just a way of linking them? What is the role of the hubs sitting between these networks, and what roles do the networks themselves play?

A: To be honest, I don't know, but I think this way of looking at the network raises many new questions that we did not think about before. Before, we studied these regions individually and put a label on them: this one is more visual because it is close to MT; this one is more auditory because it is close to STG. But the fact that these regions belong to the same community means they are linked together for some function. Then what is their role in this big community? Are the hubs just transferring information, or are they more important than that? I really don't know, but we did not ask those questions before because we did not know the structure.

To answer the first question, about the control system: we need control for all cognitive tasks, but these regions are found to be more strongly activated by semantic control relative to other demands matched on task difficulty, for instance matched on RT. And we think both representational systems are very complex. We have experiential information for things like a cell phone, how we touch it, how we use it, what it looks like, and accessing the right information in the appropriate context is demanding.
We definitely need control for that; I don't have a better explanation.

Our results actually highlight the anterior temporal lobe very well. Although this node gets assigned to the green module, because with modularity analysis every node has to be assigned to some module, it is really a strong connector between the two types of representation, the symbolic and the experiential, the red and the green networks here. That fits very well with the semantic dementia evidence: if you damage this region, you lose concepts in both verbal and non-verbal semantic tasks. So that is our interpretation; I think it offers quite direct evidence for the ATL being one of the hubs.

Q: The masks with which you explored the connectivity, are they exactly the same as in the original studies?

A: We used the masks from Binder et al.: we took their meta-analysis results, the peak voxels, and built a sphere around each peak. I am fairly confident in them, because Binder actually wrote a TiCS paper in 2011 commenting on how semantic understanding does not show very modality-specific activations, so there has been a big discussion about it. The tasks driving these meta-analyses involved listening to and understanding language, so I think the perceptual aspects of object processing are partialed out, because maybe you do not need those to understand language.

Q: What about the actual pictures?

A: There are some, but these are the raw data of all the studies, and these are the meta-analysis peaks where the activation likelihood is above threshold, the ones that tend to be consistently activated across studies.