introduction and for inviting me. Yes, so we are interested in understanding the brain mechanisms underlying language, particularly word learning and word processing. We started from a neurobiological theory of word learning, which has been proposed over the past 15 to 20 years and which has quite a lot of experimental support. The idea in the theory is rather simple. What happens when children start learning how to speak? During babbling and early word learning, the articulation of a syllable or a word leads to activity in the inferior frontal lobe, on the left side specifically. And at the same time, the sounds produced by the articulation of the syllable are perceived by the superior temporal lobe, by the auditory cortex and the areas that are sensitive to speech perception. So the theory says: well, when you have concomitant activity in the brain and you have a Hebbian learning mechanism, these patterns of activity will become associated with each other. And this leads to the emergence of strongly connected ensembles of neurons which are distributed over these perisylvian areas, that is, areas around the Sylvian fissure. These structures, these cell assemblies, were hypothesized a long time ago by Hebb in the last century, and more recently Fuster has called them action-perception circuits. The idea is that these sets of cells are strongly and reciprocally connected and emerge as a result of Hebbian learning. Now, what I have described is the emergence of a circuit which associates articulation with auditory sound perception. But I haven't said anything about meaning. So how do we learn the meaning of these words, of these circuits that emerge? By means of the same mechanism, so again Hebbian learning: we take the view that language acquisition is grounded in sensorimotor experience.
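The core Hebbian association described above can be illustrated with a deliberately minimal toy sketch (not the speaker's actual model): repeated co-activation of an "articulatory" and an "auditory" pattern strengthens the weights between them, so that afterwards the auditory pattern alone reactivates the articulatory one. All sizes and rates here are arbitrary illustrative choices.

```python
import numpy as np

n = 20                        # units per toy "area" (illustrative size)
W = np.zeros((n, n))          # weights: auditory -> articulatory
eta = 0.05                    # learning rate (illustrative)

# Fixed binary patterns standing in for one syllable's
# articulatory and auditory representations.
artic = np.zeros(n); artic[:6] = 1.0
audio = np.zeros(n); audio[10:16] = 1.0

# Repeated co-activation: plain Hebbian outer-product update,
# clipped so weights stay bounded.
for _ in range(50):
    W += eta * np.outer(artic, audio)
    W = np.clip(W, 0.0, 1.0)

# Presenting the auditory pattern alone now reactivates the
# associated articulatory units -- a minimal "word circuit".
recalled = (W @ audio > 0.5).astype(float)
print(np.array_equal(recalled, artic))  # → True
```

The same principle, applied across many reciprocally connected areas rather than two, is what lets the circuits described next grow into distributed cell assemblies.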
So basically, when a new word form emerges in the brain, its activation correlates with the presence of sensory perception or motor action. And so these distributed cell assemblies now link up with the corresponding activity in the corresponding sensory or motor cortices. For example, when we learn the meaning of a word which has a very strong visual component, what typically happens in the learning situation is that the use of the word co-occurs with the presence of the object in the environment. The child will use the word, or will hear the word, at the same time as the object is being perceived. And I'm showing here, for example, the identity of the object, which is represented in the ventral visual stream. At the same time, we have activity in the perisylvian areas here. And so the co-presence of this activity leads to a larger cell assembly circuit, which now includes both perisylvian and extrasylvian parts, particularly in the temporal lobe, in the ventral stream of the visual cortex. The same idea applies when we learn the meaning of action-related words. Words which have a strong action component are typically learned while the action is being executed. Action execution leads to activity in the precentral and postcentral cortices, and in the prefrontal and premotor cortices as well. Now, when this activation co-occurs with the activation of the word circuit, so with the usage of the word, the cell assembly extends beyond the perisylvian areas and binds with cells which are active in the motor cortex. And the interesting point here is that there is now strong neuroimaging evidence showing that these cell assemblies are automatically reactivated: when we hear a word which is, for example, an action word, like kick or pick, activity in the motor cortex lights up automatically, even without the need to pay attention.
So the basic idea is that the meaning of a word is grounded in the motor cortex, in this case, or in the sensory cortex, in the case of visual object words. So, to summarize, what are the brain areas that I'm considering for the purpose of this talk which are relevant for word learning and meaning? During word learning, information about articulation leaves the inferior motor cortex, that is, the part of the primary motor cortex in the inferior strip. At the same time, the perception of the corresponding sounds activates neurons in the primary auditory cortex. At the same time, if we perceive a visual object, information comes in through the visual cortex. And if we execute an action, a motor movement, in this case a hand or arm movement, it is typically controlled by the dorsolateral part of the motor strip, so we also have information leaving from the activity there, which is then, of course, carried down to the muscles to produce the movement. Now, these four primary areas are actually neuroanatomically connected. What I have highlighted here in colours are the cortical areas through which bundles of fibres pass which enable the linking up of these four primary areas. These areas were, for us, the relevant ones for trying to simulate the acquisition of action-related or visually related words. And that's what we did. We identified the cortical areas relevant for modelling the learning of words, initially only word circuits, phonological word forms, and then their extension to words with a meaning, either a visually related or an action-related meaning. And we built a model which carefully replicates the connections that we know exist in the brain between these cortical areas. So now let me give you a few details about the model. I don't have much time to go into depth, but I'd like to stress that all the components of the model are neurobiologically realistic.
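The multi-area architecture just described can be sketched as a graph of reciprocally connected areas. The area labels and links below are illustrative stand-ins for a 12-area layout of this kind, not the model's exact connectivity; the point is only that any route from the auditory periphery to the visual or motor periphery must traverse the intermediate bridging ("hub") areas.

```python
from collections import defaultdict

# Illustrative area graph (names and links are assumptions, not the
# model's exact wiring): an auditory/articulatory perisylvian chain
# plus ventral-visual and dorsolateral-motor extrasylvian chains,
# bridged by hub-like areas "AT" and "PFL".
edges = [
    ("A1", "AB"), ("AB", "PB"), ("PB", "PFi"), ("PFi", "PMi"), ("PMi", "M1i"),
    ("V1", "TO"), ("TO", "AT"), ("PFL", "PML"), ("PML", "M1L"),
    ("PB", "AT"), ("AT", "PFL"), ("PFi", "PFL"),
]
graph = defaultdict(set)
for a, b in edges:          # all links are reciprocal
    graph[a].add(b)
    graph[b].add(a)

def path(src, dst):
    """Breadth-first search returning one shortest area-to-area path."""
    frontier, seen = [[src]], {src}
    while frontier:
        p = frontier.pop(0)
        if p[-1] == dst:
            return p
        for nxt in graph[p[-1]] - seen:
            seen.add(nxt)
            frontier.append(p + [nxt])

# A word circuit grounding a sound in vision or in action has to
# pass through the bridging areas between the primary cortices.
print(path("A1", "V1"))
print(path("A1", "M1L"))
```

This is the structural fact the later discussion of semantic hubs relies on: circuits for both visually grounded and action-grounded words necessarily pass through the same connecting areas.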
As I mentioned, the connections have been implemented on the basis of existing neuroanatomical evidence for such links. Each area here actually consists of two layers of cells, excitatory and inhibitory. The latest version of the model is actually a spiking version. One important thing to say is that the links between the areas, and also the recurrent links within areas, are not all-to-all, as many neural network models assume, but are actually random, sparse and topographic, which we know is also the case in the cortex. Here I'm just showing you how the projections from one cell in a certain area are topographic: one cell projects to a corresponding, topographically arranged set of neurons in another area, and the probability of a synapse being created falls off with distance, for which, again, we know there is evidence in the cortex. So essentially, by using this topographic, sparse and random connectivity, we try to imitate the typical patchy patterns of synaptic connectivity that are found in the mammalian brain. Just to give you an idea, what I'm showing here is an example of clustered fibres in V1, in primary visual cortex, showing where the synaptic connections between a neuron and other nearby neurons occur. And what I was representing with the grey cells here was exactly the same: the points where each cell makes contact with other cells. So hopefully you can see a resemblance between these types of projections. Right. Just briefly, the two layers of excitatory and inhibitory cells are closely coupled, and by means of this coupling we implement lateral inhibition. There are also global inhibition circuits, which regulate the total activity within the network, but I'm not depicting those here. Finally, as I mentioned, the model of a single neuron is just a leaky integrate-and-fire neuron with adaptation.
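The sparse, random, topographic projections described here can be sketched as follows: each presynaptic cell connects to a random patch of cells around its corresponding position in the target area, with connection probability falling off with distance. The Gaussian kernel, grid size and parameter values are illustrative assumptions; the actual model's kernel and parameters may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
side = 25                     # each toy area is a side x side grid of cells

def topographic_projection(p_max=0.3, sigma=3.0):
    """Sparse random connections from one area to another.

    Cell (i, j) projects to a patch around the same (i, j) position
    in the target area; the probability of a synapse falls off with
    distance (Gaussian falloff here -- an illustrative choice).
    """
    ii, jj = np.meshgrid(np.arange(side), np.arange(side), indexing="ij")
    conn = np.zeros((side, side, side, side), dtype=bool)
    for i in range(side):
        for j in range(side):
            d2 = (ii - i) ** 2 + (jj - j) ** 2
            p = p_max * np.exp(-d2 / (2 * sigma ** 2))
            conn[i, j] = rng.random((side, side)) < p
    return conn

conn = topographic_projection()
# Each cell reaches only a small local patch, not the whole target
# area -- the "patchy" connectivity pattern mentioned in the talk.
fan_out = conn.reshape(side * side, -1).sum(axis=1)
print(f"mean fan-out: {fan_out.mean():.1f} of {side * side} possible targets")
```

With these parameters each cell contacts only a few percent of the target area, concentrated around its topographically corresponding location.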
I don't want to go into the details, but you can find these in the publications. And finally, importantly, the learning rule. The model, as I mentioned, contains Hebbian learning, plasticity which is voltage-based and non-homeostatic. This is essentially a rule based on the postsynaptic membrane potential, with two thresholds on it, plus one threshold on the presynaptic firing rate, and it is based on the LTP/LTD model of Artola and Singer. Sorry, I just skipped the references. One final detail is that in the spike-driven version of this rule, we apply the synaptic changes only when there is either a pre- or a postsynaptic spike. So notice that this is not a direct implementation of spike-timing-dependent plasticity: it is a voltage-based rule, but it's spike-driven. So how did we simulate the actual semantic grounding of object-related words? Well, we took the model and, as you can imagine, we presented triplets of patterns, which we interpret as sensorimotor patterns. In the case of an object-related word, we have co-occurrence of auditory, articulatory and visual perception patterns. And in the case of an action-related word, we stimulated the network repeatedly with patterns, again auditory and articulatory, plus a pattern to the motor-cortex part of the network representing the execution of a movement. And what we observed, after many repetitions, and we're talking about 2,000 to 3,000 presentations of each triplet, is the emergence of exactly the kind of cell assembly circuits which I mentioned at the beginning, the theoretical construct which had been hypothesized. You can see here, I'm showing in white, I don't know if everyone can see, the cells which belong to one example of such a circuit in the 12-area network. And as you can see, the cell assembly extends from the six bottom perisylvian areas to the top extrasylvian areas.
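The voltage-based, spike-driven rule can be sketched as below. This follows the Artola-Singer logic described in the talk (two postsynaptic potential thresholds, one presynaptic rate threshold, updates applied only on a pre- or postsynaptic spike), but all threshold and step values are illustrative assumptions, not the model's actual parameters.

```python
def weight_update(w, pre_rate, post_v, pre_spike, post_spike,
                  theta_pre=0.05, theta_minus=0.15, theta_plus=0.25,
                  dw=0.0008, w_max=1.0):
    """Sketch of a voltage-based, spike-driven Hebbian rule
    (Artola-Singer-style LTP/LTD); parameter values are illustrative.
    """
    if not (pre_spike or post_spike):
        return w                           # spike-driven: no spike, no update
    pre_active = pre_rate >= theta_pre     # presynaptic rate threshold
    if pre_active and post_v >= theta_plus:
        return min(w + dw, w_max)          # LTP: active pre + strongly depolarised post
    if pre_active and theta_minus <= post_v < theta_plus:
        return max(w - dw, 0.0)            # LTD: active pre + weakly depolarised post
    if (not pre_active) and post_v >= theta_plus:
        return max(w - dw, 0.0)            # LTD: depolarised post without pre activity
    return w                               # below both thresholds: no change

# A presynaptic spike paired with strong postsynaptic
# depolarisation potentiates the synapse.
print(weight_update(0.5, pre_rate=0.2, post_v=0.30,
                    pre_spike=True, post_spike=False))
```

Note how this differs from spike-timing-dependent plasticity: the sign of the change is decided by the postsynaptic membrane potential, while spikes merely gate when changes are applied.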
In this case, being stimulated only on the right-hand side, in the motor area, the cell assembly extends in this direction; in the other case, for object-related words, the cell assembly of course extends on the other side. So now I'm just briefly going to show you a sequence of snapshots of what happens, after learning such words, when you stimulate these circuits with only the auditory pattern, which corresponds to perceiving the sound of a word. You can see the network responds initially here, in the auditory cortex, and then information quickly spreads, these are different simulation time steps, to the rest of the entire circuit. So there is what we call ignition of the cell assembly, and, as you can see at this time point, there is no input, but the circuit is self-sustaining. This is, of course, because of the recurrent connections and positive feedback loops which exist between all of these excitatory cells. One important thing to note is that the pattern which had been associated here, the visual pattern in the visual cortex, is partly, not entirely, but partly reproduced by the ignition of the cell assembly, which is a desirable phenomenon, of course: pattern reconstruction. And after the presentation, the cell assembly switches off due to the inhibition mechanism. Now I have a short video showing you what happens if you present a stimulus here again, but not only for a short time: if you keep it present there as input to the auditory cortex, you will see the cell assembly igniting as soon as this button is clicked. So this is what happens. We use a large amount of noise in this simulation just to make it a bit more realistic. And one interesting thing is that the ignitions and switchings-off of the cell assembly spontaneously happen at gamma-range frequencies, which is a nice property. OK, so what can this model do?
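The ignition and pattern-completion behaviour just described can be demonstrated with a minimal attractor-style toy network (a drastic simplification, not the 12-area model): a briefly cued fragment of a stored assembly recruits the whole assembly via recurrent feedback, and the activity then self-sustains with no input. Adaptation and the inhibition that eventually switches the assembly off in the real model are omitted here.

```python
import numpy as np

n = 60                                   # three toy "areas" of 20 cells each
pattern = np.zeros(n)
pattern[[2, 7, 11, 24, 29, 33, 45, 50, 55]] = 1.0   # one stored cell assembly

# Hebbian weights storing the assembly, no self-connections.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)

def step(x, inp, theta=1.5):
    """One update of binary threshold dynamics under external input."""
    return ((W @ x + inp) > theta).astype(float)

# Briefly stimulate only the "auditory" third (cells 0-19).
cue = np.zeros(n)
cue[[2, 7, 11]] = 5.0

x = np.zeros(n)
x = step(x, cue)             # input present: the cued cells fire
x = step(x, np.zeros(n))     # input gone: ignition spreads through the circuit
x = step(x, np.zeros(n))     # ...and the full assembly self-sustains
print(np.array_equal(x, pattern))  # → True: full pattern completed
```

The "visual" and "motor" thirds of the pattern are recovered purely through the learned recurrent weights, the toy analogue of the partial reconstruction of the visual pattern seen in the simulation snapshots.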
Well, using this or previous versions of the model, we have explained quite a range of different data, including experimental data on automatic auditory change detection, effects of attention on the brain responses to words and pseudo-words, and, as I mentioned, different oscillatory responses to words and pseudo-words. Also the spontaneous emergence of the topography of memory cells, and the emergence, in this case, of what I would call free decisions to speak and act. The model explains this by simple spontaneous ignition of these cell assemblies: we can explain why certain areas, in particular inferior prefrontal cortex and posterior superior temporal cortex, are those where activity happens prior to speech production. So in a way, I'd like to say that this model, although it had been designed for explaining language processing and language acquisition, goes beyond its original design. But now let me go back to the theme of this talk, which is meaning and how we learn the meaning of words. One thing which the model and our results help explain is the question of why there are areas in the brain which care about particular semantic categories. There are areas, for example the motor cortex, which, as I mentioned, respond specifically to categories of action words and not to visual words. And vice versa, there are areas around the visual cortex which exhibit activity when we hear a word which has a strong visual component, such as sun, for example; whereas if we hear run, we would have activation in the motor areas instead. On the other hand, there are other areas in the brain, typically known as semantic hubs, which seem to be active regardless of which semantic category is being perceived or processed.
These semantic hubs, and there are different ones, but here, for example, inferior prefrontal cortex, and also the anterior temporal pole and posterior middle temporal gyrus, seem to care for general semantics, for general meaning. So how can we explain the presence of both of these types of cortical areas? Well, that's what a dissociation we found in the model shows. Basically, the cell assemblies exhibit a double dissociation in their distribution. If we look at the visual cortical areas, V1 and temporo-occipital areas, you'll see that in the case of an object word the cell assemblies reach those areas, but they don't show much presence there in the case of an action word. And vice versa, if we look at the cortical regions where the motor cortex is, so around M1, dorsal M1, and premotor cortex, there is almost no object-word cell assembly activity, but there is a very strong cell assembly presence for the action words. This basically explains, if you want, why there are category-specific cortices. But now, one interesting point is what happens in these semantic hubs. If you look at these two types of cell assemblies, you'll see that both types of circuits, for object and for action words, coexist within these four central areas, specifically within the semantic hubs in the anterior temporal lobe and the prefrontal cortex. And why is that the case? Well, obviously because of the connectivity. The connectivity allows the cell assemblies to extend freely through these connecting hubs, also into areas which are not strictly needed for connecting the patterns. If we're talking, for example, about an action word linking this pattern to this one and this one, the cell assembly does not necessarily need to go through this hub.
However, because of the connectivity and because of the properties of cell assembly growth, which is spontaneous and driven by Hebbian learning, the cell assembly spontaneously grows into that area as well. And so the co-presence of equally strong circuits for both categories in these semantic hubs gives the impression that these hubs are category-general, whereas what we are suggesting is that what we actually see is the co-presence of different category-specific circuits in the same cortical areas. Okay, a very brief mention that we tested this model in an fMRI experiment. We showed participants pictures of animals, so visual objects, and pictures of hand actions, and we associated them with novel words. So we tried to teach participants the meaning of novel words by pairing the sound of these words with either pictures of animals or pictures of hand actions. When we looked at the brain responses, these are the areas which responded more strongly to the animal or object pictures. And if we then present the participants with the words that they learned, this is what we find. If you compare the two, there is a reactivation in the visual cortex: the visual cortex, which represents the pictures when these are perceived, is now reactivated by the perception of the learned word. And what is interesting is that there is nothing there for the action words; the action words produce no activity in the visual cortex. So this strongly confirms the model prediction and the theory. In summary, I have presented a neuromechanistic model which attempts to simulate word learning and semantic grounding solely on the basis of three elements: sensorimotor experience, Hebbian learning mechanisms, and connectivity structure. These three components lead to specific cell assembly distributions, which in turn explain the emergence in the cortex of two different types of areas: category-specific areas and general semantic hubs.
So category-specific areas are basically what embodied theories, embodied cognition, predict, whereas general semantic hubs, if you accept the term, are disembodied. The model explains both. Maybe I'll stop here and leave this slide up for you. Okay, thank you.

Okay, so, Hebbian learning that, in this case, does not use a homeostatic mechanism. I think what you're asking about is network self-regulatory mechanisms. Global inhibition is, I think, the closest thing; I don't know if you would call global inhibition a homeostatic mechanism, but it basically prevents the network from exploding, from having epileptic behaviour, or from dying, from having no activity at all. But there is no other homeostatic mechanism in the synaptic plasticity itself. So global inhibition, I think, is probably the answer to your first question. And the other question, so it depends what you mean, which phenomenon you mean, if we still want to have this distinction between category-specific areas, right. So, spiking is definitely not necessary, because we have a version of the model which works perfectly without it; actually, the first version was a graded-response model. We introduced spiking to make the model more realistic, but it's not necessary. The connectivity, basically the replication of the cortical connectivity, I think is crucial. If you don't have a network which reflects the existing structure of the cortical areas that we are simulating, then of course we cannot claim that this structure leads to such distributions. If we change the connectivity, the structure of the network, then of course the distribution of the cell assemblies would change. And one thing which I should mention as also important is the presence of noise, neuronal noise. That's actually why the cell assembly circuits do not extend into the, so to speak, silent branch.
So here, when we present the three patterns, the cell assembly extends here; it grows, but it does not go there. And the only reason why it doesn't go there is that there is noise there. If there were no noise, if that area were silent, the cell assembly would spontaneously grow into that branch as well. And in fact this is in line with data from congenitally blind people, who exhibit responses in the visual cortex when they hear words. That could be explained by the spontaneous growth of these cell assembly circuits into areas which are silent, in which growth is not, so to speak, suppressed by noise.

Well, there would be more questions for you, but we are sort of out of time. So let's thank the speaker again.