Fatma. OK, thanks, Kevin. Thanks, everybody, for coming. This is exciting. So yeah, I'm Fatma Imamoglu, and I am a member of the Gallant Lab at the Helen Wills Neuroscience Institute. Today I will talk to you about semantic representation during reading and listening.

A recent study, published in Nature just last week by Alex Huth, another postdoc in our lab, showed a very detailed semantic map of the brain. In this study, participants listened to natural stories while their brain activity was recorded using functional magnetic resonance imaging. On the screen here, you saw a movie of one participant's brain, where the semantic concepts were mapped onto the cortical surface of that subject. Different colors on this map mean different semantic concepts, and I will go into more detail on this, so you will see more of these maps. First, I want to orient you in this map. This is the left hemisphere, and this is the right hemisphere. Here is the back of the brain, which is the early visual cortex, and here is the prefrontal cortex. And here the temporal and parietal lobes run into each other.

Based on this knowledge, we wanted to know whether semantic information is represented similarly or differently across modalities. So we went on and created another experiment where subjects both read and listened to natural stories, several hours of them, in the MR scanner. The presentation time of each written word was matched to the duration of the corresponding spoken word. So let's see if this works, hopefully it does. [Audio plays: "I reached over and secretly undid my seat belt."] Did anyone hear it? I don't actually know where the sound is coming from. And this is how you would read it: "I reached over and secretly undid my seat belt." Let's go on.

Once we collect the brain data, we then go on and extract semantic features from the stimulus. These semantic features were based on a co-occurrence model, which basically asks: for a given word in the stimulus, what is the probability of that word co-occurring with each word in a given lexicon? Now we have a model of the stimulus and we have the brain activity, so we can go ahead and estimate model weights using regularized ridge regression, trying to find the best mapping between the semantic feature space and the brain activity.

To validate our model, we collect completely new brain data for reading and listening. We take the estimated model weights from the reading experiment and try to predict the brain responses recorded while subjects were listening to the stories. After we predict the responses and collect the new listening data, we compute a correlation coefficient between these two data sets and map it onto the cortical surface as model performance. And we do this across modalities, for listening and then for reading as well.

When we do this, we see that different brain areas across the temporal, parietal, and prefrontal cortex are well represented by this semantic model when the listening data predicts the reading data. And when we do this the other way around, where the reading data predicts the listening data, we see almost exactly the same correlation maps. Here the highest correlation is 0.5.

So when we do this, we still want to, sorry, maybe I should step back one slide. Now we have the prediction correlations, but we still want to know what these voxels mean, what these predictions mean.
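To make the pipeline concrete, here is a minimal sketch in Python of the three steps just described: building word co-occurrence features, fitting voxelwise ridge regression, and scoring cross-modal predictions with a per-voxel correlation coefficient. This is an illustration under simplifying assumptions, not the lab's actual code; all names (cooc_counts, X_read, Y_read, and so on) are hypothetical.

```python
# Minimal sketch of the encoding-model pipeline described above.
# Hypothetical data structure: cooc_counts maps each word to a dict of
# raw co-occurrence counts with every lexicon word.
import numpy as np

def cooccurrence_features(words, lexicon, cooc_counts):
    """One feature vector per stimulus word: P(lexicon word | stimulus word)."""
    feats = np.zeros((len(words), len(lexicon)))
    for i, w in enumerate(words):
        counts = np.array([cooc_counts.get(w, {}).get(l, 0.0) for l in lexicon])
        total = counts.sum()
        if total > 0:
            feats[i] = counts / total  # normalize counts to probabilities
    return feats

def fit_ridge(X, Y, alpha):
    """Closed-form ridge regression, one weight vector per voxel.
    X: (n_timepoints, n_features) semantic features
    Y: (n_timepoints, n_voxels) BOLD responses
    """
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

def voxelwise_correlation(Y_true, Y_pred):
    """Pearson correlation between measured and predicted responses, per voxel."""
    Yt = Y_true - Y_true.mean(axis=0)
    Yp = Y_pred - Y_pred.mean(axis=0)
    return (Yt * Yp).sum(axis=0) / np.sqrt(
        (Yt ** 2).sum(axis=0) * (Yp ** 2).sum(axis=0))

# Cross-modal validation: weights estimated from the reading experiment
# predict completely new listening data (and vice versa).
# X_read, Y_read, X_listen_new, Y_listen_new are hypothetical arrays.
# W = fit_ridge(X_read, Y_read, alpha=100.0)   # (n_features, n_voxels)
# r = voxelwise_correlation(Y_listen_new, X_listen_new @ W)
# r is then mapped onto the cortical surface as model performance.
```

In a real analysis the regularization strength would be chosen per voxel by cross-validation, and delayed copies of the features would typically be included to account for the hemodynamic response; the closed-form solve here is just the simplest illustration.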
So what we can do is take the model weights and apply a principal component analysis to them, to figure out which components best span the semantic feature space. When we do this, we get three main components. The first PC, depicted in red here, represents human and social concepts. The second component, depicted in green, is more about sensory and numeric concepts. And the third component, depicted in blue, is for more abstract representations.

We can then go ahead and look, for each separate PC, at how the reading and the listening maps are correlated. The correlation is very high for the first three PCs, which I've mapped onto the cortical surface here, and also for the other two. So we can conclude that semantic representation is very similar across the human brain and, importantly, across the different modalities. And this holds across the temporal cortex, parietal cortex, and prefrontal cortex.

This study is actually also a very good example of reproducible research, which is my main interest at BIDS as a data science fellow. So with that, I would like to thank my lab, the Gallant Lab, and Alexander Huth, my collaborator on this project, and thank you all for your attention. Thanks, Pat.
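For completeness, here is a similarly hedged sketch of the PCA step described above: recovering the main semantic components from the fitted weights and correlating the per-PC cortical maps between reading and listening. W_read and W_listen are assumed to be (features x voxels) weight matrices like those in the earlier sketch; computing one shared set of PCs from the combined weights sidesteps the sign and ordering ambiguity of two independent PCAs. This is an illustration, not the study's exact procedure.

```python
# Hedged sketch of the PCA analysis; all variable names are hypothetical.
import numpy as np

def semantic_pcs(W, n_components=3):
    """PCA over voxels, treating each voxel's weight vector as one sample.
    Returns PC directions in semantic feature space: (n_features, n_components)."""
    Wc = W - W.mean(axis=1, keepdims=True)      # center features across voxels
    U, S, Vt = np.linalg.svd(Wc, full_matrices=False)
    return U[:, :n_components]

# One shared set of PCs from the combined weights, then project each
# modality's weights onto them to get comparable per-voxel maps.
# pcs = semantic_pcs(np.hstack([W_read, W_listen]))
# map_read = pcs.T @ W_read       # (3, n_voxels): PC value per voxel
# map_listen = pcs.T @ W_listen
# for k in range(3):
#     r = np.corrcoef(map_read[k], map_listen[k])[0, 1]
#     print(f"PC{k+1} reading-vs-listening map correlation: {r:.2f}")
```

The per-PC correlation across voxels is what the talk refers to as the similarity of the reading and listening maps for each component.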