So thanks for the overwhelming introduction, Giorgio. So yesterday we probably realized that brains come in all shades and colors, and today we've also been introduced to the scale, the complexity, and the size by Jeff. So today is the day of the color blue. I'll briefly provide a flavor of one of the manifold ways of reconstructing and modeling a piece of brain tissue. This is what we've been doing in the Blue Brain Project: reconstructing and simulating a piece of the neocortex. I'd like to start by quoting Christof Koch and Clay Reid. Basically, they said that anyone at a large neuroscience gathering is really struck by the pace of discovery: there are thousands of neuroscientists running off in all directions. It's like a scientific big bang. This independence in neuroscience is really necessary, but it has prevented neuroscience from entering a more mature phase, one that could involve developing common standards and collaborative projects. We heard a bit about common standards yesterday. And of course, neurophysiologists are more likely to share each other's toothbrushes than to share data and models. Okay, so this is how I see the field: fragmented. We're all these different neuroscientists, and we like doing our own little things in our own niche areas. But of course, to understand the brain, it's imperative that we bring all these approaches together. All approaches are equally necessary; it's joining hands across multiple disciplines that will help us understand the brain. One such integrative approach is what we've been developing in the Blue Brain Project.
So in a nutshell, what this approach entails is the following: we obtain sparse experimental data across different levels of biological organization (neurons, synapses, connectivity, microcircuit physiology), identify certain inductive principles, rules of organization, across these different levels of biological data, and then use these rules and principles to constrain algorithms that build in silico models of these different biological layers. We then integrate these component models into a dense reconstruction. This whole process is basically akin to an inverse problem. And of course, it's also important to ensure that the models at different levels are actually validated, to make sure they're consistent with what's seen in biology. So this is a more detailed view of the so-called iterative, predictive reconstruction and experimentation technique that we've been developing in Lausanne. We gather experimental data, use these to build unifying brain models, simulate these models, analyze and visualize their outcomes, and validate the models, as I mentioned before. So this is basically an endless loop: you can go on doing this and refine your model forever. It's never-ending, so to speak. The reconstruction workflow that we've developed in the Blue Brain Project entails these different steps. The very first step was to actually map out the diversity of morphological types in the neocortical microcircuit. If I were to give a metaphor: imagine you were tasked with squishing the complexity of Amazonia onto a pinhead. It's something like that. In Amazonia, you have flora of all shapes and sizes: apple trees, orange trees, mango trees, whatever. Similarly, in the neocortical microcircuit, you have about 55 different morphological types.
So there are Martinotti cells, basket cells, chandelier cells, and pyramidal cells, of course, from layers 2/3 to 6. Experimentally, we mapped out that across the six layers of the neocortical microcircuit there are about 13 different excitatory morphological types and 42 different inhibitory morphological types. These are the different trees of different shapes and sizes in Amazonia. And then, having mapped out the diversity of morphological types, the next step was to actually clone these morphologies in sufficient numbers. Of course, even with a lifetime of experiments, there's no way we would be able to obtain unique morphological reconstructions for all the thousands of neurons in even a tiny part of the brain. So we had to clone these morphologies in sufficient numbers, so that we had a representative set of the 55 different morphological types. Now, having mapped the different morphological types, the next step was to map out the dimensions of the circuit in which to populate these morphologies. This process entailed two steps. The first was to map out the thickness of a prototypical neocortical microcircuit, which is roughly two millimeters in height across the six layers, and then to map out the individual layer thicknesses from 1 to 6, as well as to measure the neuronal densities across these layers. Having mapped out the thickness of all six layers and the neuronal densities across the layers, we were then able to estimate the number of neurons in each layer, as well as the morphological composition of each layer. We know, for example, that layer 1 only has inhibitory neurons, of six different types. So what are their proportions? Neuron type A in layer 1 is about 30 percent, type B is about 50 percent, and so on and so forth for all the other layers.
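The bookkeeping behind this step (layer thickness times density gives a neuron count, and measured type fractions split that count into morphological types) can be sketched in a few lines. All numbers below are placeholders chosen for illustration, not the measured values from the reconstruction.

```python
# Illustrative sketch: derive per-layer neuron counts from layer thickness,
# cross-sectional area, and neuronal density, then split a layer's count
# into morphological types by measured fractions.
# Every value here is a made-up placeholder, not experimental data.

CIRCUIT_AREA_MM2 = 0.09  # hypothetical cross-sectional area of the microcircuit

# (thickness in mm, neuron density in neurons/mm^3) per layer -- placeholders
layers = {
    "L1":   (0.15, 20_000),
    "L2/3": (0.50, 80_000),
    "L4":   (0.20, 120_000),
    "L5":   (0.50, 60_000),
    "L6":   (0.65, 90_000),
}

# fractions of each morphological type within layer 1 -- placeholders
mtype_fractions_l1 = {"type_A": 0.3, "type_B": 0.5, "other": 0.2}

def neuron_counts(layers, area_mm2):
    """Neuron count per layer = thickness * area * density."""
    return {name: round(t * area_mm2 * rho) for name, (t, rho) in layers.items()}

counts = neuron_counts(layers, CIRCUIT_AREA_MM2)
l1_by_type = {m: round(f * counts["L1"]) for m, f in mtype_fractions_l1.items()}
print(counts["L1"], l1_by_type)
```

The same composition table, extended to all layers and all 55 types, is what fixes how many clones of each morphology get placed in the circuit volume.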
So now, having mapped out the diversity and populated the network volume with these morphologies, the next step was to actually connect the neurons. Neurons, as we all know, are promiscuous beings. They don't like to be solitary; they like to make contact and connect to each other. The connectivity was derived algorithmically by dumping all these morphologies, so to speak, into a bucket (that's the network volume), and then running an algorithm on a supercomputer that detected all possible axo-dendritic appositions between all neurons. You go to each neuron and ask the axon of that neuron: what are the appositions you're forming with the dendrites of all the postsynaptic neurons? This way we determined a so-called structural map of connectivity. Of course, we know that all-to-all connectivity is not possible, so only a fraction of these structural contacts are actually converted into functional synapses. We have some experimental data on the structural-to-functional proportion of synapses, which we used to constrain this algorithm and develop a blueprint of connectivity in the microcircuit. So this was about the anatomy. The physiology is, of course, equally mind-numbing, as probably all of us know. Just as there's the staggering complexity of morphological types, there's also complexity in terms of the diversity of electrical types. We mapped out experimentally that there are about 11 different electrical types in a typical neocortical microcircuit: there are stuttering firing patterns, accommodating firing patterns, fast-spiking patterns, etc. Then, having determined the morphological types and the electrical types, we mapped out the proportions in which each morphological type expresses the different electrical types, giving what we call a morpho-electrical type. The next step was to map out the synaptic diversity.
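The touch-detection idea described above (asking each axon which dendrites it comes close to) can be sketched with point clouds standing in for the arbors. The real algorithm operates on full 3D segment geometry on a supercomputer; this toy version, with a made-up distance threshold, only shows the core test.

```python
# Minimal sketch of axo-dendritic touch detection: treat the axon and the
# dendrites as point clouds of segment midpoints, and flag a potential
# apposition wherever an axonal point comes within a threshold distance of
# a dendritic point. The threshold value is a hypothetical placeholder.
import math

TOUCH_DISTANCE = 2.0  # um, hypothetical apposition threshold

def appositions(axon_pts, dendrite_pts, threshold=TOUCH_DISTANCE):
    """Return (axon_idx, dendrite_idx) pairs closer than threshold."""
    touches = []
    for i, a in enumerate(axon_pts):
        for j, d in enumerate(dendrite_pts):
            if math.dist(a, d) < threshold:
                touches.append((i, j))
    return touches

# toy example: an axon running along the z-axis past two dendritic points,
# one nearby and one far away
axon = [(0.0, 0.0, float(z)) for z in range(5)]
dendrite = [(1.0, 0.0, 2.0), (10.0, 0.0, 2.0)]
print(appositions(axon, dendrite))  # → [(1, 0), (2, 0), (3, 0)]
```

In the actual workflow this all-pairs test is the expensive structural step; everything downstream (pruning, functional synapses) filters the apposition list it produces.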
So we connected the neurons in this step, but then, what's the language that these neurons use to communicate with each other? That we measured experimentally again, and we found that there are about six different synapse types in the neocortical microcircuit: three excitatory and three inhibitory types, based on their release probabilities, synaptic dynamics, peak conductances, etc. And now, having mapped out the anatomy and then the physiology, lo and behold, we generated a virtual tissue that we were able to use to simulate, experiment on, and look at the emergent dynamics of this reconstructed neocortical microcircuit. Just to give you an idea of the kind of work that went into generating this small little piece of tissue in silico: this is really the output of about 15 years of experiments. Thousands of neurons were recorded and labeled, many thousands classified, and several thousand recordings of electrical and synaptic types all went in. So this is the digital reconstruction in its gory detail. As I said, it's about two millimeters thick and about 0.3 cubic millimeters in volume; it has 55 different morphological types, 11 different electrical types, 207 morpho-electrical types, plus the synaptic anatomy and the synaptic physiology. Okay, so this is a depiction of the morphological diversity in greater detail, the diversity that I spoke about earlier. As you see, layer 1 has six different neuron types, which are all inhibitory. The pyramidal morphological types only start from layer 2/3 and run all the way to layer 6, and the complexity of these pyramidal morphologies increases as you go from layer 2/3 to layer 6. The inhibitory morphological types are the same in terms of types from layer 2/3 to layer 6: Martinotti cells, bitufted cells, double-bouquet cells, bipolar cells, neurogliaform cells, basket cells, and chandelier cells.
So how did we actually reconstruct the densities and map the excitatory and inhibitory neuron fractions and the dimensions? This is a breakdown of the layer-wise densities and the numbers of neurons from layer 2/3 to layer 6, and these are the individual layer thicknesses that we mapped out experimentally. And this is a depiction of the proportion of excitatory and inhibitory neurons across the layers. As you see here, layer 1 is 100% inhibitory, and across layers 2/3 to 6 there's roughly a proportion of 86% excitatory and about 14% inhibitory neurons. This is a breakdown of the morphological composition of neurons across the layers, and a breakdown of the different pyramidal types, which increase in complexity as you go from layer 2/3 to layer 6. Okay, so to derive the connectivity that I mentioned earlier, we came up with a four-step, three-rule algorithm. I won't really go into the details of this algorithm; it's all published, and you can look it up. The first step was to identify, as I said before, all the axonal appositions of a single neuron: all the possible touches that the axon of a single neuron forms with all the dendrites surrounding it. The next step was to prune a number of these synapses, in what we call general pruning, based on biological data, followed by a so-called multi-synapse pruning. We know that synaptic connections in the brain are mediated by multiple contacts; it's pretty uncommon to have a synaptic connection with just one contact. On average in the neocortex, excitatory connections are mediated by about five contacts and inhibitory connections by about ten contacts. So this step identified and pruned many of these axonal appositions such that we were able to match the distribution profile against experimental data on a pathway-specific basis.
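The multi-synapse pruning step can be caricatured as follows: group appositions per (pre, post) neuron pair, then remove whole connections until the surviving contacts-per-connection statistics match an experimental target. This toy greedy version, with a made-up target mean, is a simplified stand-in for the published pathway-specific rules.

```python
# Toy version of multi-synapse pruning: given the number of detected
# contacts for each (pre, post) connection, drop whole low-contact
# connections until the mean contacts-per-connection reaches a target
# drawn from experiment. The target value below is illustrative only.

def prune_to_target_mean(contacts_per_connection, target_mean):
    """Greedily drop the connections with the fewest contacts until the
    mean contacts-per-connection reaches the target."""
    survivors = sorted(contacts_per_connection)  # ascending contact counts
    while len(survivors) > 1 and sum(survivors) / len(survivors) < target_mean:
        survivors.pop(0)  # remove the fewest-contact connection
    return survivors

# toy data: contact counts per detected connection; target mean of 5 contacts,
# in the spirit of the ~5 contacts per excitatory connection quoted above
raw = [1, 1, 2, 3, 5, 6, 7, 8]
print(prune_to_target_mean(raw, 5.0))  # → [2, 3, 5, 6, 7, 8]
```

The published algorithm matches the full distribution of contacts per connection, pathway by pathway, rather than just the mean, but the direction of the operation is the same: single-contact connections are the first to go.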
And the final step was to prune this further, to make room for structural plasticity and the reconfiguration of synaptic contacts. In this way, using this four-step, three-rule algorithm, we were able to predict that there are about 37 million intrinsic synapses, formed just by the overlap of intrinsic axons and dendrites in the neocortical microcircuit: 27 million of these are excitatory and 9 million are inhibitory. So this is quite interesting. Here is more detail on how we predicted the intrinsic synapses. As I mentioned before, there are 55 different morphological types, so 55 squared gives 3025 possible connection types. We know, by virtue of axo-dendritic geometry, that not all of these connections are actually viable; only a fraction are. Out of the 3025 possible connections, just about 2000 are actually viable. And for these 2000 connections, we have experimental data for only about 1%: all the experimental studies out there have probably characterized about 20 or 25 of these 2000 connections. So we really had to come up with rules to extrapolate the sparse experimental data to fill up this whole matrix of 55-by-55 possible connection types. I guess I have just about five minutes, and there's still quite a bit to show; I did not even get to the simulation part, which I really wanted to show. Okay, so in a nutshell, what we're trying to do is build a pipeline for integrative neuroscience as part of the so-called simulation platform of the Human Brain Project. We're integrating the workflow that we've developed in the Blue Brain Project, to be made available to the entire world: the tools, the workflow, the algorithms, everything, through the Brain Simulation Platform of the Human Brain Project. And of course, we're also making all the tools available. For example, we recently brought out a tool called BluePyOpt.
That's basically a Python optimizer, available on GitHub, that enables data-driven modeling of single-neuron physiology; it was also recently published. And of course, more importantly, this is really The Wall, for any Pink Floyd fans out there. We, I believe, are taking a so-called middle-out approach. This is where data-driven, detailed biological reconstructions stand as of today. Of course, we're always faced with a barrage of criticisms. There's one camp that says we don't have enough detail: oh, it's too premature to talk about an in silico reconstruction, the connectome is still going to take a decade, so what are you guys doing, are you out of your minds? And then there's the other camp that says: oh, you guys are crazy, that's too much detail, you have no hypothesis, this is not science, you're just adding details willy-nilly, you don't know what you're doing. And of course, many people also tell us: I've learned nothing from what you do, so it's probably meaningless. But again, to contrast the detail on one end of the spectrum with simplified models on the other end: what we're actually trying to do is also come up with an informed procedure for moving from complex models to simple models. We've come up with a whole process to systematically collapse the complexity of what we're building into a simple point-neuron network. This is more detail on that process, and this is a proof-of-concept validation showing that the emergent dynamics we see in the complex network are, following the systematic simplification, also seen in the simple network. Right, so, a set of general conclusions. What we see is that interdependencies in experimental data make a dense in silico reconstruction of a microcircuit of neurons possible.
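For concreteness, the "point neuron" endpoint of the simplification procedure mentioned above is typically a leaky integrate-and-fire unit: all morphology is discarded and only a single membrane voltage remains. The parameter values below are generic textbook choices, not the ones derived by the collapsing procedure.

```python
# A leaky integrate-and-fire point neuron: Euler integration of
#   dV/dt = (-(V - v_rest) + R * I) / tau,
# with a spike emitted and V reset whenever it crosses threshold.
# All parameter values are generic illustrative defaults.

def lif_simulate(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Return the time steps at which the neuron spikes."""
    v, spikes = v_rest, []
    for step, i_ext in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau
        if v >= v_thresh:
            spikes.append(step)
            v = v_reset
    return spikes

# constant suprathreshold drive produces regular spiking
spikes = lif_simulate([2.0] * 1000)
print(len(spikes) > 0)  # True: the neuron fires repeatedly
```

Collapsing a morphologically detailed model onto something this simple means fitting these few parameters so that the point neuron reproduces the detailed model's firing responses.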
The simulations reproduce in vitro and in vivo experiments, which I couldn't talk about much, without any parameter tuning. And another conclusion from the study is that the neocortex reconfigures itself to support diverse information-processing strategies based on extracellular calcium levels, which I also couldn't really talk about. So yeah, I'm happy to take any questions.

That was wonderful. Great to see all this work. I think it's essential to put all these biological details together and, by synthesis, see what they entail functionally. I'm coming more from a computational perspective, where the phenomenology that I would like to understand is brain computation and behavior, essentially. So I think one thing we have to achieve is combining the top-down approach, which starts from the overall function of the system and optimizes that function, as people do in AI, with the more bottom-up, biologically driven neural network modeling. One thing I would like to hear your take on is: what is the phenomenology that you're trying to explain? For me, the phenomenology is the computation and the behavior, and I'm interested in biological details to the extent that they help me understand the computation. So it's an empirical question for me whether spikes even matter. I think they do, that's my intuition, but I don't know that yet. And all the neuron types might all matter; it might be that there's no level at which we can say we can abstract away from this. But how do you think about this, and what is the phenomenology that you're trying to explain with your modeling?

First of all, yeah, I think that spikes matter. I think so too. Well, the phenomenology... this is more of a philosophical question. All we're trying to do is really play a biological imitation game.
So we're studying a biological system across different levels of organization, looking at how it's all structured, and then copying this organization in a computer. One working hypothesis is that if you meticulously copy the biological structure into a model, then the emergent dynamics you see will be consistent with those of the biological system. So in a way, this is really building the structure from the bottom up and then mapping the kind of emergent dynamics that comes out. In that sense, it's a very broad phenomenology. But of course, we do capture certain other phenomenological details at these different levels of modeling. For example, the phenomenology of synaptic transmission is captured by assuming that at every incoming spike there's a fraction of the resources available at the synapse to be consumed, and depending on the kind of synapse, some fraction of resources is consumed and some isn't, and so on and so forth. I'm not sure if I entirely answered your question, but we could, of course, discuss later.
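The resource-based synapse phenomenology just described is in the spirit of the Tsodyks-Markram model: each presynaptic spike consumes a fraction of the currently available resources, that consumed amount sets the response, and resources recover between spikes. Here is a minimal sketch of the depressing case, with illustrative parameter values.

```python
# Sketch of a resource-based (Tsodyks-Markram-style) depressing synapse:
# each spike consumes a fraction u of the available resources r; the
# consumed amount is the synaptic efficacy for that spike; r recovers
# toward 1 with time constant tau_rec. Parameter values are illustrative.
import math

def tm_depressing_synapse(spike_times, u=0.5, tau_rec=800.0):
    """Return per-spike efficacies for a depressing synapse (times in ms)."""
    r, last_t, efficacies = 1.0, None, []
    for t in spike_times:
        if last_t is not None:  # resources recover between spikes
            r = 1.0 - (1.0 - r) * math.exp(-(t - last_t) / tau_rec)
        used = u * r            # fraction of available resources consumed
        efficacies.append(used)
        r -= used
        last_t = t
    return efficacies

# a regular 50 Hz train shows progressive depression
eff = tm_depressing_synapse([0.0, 20.0, 40.0, 60.0, 80.0])
print(eff[0] > eff[1] > eff[2])  # True: successive responses get weaker
```

Swapping the parameters (and adding a facilitation variable) yields the other synaptic dynamics classes; fitting u, tau_rec, and the peak conductance per pathway is what distinguishes the six synapse types mentioned earlier.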