Thanks very much for having me here; it's a pleasure and an honor to be speaking here. I will be talking about a rather general approach for computationally modeling neural development, and I will start with some lines of collaborative work I've been involved in that have been published. Because of time limitations I will not be able to go into much detail, but since the work is already published, most of the detailed information is available online.

The basic scientific question underlying the modeling of neural development, or of biological development in general, is how genetic encoding, the genetic rules, interacts with the local external environment of cells. There are different kinds of influences in the local neighborhood of a cell: physical forces, chemical substances, electrical activity inputs, and so on, and these feed back on gene expression. This dynamic interaction evolves during development, and starting from a single cell it can generate a structure as complex as the human brain.

If I want to model this, one crucial condition is that the model should be self-organizing in a biologically plausible way. The model should allow only for local information exchange: if the cells are agents, they are only allowed to interact with their local neighborhood. They can interact with other cells if they are close to one another, or with the extracellular environment, but there is no global orchestrator that tells the individual agents what they should do.

This sounds rather abstract, so let me show you an example of this approach. I've been working on modeling winner-take-all networks. Winner-take-all networks are interesting from a computational point of view because they are computationally very powerful; many studies use winner-take-all networks in some form to carry out computations, for example finite-state-machine computation. But they are also interesting from a biological point of view, because there is good reason to believe that a winner-take-all network is implemented in the cortical microcircuit, for example in the superficial layers of cortex. In an idealized way, the structure of a winner-take-all network looks like this: excitatory neurons, in red, excite each other in a neighborhood-like topology, and they also excite inhibitory neurons, which feed back onto the excitatory neurons (a minimal rate-based sketch of these dynamics follows below). However, as we know, reality is not like this idealized scheme: real neurons do not have such a clean structure in their connectivity and synaptic conductances. So one question that arises is: how could a winner-take-all network that approximates this connectivity scheme arise during cortical development in a biologically plausible way?

To address this question we have been using Cortex3D, a Java-based simulation framework for simulating the development of neural tissue, which allows one to take into account genetic rules and physical mechanisms in 3D space.
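To make the idealized scheme concrete, here is a minimal sketch of such dynamics with linear-threshold rate neurons, the same neuron approximation we use later for the grown networks. All parameter values (network size, couplings, neighborhood width) are illustrative assumptions, not values from our simulations:

```python
import numpy as np

# Soft winner-take-all with linear-threshold rate neurons: excitatory units
# excite their neighbors, one inhibitory unit pools their activity and feeds
# back inhibition. All parameter values here are illustrative assumptions.
N = 50
sigma, w_exc = 3.0, 0.25       # neighborhood width, excitatory coupling
w_ei, w_ie = 1.0, 2.0          # exc -> inh and inh -> exc coupling
dt = 0.1

idx = np.arange(N)
dist = idx[:, None] - idx[None, :]
W = w_exc * np.exp(-dist**2 / (2 * sigma**2))   # neighborhood-like topology
np.fill_diagonal(W, 0.0)

rng = np.random.default_rng(0)
x = 0.2 * rng.random(N)        # noisy background input
x[20:26] += np.hanning(6)      # plus a "hill" of activity

r, r_inh = np.zeros(N), 0.0
for _ in range(2000):
    drive = x + W @ r - w_ie * r_inh
    r += dt * (-r + np.maximum(drive, 0.0))             # linear threshold
    r_inh += dt * (-r_inh + max(w_ei * r.mean(), 0.0))

gain = r / np.maximum(x, 1e-9)                          # output / input
print("gain at the hill:", gain[21:25].round(2))        # typically above 1
print("gain elsewhere:", np.median(np.delete(gain, idx[20:26])).round(2))
```

With couplings in this regime, the units under the input hill tend to end up with a gain above 1 while units receiving only noise are suppressed toward 0, which is the soft winner-take-all behavior I will come back to later.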
Here I show you an example of a simulation. The colors are not very clearly visible, but there are two different types of neurons, excitatory and inhibitory, which are initially randomly scattered, and they grow out axons and dendrites that follow certain developmental rules. In this case the rule is that an axon grows out and can sense a chemical in the extracellular space, secreted by the cell bodies of the potential targets it is aiming for. Whenever the concentration falls below a given threshold, the axon retracts and grows out in a slightly different direction; whenever the color is green, the axon is in a retracting state. So this rule makes the outgrowth controlled and aimed at the potential targets (a toy sketch of this rule follows below).

You can clearly see that we started from a very simple configuration, just randomly scattered excitatory and inhibitory neurons, and after some time a rather complex structure emerges. We also implemented the formation of synapses, the establishment of connectivity: whenever an axon and a dendrite come close to one another, there is the possibility for a synapse to form.

We have simulated larger networks and compared the connectivity they generate to experimental data from this publication on cat visual cortex. Indeed, the connectivity we obtain is proportional to what the experimental observations suggest. These are the different types of synapses, excitatory and inhibitory synapses onto single excitatory or inhibitory neurons, and this is the average number of each type. As you can see, the statistics are proportional to what is measured in the experiments; the absolute values could not yet be reached because of computational limitations. So the structure of the network gets into the right regime. The functional aspect of the network, however, is still unspecified, because the synaptic weights have not yet adapted, have not learned.

So we looked at how network functionality could self-organize in such grown networks. Here we have a network grown in Cortex3D that is not yet a winner-take-all network, and input neurons that initially connect randomly to this grown network. We simulate the electrical activity of the neurons with a rate-based approach, a linear-threshold-neuron approximation, and we simulate learning using the so-called BCM learning rule from 1982, a very well-established rule that is Hebbian in type and also homeostatic (a compact sketch of the update is given below). During the learning phase we applied patterned inputs to the input neurons, similar to retinal-wave activity: hills of activity that propagate along the input layer, which could be, for example, the thalamus or the layer 4 neurons that project to layers 2/3. During this phase of patterned electrical activity, all the synapses, from the input to the network but also within the network, adapt their weights according to the BCM learning rule.
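As a toy illustration of the outgrowth rule described above, here is a sketch in which a growth cone elongates while the sensed concentration stays above a threshold, and otherwise retracts one step and reorients randomly. The exponential concentration profile, step size, and threshold are hypothetical stand-ins; this is not the Cortex3D API, which is a Java framework with its own abstractions:

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([10.0, 5.0, 0.0])   # soma of a potential target cell

def concentration(p):
    # Hypothetical steady-state profile of the chemical secreted by the
    # target soma, decaying with distance (a stand-in for diffusion).
    return np.exp(-np.linalg.norm(p - target) / 5.0)

pos = np.zeros(3)                     # growth cone position
direction = np.array([1.0, 0.0, 0.0])
threshold, step = 0.05, 0.5

for _ in range(1000):
    trial = pos + step * direction
    if concentration(trial) >= threshold:
        pos = trial                   # elongate: concentration is high enough
    else:
        pos -= step * direction       # retract (the "green" state)
        direction += rng.normal(scale=0.4, size=3)
        direction /= np.linalg.norm(direction)  # slightly different direction
    if np.linalg.norm(pos - target) < 1.0:
        break                         # near a target: a synapse could form

print("final distance to target:", np.linalg.norm(pos - target).round(2))
```

Note that the threshold rule does not steer the axon directly: it confines the outgrowth to the region where the target chemical is concentrated, so that trial and error keeps the axon close to its potential targets.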
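The BCM rule itself can be written compactly: the weight change is Hebbian when the postsynaptic rate is above a sliding threshold and depressive below it, and the threshold tracks the recent average of the squared postsynaptic rate, which is what makes the rule homeostatic. Here is a minimal sketch, driven by a crude stand-in for the propagating hills of input activity; the rates, learning rate, and time constants are illustrative assumptions:

```python
import numpy as np

def bcm_step(w, x, y, theta, eta=5e-3, tau_theta=50.0):
    # dw    = eta * y * (y - theta) * x : Hebbian above the sliding threshold,
    #         depressive below it.
    # theta tracks the recent average of y**2, making the rule homeostatic.
    w = w + eta * y * (y - theta) * x
    theta = theta + (y**2 - theta) / tau_theta
    return np.clip(w, 0.0, None), theta       # keep weights nonnegative

# Toy usage: one linear-threshold neuron driven by a hill of activity that
# propagates along the input layer, a crude stand-in for retinal-wave-like
# patterned input.
n_inputs = 20
positions = np.arange(n_inputs)
w, theta = np.full(n_inputs, 0.1), 0.1
for t in range(20000):
    center = (0.05 * t) % n_inputs            # the hill sweeps along the layer
    x = np.exp(-0.5 * ((positions - center) / 2.0) ** 2)
    y = max(float(w @ x), 0.0)                # linear-threshold output rate
    w, theta = bcm_step(w, x, y, theta)

print("weights after learning:", np.round(w, 2))
# The weights tend to concentrate on a contiguous neighborhood of inputs,
# i.e. the neuron becomes selective for a part of the input topology.
```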
The only difference concerns synapses made onto inhibitory neurons: whether they are excitatory or inhibitory synapses, they do not follow a Hebbian-type learning rule in this scenario but a simple homeostatic rule, simply trying to reach a certain average firing rate; I will tell you later why this is relevant. We simulate with this patterned input until the network converges to a learned state, and then we look at different aspects of function in these networks.

Here you see at the top the individual neurons: the x-axis is the neuron ID, the y-axis is the input these neurons receive, and below it is the output in response to this input. There is a larger hill of activity for this population here, while these neurons receive rather noisy input and are not strongly activated in comparison. In the output of the converged network, you see that most of these neurons are strongly inhibited, while those here are not inhibited and some are even enhanced. So the gain, the output activity divided by the input, is smaller than 1 and sometimes 0 for some neurons, and can go up to about 2 for others. The signal is thus improved by the network, because the network has learned the topology from the input neurons: we now also have this neighborhood-like topology, thanks to the BCM-like learning for the excitatory neurons. I cannot go into much more detail here, but in this publication we also looked at other aspects of these networks, for example stability and robustness of activity, decorrelation, and so on, and they are all in agreement with winner-take-all functionality.

Another aspect we looked at is feature selectivity. If we train the network with different orientations, then, because the neurons learn the neighborhood, some become selective for certain orientations and others for other orientations; the excitatory neurons become orientation selective. Here you see three exemplary neurons that have learned to become selective for a certain orientation, while the inhibitory neurons are very broadly tuned, because they were not following the Hebbian-type BCM learning rule: their synapses were simply undergoing synaptic scaling. This is something that was also mentioned yesterday in the talk of Claudia Clopath, and it is in accordance with observations from mouse visual cortex, where inhibitory neurons are more broadly tuned than excitatory neurons. So this follows from our model by assigning different learning rules to excitatory and inhibitory neurons.

We also looked at orientation selectivity indices, which quantify how selective a neuron is for a certain orientation (one common way to compute such an index is sketched below). As you can see, after learning the excitatory neurons are very selective while the inhibitory neurons are broadly tuned. This is also in accordance with the hypothesis that spines on the dendrites allow for the compartmentalization of biochemical signals and hence for Hebbian-type learning; since we did not allow Hebbian-type learning for the inhibitory neurons, they do not need any spines in this scenario. Just to see whether it works, we also simulated the case where the inhibitory neurons follow the Hebbian-type rule, and we do get orientation-selective inhibitory neurons in that case, if they follow the same rule as the excitatory neurons.
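As a sketch of what such an index can look like, here is a common circular-variance-style definition; the publication may define its index differently:

```python
import numpy as np

def osi(rates, orientations_deg):
    # Circular-variance-style orientation selectivity index:
    #   OSI = |sum_k r_k * exp(2i * theta_k)| / sum_k r_k
    # 1 = perfectly selective, 0 = completely untuned. This is one common
    # definition; the publication may use a different index.
    theta = np.deg2rad(np.asarray(orientations_deg, dtype=float))
    rates = np.asarray(rates, dtype=float)
    return np.abs(np.sum(rates * np.exp(2j * theta))) / rates.sum()

orients = [0, 45, 90, 135]
print(osi([5.0, 1.0, 0.5, 1.0], orients))  # sharply tuned cell -> 0.60
print(osi([2.0, 1.8, 2.1, 1.9], orients))  # broadly tuned cell -> ~0.02
```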
So this was basically one line of collaborative work that I've been involved in, but as this approach is very general, one can apply it to many different kinds of phenomena in the brain. More recently I've been working on layer formation. In the retina, for example, there are different neuron types arranged in a layered fashion: two plexiform layers, where the axons and dendrites of the neurons are, and three nuclear layers, where the cell bodies are located. We have some preliminary simulations in which this laminated structure of the nuclear layers arises, and soon we will start working on a fellowship project in which we will include different kinds of data sets, such as gene expression data and morphological data, in order to build a very detailed model of how retinal neural structure can develop from a very simple initial setting.

This also has clinical relevance. For example, we've been looking at layering in pathological neurodevelopmental diseases. This is follow-up work on work done in the lab of Rodney Douglas in Zurich, at the Institute of Neuroinformatics, and we have been looking at the structure of the layering in different diseases. In subcortical heterotopia, for example, cells do not migrate properly to the target destination they would reach in the normal scenario; here you see lots of differently colored cells because they did not migrate up to their target layer. Our goal is to take different kinds of neurodevelopmental diseases, look at the structure of these cortices, and see what kinds of computational models can explain what we observe in those patients. This is ongoing work, and there is also a poster, number 21, which should still be up; if you're interested, just have a chat with me later about it.

Since these systems can become very large scale, and also complex because there are many different interactions, we have established a collaboration with CERN openlab called the BioDynaMo project, with Intel involved as a project partner. The goal is to have a software framework that is inspired by Cortex3D but is scalable on cloud computing systems and also leverages physics engines that are comparable in performance to those used in the gaming industry. We are still in the early phase, so if you're interested, or if you think this could be useful for your type of research, please get in touch.

Finally, since I'm in advertising mode anyway: we are organizing a conference on computational neurology, at the interface of what is clinically relevant and what uses computational models. If you're interested, please have a look at the website; abstract submission is now open.

To close, I would like to thank my collaborators, who have played crucial roles in this work, and here are some publications that are relevant to what I just told you about, where you can find more information. Thank you.