Now I will give a brief description of the algorithm that we use to do the single-cell optimization. As we have seen, single-cell optimization is a kind of data-driven modeling: we want to find the parameters of a mathematical model so that it matches the experimental data. To do this, we inject an input current into the real neuron and record the experimental data, and at the same time we inject the same input current into the mathematical model and obtain the simulated traces. Of course, to test all the parameters in the parameter space we need computing resources. So we used BluePyOpt, which is described in a paper in Frontiers in Neuroinformatics from 2016. It is also publicly available on GitHub, so it is installable with pip. We'll see more details about this optimization algorithm: which simulator it uses, how the feature extraction is done (you've already seen this), and how the algorithm is parallelized. BluePyOpt is a multi-objective evolutionary algorithm that relies on a Python library called DEAP, Distributed Evolutionary Algorithms in Python. DEAP contains many algorithms; for example, you might know particle swarm optimization. The optimizer needs an evaluation function in order to map the model parameters to a fitness score, because to know what to do from generation to generation we need a fitness score. BluePyOpt also interacts with external simulators, as you have seen previously with NEURON, and also with NEST, PyNN, Brian, STEPS, and so on. And you have seen that it relies on the eFEL library for electrophysiology feature extraction, which is also open-source software.
As for parallelization, in order to evaluate the individuals of a population on several cores in parallel, it relies on ipyparallel (it's not easy to see here). The optimization algorithm works like this. Initially we have a population of N individuals; the individuals are the parameter sets that we want to test. Then there is an evaluation step: for each individual we calculate the fitness values and test whether the score is good enough. If it's not good enough, we continue, just like in every genetic algorithm. We do a selection step to build a temporary population, we perform genetic operations on this temporary population to generate a new set of individuals, we insert these new individuals into the population, we update the fitness values, and we remove the worst individuals until the size of the population equals N again. This repeats until the stop criterion is fulfilled, which is the end of the parameter search. BluePyOpt follows an object-oriented programming model, so the software is modularized into classes. There are classes such as model, morphology, mechanisms, protocols (I'll describe later what these classes are for), stimuli, recordings, and locations. These classes are specifically used to set up the neuron models and to assess their input-output properties. There are other classes for objectives and features. The optimization accepts an evaluator object as input and runs the search algorithm to find the parameter values that generate the best objectives; the goal of this algorithm is to minimize a weighted sum of objectives. This evaluator object defines an evaluation function.
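The loop just described (evaluate, select, vary, insert, trim back to N) can be sketched in a few lines of plain Python. This is a toy illustration only, not the BluePyOpt/DEAP code: the "individuals" are lists of two parameter values and the fitness is simply the distance to a made-up target.

```python
import random

# Toy sketch of the evolutionary loop described above (not the actual
# BluePyOpt/DEAP implementation). An "individual" is a list of parameter
# values; the hypothetical fitness is the distance to a known target.
TARGET = [0.12, 0.05]          # pretend "true" parameter values
N = 20                         # population size
N_GENERATIONS = 100

def evaluate(ind):
    # Lower is better: sum of absolute errors against the target.
    return sum(abs(p - t) for p, t in zip(ind, TARGET))

def mutate(ind, sigma=0.02):
    # Gaussian mutation, the simplest "genetic operation".
    return [p + random.gauss(0, sigma) for p in ind]

random.seed(0)
population = [[random.random(), random.random()] for _ in range(N)]

for generation in range(N_GENERATIONS):
    # Selection: keep the better half as parents.
    population.sort(key=evaluate)
    parents = population[: N // 2]
    # Variation: offspring are mutated copies of the parents.
    offspring = [mutate(p) for p in parents]
    # Insertion + trimming: merge, then cut back to size N,
    # which removes the worst individuals.
    population = sorted(population + offspring, key=evaluate)[:N]

best = population[0]
```

The real optimizer differs in many ways (crossover, multi-objective scores, parallel evaluation via ipyparallel), but the generational structure is the same.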
We'll see later how the objectives are calculated in order to map the parameters to these objectives. Starting from the model, we will need a protocol in order to attach stimuli and recordings, to make the model usable through the code. When the simulator is run, a response is generated for each of the stimuli that we used. There is also a location class, which is created to specify the location on the neuron morphology where we want to set up the stimulus. All the optimizations are available in the container that Luca was mentioning before, and in the live paper you'll see the same folder structure; the code and configuration are separated into various models. I will describe each of these folders so you can see what's going on inside, okay? First, in the config folder you'll find the morphology that we are using, specified in a JSON file called morph.json; the morphology itself is an ASC file. Also in the config folder you'll have the features, which Rosanna already showed you how to extract: for each current step, the file specifies each feature with its mean and standard deviation. There is code available inside the zip folder that reads this feature.json and calculates objectives based on the experimental trace and the simulated trace in order to compute the fitness. And this is the way the algorithm calculates each objective: for each feature value, it uses the experimental mean and the experimental standard deviation. In this way, all the objective scores are normalized to a common scale.
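The normalization just described can be sketched as a z-score: the absolute difference between the simulated feature value and the experimental mean, divided by the experimental standard deviation. This is a minimal illustrative version, not the exact library code; the feature values below are made up.

```python
def objective_score(sim_value, exp_mean, exp_std):
    """Distance of a simulated feature value from the experimental
    mean, measured in units of experimental standard deviations."""
    return abs(sim_value - exp_mean) / exp_std

# Hypothetical feature values for one current step:
# experimental spike count 5 +/- 1, simulated spike count 7.
score = objective_score(sim_value=7.0, exp_mean=5.0, exp_std=1.0)

# Scores computed this way for different features (spike count,
# AP amplitude in mV, ...) are dimensionless and on a common scale,
# so they can be weighted and summed into a single fitness value.
```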
So we can combine them regardless of the units, and weight these objectives according to the feature variability. Going back to the config folder, you will also find a protocols.json file where the protocols are specified: for each current step you'll see the delay, the amplitude, the duration, and so on. In the same way, the algorithm reads this protocols.json and creates an object to apply, in this example, a square pulse, based on the step delay, duration, and everything else that's written in the protocols.json. Also in the config folder, you'll find parameters.json, where you'll see that some parameters are marked as fixed, so they are kept constant throughout the optimization. In this case, the parameters to be optimized are the maximal conductances of the ion channels. The location is based on the section-list names of the morphology, in the way that NEURON reads the sections when it loads the morphology. You'll see that it also assigns a distribution to the conductances. In the same way, based on these parameters and on the MOD files that are in the mechanisms folder, the algorithm reads the parameters and appends the mechanisms to the model. You'll see here that the locations are specified based on the section lists; this is all Python. Once we have created this cell template, as it's called, we need to create the cell evaluator object that I was talking about. This object needs to know which protocols to inject, which parameters to optimize, and how to compute the score in order to optimize the cell. The cell evaluator constructor has a field called param_names, which contains the ordered list of names of the parameters that are used as input and that will be fitted later on. You will find all these Python files inside the model folder.
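To make the protocols.json step concrete, here is a sketch of how such a file might be read and turned into a square-pulse description. The key names and layout here are assumptions for illustration, not the exact format used in the live paper, and the dictionary stands in for the stimulus object that would be handed to the simulator.

```python
import json

# Hypothetical protocols.json content: one square current pulse per step.
protocols_json = """
{
    "Step1": {"type": "SquarePulse", "delay": 250.0,
              "amplitude": 0.1, "duration": 400.0, "totduration": 900.0}
}
"""

protocols = json.loads(protocols_json)

def make_stimulus(name, spec):
    # Build a plain description of the square pulse from the JSON entry;
    # in the real code an equivalent stimulus object is created instead.
    return {
        "name": name,
        "start": spec["delay"],                    # ms
        "stop": spec["delay"] + spec["duration"],  # ms
        "amplitude": spec["amplitude"],            # nA
    }

stimuli = [make_stimulus(name, spec) for name, spec in protocols.items()]
```

parameters.json would be read in the same spirit: fixed entries are applied once, while the free entries (the maximal conductances) define the dimensions of the search space.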
In the end, once we have the cell template and the evaluator, we can create a DEAP optimization object and run it. In the previous slide, as I said, we instantiated the NEURON simulator class.

Question: How does the code know the mapping between the parameters described in the parameters file and the parameters that exist in the files native to that simulator's own language, for example if you want to change the simulator?

Answer: This code is written specifically for the NEURON simulator; it is written to run NEURON.

Question: But an earlier slide had a list of simulators.

Answer: Yes, it can also use Brian, but we don't have much experience with that, so I don't really know how to switch away from NEURON. The platform is implemented using NEURON for this kind of use case.

Question: So there could be a NEURON build and a different code generation from that?

Answer: Yes. In the next phase of the Human Brain Project, called SGA3, which is going to start in April, we are going to generalize the simulation engines and the tools in such a way that they can talk to each other. So, for example, the same model files are going to be readable by different simulators, and the optimizer will read from standard files that allow you to run any of them. But the choice is going to be NEURON or NEST; I don't know if it is going to be Brian, because we are talking about realistic single-cell optimization. Maybe we can add Arbor; I don't know if you have heard of it, but it is a simulation engine similar to NEURON that is claimed to be more efficient, although there are no use cases so far. But during SGA3, we also...
And so the use cases will also be able to run using that. But here we are talking about the realistic single-cell implementation.

Okay, so DEAP evolves a population through consecutive generations. For each generation, we have a set of offspring individuals that are generated from the parents, and we have to specify the size of this offspring population and the maximum number of generations before running the optimization; these are two important parameters. During the optimization, the hall of fame keeps track of the best individuals seen during the evolution process, and the population statistics are recorded in a so-called logbook. You can save the genealogy between the individuals, and of course you can analyze and visualize the individuals at the end. Checkpointing is also implemented in this DEAP algorithm, to save the algorithm state in a Python pickle file. So inside the checkpoints folder, you will find a .pkl file at the end. Here you have a screenshot of simple Python code to read this pickle file; you will see that inside it we have keys such as generation, halloffame, parents, logbook, history, and population. At the end of the optimization, you can find information about the population that was studied and the final parameters. You see here that we had only two generations in this example, and that we had 22 parameters inside the optimization, and we can see those 22 parameters.

Question: By default, are you doing a non-dominated sort, or are you just doing a weighted sum?

Answer: A weighted sum, for now. Yes.

Question: For the kinds of simulations you're doing, do you think all the runs are going to converge to one solution?
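Reading such a checkpoint file can be sketched like this. To keep the snippet self-contained, a dummy checkpoint with the keys mentioned above is written first; in practice you would load the .pkl file that the optimizer itself produced in the checkpoints folder.

```python
import pickle

# Dummy checkpoint with the keys mentioned above; a real checkpoint is
# produced by the optimizer itself during the run.
checkpoint = {
    "generation": 2,
    "halloffame": [],
    "parents": [],
    "logbook": None,
    "history": None,
    "population": [[0.1] * 22],   # e.g. 22 parameters per individual
}
with open("checkpoint.pkl", "wb") as f:
    pickle.dump(checkpoint, f)

# Read it back and inspect what was saved.
with open("checkpoint.pkl", "rb") as f:
    data = pickle.load(f)

print(sorted(data.keys()))
print("generations run:", data["generation"])
print("parameters per individual:", len(data["population"][0]))
```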
Even sampling the parameter space, in your hall of fame you're still going to have a spread of solutions. I don't know how well this maps onto the question of degeneracy, but different sets of parameters can give you the same behavior. Do you observe that across the different runs of the optimization, or does it all cluster near the global optimum?

Answer: It actually doesn't all end up at one optimum, so it does map to the degeneracy problem. You will find different solutions, possibly using the same parameter ranges. If you look at the hall of fame, you might have different parameter sets.

Question: The parameters are clearly different, but you get roughly the same behavior?

Answer: Yes. In the hall of fame, we save only the 10 best individuals, but if you analyze those, you'll see that they are close solutions.

Question: Do the solutions tend to exist on a continuous manifold of the parameter space, or do you just get a pocket here and a pocket there?

Answer: Actually, we are still analyzing those parameters. We have a collaboration with a group in Israel; they are analyzing what happens to the parameters throughout the evolution and how the clouds are clustered. But we don't have a paper published on this yet, and we don't have an answer yet, okay?

At the end, together with the .pkl file, we also generate a hoc template, so you can use this hoc template directly in NEURON if you don't have Python experience, for example. This hoc template contains the final, optimized parameters, so you can play with it in order to analyze the behavior of the optimized cell. And at the end, you have this information in the figures folder: you see the traces generated by the best solution in the hall of fame, you see how the objective scores behave (the standard deviations of these objectives), and at the same time you see how the optimization evolved.
So you have a plot of the minimal, maximal, and average scores found during the evolutionary algorithm, for, in this case, maybe 60 generations. Later on, you'll see an example use case that is also described in the BluePyOpt paper: inside the Collaboratory we will play with a single-compartment neuron model with just two parameters, the maximal conductances of the sodium and potassium ion channels. We will play with this in order to reproduce, for example, one spike and five spikes. We'll run it for two or ten generations, and we'll see the evolution of the objective sum during these generations, all of this in an IPython notebook in your collab, okay? We'll see how to set up the cell template with a few lines of code: how to load the morphology, how to create the mechanisms and the parameters, the cell template as I told you, and how to add the protocols in order to inject the currents. Then we'll see how to run these protocols on a cell and how to plot the response traces. Once we define the features and objectives, we will evaluate the cell and, at the end, set up and run an optimization. We will reproduce the behavior I was showing before, just one spike and five spikes. And that's it; we will see it later in an IPython notebook in the collab platform, okay?
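The flavor of that two-parameter exercise can be sketched with a toy stand-in, without NEURON or BluePyOpt: two hypothetical conductances, a fake model mapping them to a spike count, and a small elitist search that targets one spike or five spikes. Everything here (the names gna and gk, the spike_count relationship, the search settings) is invented for illustration.

```python
import random

# Toy stand-in for the two-parameter use case (NOT the notebook code):
# "gna" and "gk" are hypothetical maximal conductances, and spike_count
# is a fake model that maps them to a spike count.
def spike_count(gna, gk):
    # Purely illustrative relationship: more sodium conductance gives
    # more spikes, more potassium conductance gives fewer.
    return max(0, round(50 * gna - 20 * gk))

def fitness(ind, target):
    # Single objective: distance of the spike count from the target.
    return abs(spike_count(*ind) - target)

def optimize(target, generations=30, pop_size=20, seed=1):
    random.seed(seed)
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, target))
        parents = pop[: pop_size // 2]
        # Offspring: Gaussian-mutated copies of the parents.
        children = [[max(0.0, g + random.gauss(0, 0.05)) for g in p]
                    for p in parents]
        # Elitist trim back to the population size.
        pop = sorted(pop + children,
                     key=lambda ind: fitness(ind, target))[:pop_size]
    return pop[0]

one_spike = optimize(target=1)    # conductances producing ~1 spike
five_spikes = optimize(target=5)  # conductances producing ~5 spikes
```

In the notebook, the fake spike_count is replaced by an actual NEURON simulation of the single-compartment model, and the fitness by the eFEL-based objective scores.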