Thank you. So my name is Simon and I'm here to present a Python toolbox that we have recently developed at the University of Oslo for performing uncertainty quantification and sensitivity analysis on computational neuroscience models. To give a brief overview of this presentation: first I will try to motivate why we need to perform uncertainty quantification in the first place. Then I will talk a bit about what we get from an uncertainty quantification, as well as what a sensitivity analysis is. And I will end with a specific problem that crops up when we perform uncertainty quantification in neuroscience, and how Uncertainpy is tailored towards neuroscience and solves this problem.

In this presentation I will use the Hodgkin-Huxley model as an example model, and here you see a typical evaluation of this model: when it receives a step current, you get some action potentials. Here you have the equations that describe this model. As with all computational models, you have a set of parameters that describes the system you want to model; you see a few of them marked with color. I will focus on three of the parameters of the Hodgkin-Huxley model, namely the ion channel conductances for the potassium, sodium and leak ion channels. The Hodgkin-Huxley model has other parameters that can be set as well, but I will not focus on those.

Normally, what you do is that you have created your model, or found the equations that describe the system, and then you perform some measurements that either directly measure the parameters, or you fit your model to the measurements: you change the parameters and try different parameter sets until your model reproduces your measurements. The problem is that the parameters you end up with are generally not fixed; they do not have one exact value. There are many reasons for this, but to mention a few: one is measurement uncertainty. Whenever we perform a measurement or an experiment, there is always some uncertainty associated with the measurement, and we generally do not take this into account when we create or use our model. Another reason, which is really important in biology, is biological variability. For example, the parameters can change with time; they can be regulated by the system itself, so if we measured a specific parameter now, we would get a slightly different result than if we had measured it, say, two hours ago. Another cause of biological variability is that, in the case of ion channels, we have many ion channels of the same type within a cell, and each of these ion channels can have a slightly different conductance. In a similar way, in a network we can have different parameters for each neuron of the same type, but this is generally not taken into account. And of course, all of these causes of uncertain parameters can be present at the same time.

All of this leads us to the fact that the parameters of our models are often best described by a distribution for each parameter instead of a fixed value. And this is where uncertainty quantification enters the stage. When we perform an uncertainty quantification of our model, we are able to take into account the effect of the uncertain parameters, the parameters that have distributions.
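[Editor's note: a minimal sketch of how such parameter distributions could be written down, using Chaospy, the package Uncertainpy builds on. The values 36, 120 and 0.3 mS/cm² are the classic Hodgkin-Huxley conductances; the ±10% uniform intervals are an assumption for illustration, not values from the talk.]

```python
import chaospy as cp

# Illustrative distributions for the three Hodgkin-Huxley conductances
# (in mS/cm^2). The +-10% uniform intervals around the classic values
# are an assumption made for this sketch.
parameters = {
    "gbar_K":  cp.Uniform(0.9 * 36,  1.1 * 36),   # potassium conductance
    "gbar_Na": cp.Uniform(0.9 * 120, 1.1 * 120),  # sodium conductance
    "gbar_l":  cp.Uniform(0.9 * 0.3, 1.1 * 0.3),  # leak conductance
}
```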
Traditionally, what is done is that if you perform a measurement, you choose the mean, use that in your model, and get a single deterministic result out. When we instead perform an uncertainty quantification, we are able to take the entire parameter distribution into account, and instead of a single deterministic result we get a range of possible values for our model. It is therefore much easier to find out how much we can trust our model result.

So this is where Uncertainpy comes in. As I mentioned, this is a Python toolbox for performing these uncertainty quantifications. It uses the very efficient polynomial chaos expansions, or the more traditional, slower, but more robust quasi-Monte Carlo methods. I'm not going to go into any details on how this is done; instead, I will skip ahead to the type of results we get when we perform an uncertainty quantification.

Here I have performed an uncertainty quantification of the Hodgkin-Huxley model when it receives a step current. One of the statistical metrics that the uncertainty quantification gives us is the mean and the standard deviation (or the variance) of our model. Here you see the mean in gray and the standard deviation in red. What can we observe from this? Well, one thing is that during the rise and fall of the action potential in the Hodgkin-Huxley model the standard deviation is quite low, but it is greater at the peak of each action potential. Another metric that we get out is the 90% prediction interval, which means that 90% of the model evaluations occur somewhere in this grayed-out area. We observe the same thing as from the mean and the standard deviation: during the peak we have the most uncertainty.

Uncertainpy also performs a sensitivity analysis in addition to the uncertainty quantification. A sensitivity analysis tries to quantify how much of the uncertainty, or the variance, of our model is caused by the uncertainty in each parameter. Basically, what we are trying to do is to assign blame. The sensitivities generally sum to one, which means that we can say that, for a given point in time, the third parameter here is responsible for 60% of the model variance. To show you a result for the Hodgkin-Huxley model: here you see the sensitivity to the potassium conductance, and if we add the confidence interval, just to more easily see when things happen, we can see that the potassium conductance causes the most variance during the rise and fall of the action potential, while during the peak of the action potential almost no variance is caused by the potassium conductance. If we add the sensitivity to the sodium conductance, we see, as we perhaps would expect, that the sodium conductance is responsible for the variance during the peak of the action potential. So by performing a sensitivity analysis we can gain additional insight into our model, and it can help us determine which biological mechanisms are affecting our model at each point in time. It helps us understand, or potentially can help us understand, what's going on. For completeness, let me add the sensitivity to the leak ion channel conductance, and as you can see, that is not very interesting: it's at the bottom there in purple, and the leak ion channel is not responsible for any variance at all. This result is quite typical.
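[Editor's note: to make this concrete, a sketch of how such an analysis can be set up, following the quick-start pattern in Uncertainpy's documentation. The hodgkin_huxley run function below is a dummy stand-in that fabricates a trace rather than solving the model equations, and the attribute names on the returned data object follow the documentation as the editor understands it.]

```python
import numpy as np
import chaospy as cp
import uncertainpy as un

def hodgkin_huxley(gbar_K, gbar_Na, gbar_l):
    """Dummy stand-in for a real Hodgkin-Huxley implementation. A real run
    function would integrate the model for a step current; Uncertainpy only
    requires that it returns (time, values)."""
    time = np.arange(0, 50, 0.01)
    values = -65 + gbar_Na * np.exp(-time / 10) - gbar_K * np.exp(-time / 20)
    return time, values

model = un.Model(run=hodgkin_huxley,
                 labels=["Time (ms)", "Membrane potential (mV)"])

parameters = {"gbar_K":  cp.Uniform(0.9 * 36,  1.1 * 36),
              "gbar_Na": cp.Uniform(0.9 * 120, 1.1 * 120),
              "gbar_l":  cp.Uniform(0.9 * 0.3, 1.1 * 0.3)}

UQ = un.UncertaintyQuantification(model=model, parameters=parameters)
data = UQ.quantify()   # polynomial chaos by default; method="mc" selects
                       # the (quasi-)Monte Carlo approach instead

# Per-time-point statistics: mean, the 90% prediction interval (5th and
# 95th percentiles), and first-order Sobol sensitivity indices.
results = data["hodgkin_huxley"]
mean, p5, p95 = results.mean, results.percentile_5, results.percentile_95
sobol = results.sobol_first
```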
Most often there are only one or two parameters that are responsible for most of the variance of our model, while the rest are responsible for almost nothing, or absolutely nothing. Just note that I said the sensitivities normally sum to one: this Hodgkin-Huxley model had eleven uncertain parameters and I only showed you three of them, so these three do not sum to one.

But the fact that the model most often is sensitive to only a couple of parameters is really useful, both in guiding the experimental focus and in guiding the computational focus if we want to develop our model further. For example, if we want to reduce the uncertainty of our model, we can go to the lab knowing that we need to focus on measuring the potassium and sodium conductances very accurately, while we can ignore the other parameters. Similarly, if we want to reduce the complexity of our model, or if we want or need to fix some parameters, we know that the leak channel conductance is not very important at all: we can set it to a fixed value and it will not change the variance of our model much. In some cases the analysis might even indicate that we can remove mechanisms from the model.

Since things never are as straightforward as we would hope, one problem that occurs when we perform uncertainty quantification in neuroscience is that we get a huge variance due to small shifts in spike timing. Here you see another neuron model that exemplifies this problem, with three evaluations with slightly different parameters. When we want to find, say, the variance of our model, we compare the membrane potential at each point in time, and as you can see, the difference here is really large, since this spike, which for biological intents and purposes is the same spike, occurs at slightly different times. If we perform an uncertainty quantification of this model, the 90% prediction interval gets really big, and it is hard to draw a conclusion about what happens at the point of the second spike.

The solution to this is quite simple. We can do as when we perform a parameter estimation: we can look at features of our model in addition to the model result itself. For example, if we count the number of spikes, this does not change due to small shifts in spike timing, and such features are often much more robust. This is what we do in Uncertainpy. We have a set of features that the user can choose from, and we then calculate the uncertainty and sensitivity of the model result itself and/or of all the selected features. Here you see just a small subset of the features that are already implemented.

So this is one way that Uncertainpy is tailored toward neuroscience. Another is that it has built-in support for a couple of types of neuroscience models, such as multi-compartmental models in the NEURON simulator and network models in the NEST simulator. Implementing custom models is quite easy, implementing custom features is also quite easy, and a set of features for both network models and multi-compartmental models is already available.
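[Editor's note: a sketch of how features enter the setup, assuming Uncertainpy's documented SpikingFeatures interface and feature names; model and parameters are the ones from the previous sketch.]

```python
import uncertainpy as un

# Built-in spiking features: uncertainty and sensitivity are then computed
# for each selected feature in addition to the membrane potential itself.
features = un.SpikingFeatures(features_to_run=["nr_spikes",
                                               "spike_rate",
                                               "time_before_first_spike"])

# model and parameters as in the previous sketch; for multi-compartmental
# or network models, un.NeuronModel and un.NestModel play the role that
# un.Model plays here.
UQ = un.UncertaintyQuantification(model=model,
                                  parameters=parameters,
                                  features=features)
data = UQ.quantify()
```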
To quickly summarize what I've been talking about: many parameters in neuroscience are best described by distributions instead of fixed values, and performing an uncertainty quantification enables us to take this into account. Uncertainpy is a toolbox that performs all the calculations for you, so you do not need to know how to perform an uncertainty quantification, and it is tailored toward neuroscience, for example through the calculation of neuroscience-specific features of your model. If you want to check it out, it is open source and you can find it on GitHub. It's quite easy to install through pip, and it has extensive documentation with several examples, so it should hopefully be easy to use. Additionally, we have just had an article accepted in Frontiers in Neuroinformatics which goes into detail on how the uncertainty quantification is done. If you want to know more, come talk to me at my poster tomorrow. Thank you for your attention. I'm open for questions.

It's nice to see the quantification of the different parameters and the impact they had. But when you introduced it, you kind of lumped together heterogeneity in parameters and experimental uncertainty. Don't you really need to separate those out before you apply this method?

Well, you both need to and you don't need to. It depends on whether you want to know the uncertainty due only to measurement uncertainty, or only the uncertainty due to, say, biological variability. But you are able to do this in Uncertainpy. What Uncertainpy takes is the model, of course, and the parameters that are uncertain, so you can first give it the parameters that are uncertain due to biological variability, and next the parameters that are uncertain due to measurement uncertainty. So we are able to split it up if you want to.

Okay, but a related question, really: can you deal with correlations between uncertainties?

Yes, you can define multivariate dependent probability distributions for your parameters.

Okay, that's great. Very nice.

I thought it was really interesting. I was reminded a bit of a recent talk by Eve Marder, and it's a theme that she goes to sometimes: don't model the mean. I don't know if you're familiar with some of her work where she's looking at large parameter sets and showing that you can get the same behavior at different points in parameter space. So I just want to make sure I understand what Uncertainpy does. You're starting from an already optimized model: a set of parameters that gives you the behavior you want, which is what you consider to be the mean, and then you're going from there with the uncertainty analysis. Is that right?

Yes, what Uncertainpy requires is that you have the uncertainty of your parameters beforehand, either through measurement or through a parameter fit, and then it considers the model as a black box.

I think what you're doing is really critical for understanding these things. I think it's fantastic. One question, as a biologist: many of the parameters in real neuronal networks are log-normally distributed, right? How do you capture that kind of variability, where you have some very rare, let's say, synaptic sizes or firing rates, things that would not really be captured by, say, a standard deviation?
So the standard deviation was, yes, just an example of a distribution; you have an essentially infinite choice of distributions to give to your parameters. The package that I use to perform many of these calculations has support for 64 different distributions built in, and also supports creating your own distributions, and easily creating your own multivariate distributions. So you should be able to use whatever type of distribution you want.

When your simulations themselves have stochastic elements, the output inherently has, or can have, a very wide distribution. Have you got a way of handling situations like that?

Do you mean that the range you get out is really large?

I mean that every time you run the simulation, you're going to get a different result.

Yes, and this might follow a very non-Gaussian, non-normal distribution.

That should not be a problem, as long as you return an output that Uncertainpy is able to handle. Currently, if you have, say, a higher-dimensional output, there is no support for that, but if you return a zero- to two-dimensional output, it does not matter how much each model evaluation changes from one run to the next. If that answered your question.

I mean, does your system then, do you have to explicitly tell it that each run is going to do something different, and then find some way of analyzing the statistics of the model itself? Is that what you're saying?

All that you need to ensure for the model run is that you give Uncertainpy some way to set the model parameters that are uncertain. As long as those can be set to the specific numbers chosen by Uncertainpy when it evaluates the model, it does not need to know, and does not care, what else your model actually does. So the model can have as many stochastic elements as it wants, as long as it returns a result; that should not change the output much.

But wouldn't, okay, maybe we can talk about it offline, but you might get...

I think I know what you mean: since you have some random element in your model, the output can change even though the same parameters go in.

Yes, that's correct.

That can be, or that will be, a problem. Then it's best to be able to set the seed to get reproducible results, even though that really influences your model.

Well, okay, we can perhaps discuss it offline, because there is no precise result for a model of this kind; you can only get a distribution. Any given instantiation of the run will give you something different, and these are all equally legitimate outcomes of the model.

Then that will probably be a problem. I haven't tested it, but I think that will be problematic, yes.

Okay, if there are no more questions, then thank you very much, Simon.
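[Editor's note: as a footnote to the distribution questions in the Q&A, the package referred to is Chaospy, on which Uncertainpy builds. Below is a minimal sketch of a log-normal parameter and a dependent joint distribution; the parameter names and values are illustrative, and the distribution-valued bounds syntax assumes a recent Chaospy version.]

```python
import chaospy as cp

# A log-normal parameter, e.g. a synaptic weight (illustrative values):
weight = cp.LogNormal(mu=-1.0, sigma=0.5)

# A dependent joint distribution: the bounds of the second parameter are
# themselves functions of the first, so the two are correlated.
g_K = cp.Uniform(26, 46)
g_Na = cp.Uniform(2 * g_K, 4 * g_K)
joint = cp.J(g_K, g_Na)

samples = joint.sample(1000)  # samples respect the dependence structure
```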