On various occasions I have mentioned Fenix, which is a research e-infrastructure, a supercomputing infrastructure shared by several projects. It federates a number of supercomputing centres: Barcelona, the Swiss National Supercomputing Centre in Lugano, Jülich, CINECA in Italy, and a centre in France. These are all very well equipped machines. Using such resources requires some competence in parallel processing, because only programs whose code has been optimized for the machine run efficiently on a supercomputer, so there is a certain barrier to using a supercomputer directly, and that is something we addressed in the HBP. The HBP exposes them through the platform, but still you need an account on supercomputers to run your programs. The tools that are in the brain simulation platform, as you will see, already have preconfigured supercomputer access, even without an account, because we have organized a thing called a service account. Which means that we have opened a given amount of resources to everybody who registers on the platform, and we manage these on behalf of every user. The allocation is of course limited, so it is meant for trying things out and for teaching rather than for very large computations. So that is how the infrastructure is organized, also in the Human Brain Project.
Not everybody in the Human Brain Project needs these resources; only some of the groups run very large-scale models, models with millions of cells. Access, though, is open to all projects: if you have a group, also outside our consortium, that is willing to run, or needs to run, some large-scale model, there are mechanisms so we can collaborate, or you can go through the regular PRACE calls. And for the HBP it is much easier, because we just submit an application, it is reviewed internally, and if it is technically sound we get the allocation for the job. There are also small test allocations: you can get, say, 1,000 core hours with a very simple application, just to try things out before applying for a larger amount. So much for the application process. If you have a question before we start with the actual course, I will be happy to answer. If not, then I can give you the material, no problem, or we can put it on the website. OK. Anything else? So, that was the infrastructure; thank you, Luca. What we are going to present now is the brain simulation platform. The brain simulation platform is essentially a set of tools to reconstruct, simulate and analyse models, data-driven models.
This is all reachable from the Human Brain Project website, which is the humanbrainproject.eu website. You just go through these little panels here. You have a brain simulation. And if you go down here, you have a direct link to the brain simulation platform. So the brain simulation platform is integrated in what is called the Collaboratory of the human brain project, which can be seen as a sort of collection of workspaces. The idea is that each project, each piece of work, has its own workspace, called a collab, where a group of people can work together. Each collab has its own storage and its own tools, and you can invite other people as collaborators so that you build up the work together. You can keep a collab private or make it public. If you have questions, there is help available: you have a forum where you can ask whatever you want. And then you have the collabs link. Through the collabs link you see all the collabs; you can search a collab by name and access the ones that are public. So let me show you how a collab is created: you give it a name and a description, and you choose whether it is public or private.
So I just click here to create a new collab; I put a name, for this course let us say INCF 2019. And I create this collab. There's an error, so just put a comment here in the description. And this is the collab creation. I create this collab because we will use it during our demonstrations. So you can just write down the name, go to the collabs, look for this name, and you can access it and all the tools that we're going to put there. So every collab has at the beginning these three items, which work like this: you click on an item and you visualize the content of that item in the central part of the window, OK, in this iframe. The team, which is at the moment only me, because I'm the one who just created the collab; then you have the storage. And the storage is a space, physical space, that you have per each collab where you can put your data: data, documents you can upload, and everyone who accesses that collab can download that data. OK, the settings is less important; it's just where you change the status of the collab, whether it is public or private. And then, as a final comment, you have a chat window here. So if you're working on a collab and you want to ask something, because something is not working or you have a question, just write a message there, and all the members that are working on that collab will read the message. OK, so let's close this window and go to the actual brain simulation platform. So the brain simulation platform has different items, and we organized the brain simulation platform to be, let's say, designed around what we call the online use cases. So online use cases are realistic scenarios, let's say selected procedures, that you can encounter while doing your research in neuroscience.
So whether it is data analysis or modeling or visualization, we thought about what the scientific community can face while doing research, and we created, based on our work, this set of use cases. They are grouped by topics, and these topics go from, let's say, the smaller to the bigger. So the molecular level, the subcellular level, and then going up we have an item which is on the trace analysis, basically data analysis on electrophysiological traces. We have morphology analysis, single cell building, and so on, so forth. For every doubt that you may have on a single use case, if you want to see how it works, if you have some question or you want to know some more technical detail, please refer to the guidebook. The guidebook is an interactive document with links to all the use cases: you pick the use case you are working with, you click on it, and you just read how the use case works. In the brain simulation platform you also have the overview, which is basically the introduction to the platform. So if you are a little bit lost and you cannot place yourself anymore, just go there and find useful links to all the use cases. And I just want to mention, before starting with the online use cases, this folder, the models folder, which collects documents about the brain regions that we are implementing in our modeling work and in the online use cases. These are just documents (for example I can click on this one, the hippocampus one), but they can be pretty useful if you want, for example, some reference to the literature or to the work that we are doing, or if you want to understand a little bit more about the hippocampus: just go to this models folder and pick your document. OK, so, online use cases. The online use cases are implemented in two different ways. The first one is web applications. You can think of a web application as an extended website.
So you have a website, you click on things and you open other documents where you can read information; a web application is something that goes a bit beyond, because it does some operation on the back end. Typically, the interface to your bank account is a web application: you click on the login, you enter, you want to do some movement, and there's a database behind that. There's some code that does some operation on your account itself, so that with a single click you have some back-end operation. And the good point, let's say, of web applications is that they are very easy to use, friendly: they are just point and click. The other way that we use to develop use cases is Python code, and more specifically, Jupyter notebooks. Have you heard about Jupyter notebooks? Jupyter notebooks (I'm going to show you one of these) are just interfaces where you put your code and you divide your code into cells, so that you can execute single blocks of code. The good part of this is that the code is visible, so you can change it. The bad part, the cons of this, is that they're not as friendly as the web applications. But of course it really depends on the kind of job you're doing. Probably, if you're a biologist and you don't want to know the details of the code, you will go for the web applications; otherwise, if you want to play around and do your own analysis, you take the notebook and play with the code. So, starting from the bottom, as I was saying, we have the molecular level, with two use cases on, basically, protein interaction. We have the subcellular level.
This will be finalized in the following weeks; it is on subcellular interactions, let's say what is going on inside the neuron at the protein level. And we have, for example, the trace analysis, where you can find use cases for analyzing electrophysiological traces (Rosana is going to show this) and also, for example, for fitting synaptic events starting from electrophysiological traces. So how do use cases work? You have the list of use cases; all these use cases refer to the trace analysis. Let's say I want to work with the feature extraction use case. I click on the panel "feature extraction", I accept the terms and conditions for using that use case, and basically now I want to clone the use case onto my collab. So let's say I have created my own workspace collab, which is the one that I just created: I put the name of the collab and I clone this one into the collab. The other option is to create your own collab at this point: let's say you want to start from a brand-new collab, you put the name of the collab here and you just clone your use case. In our case we already created a collab for this course, INCF 2019; I click on this and I am immediately redirected to the collab. But now, as you can see, there is one more item, which is the feature extraction graphical user interface. So every time you want to work on this, you come to your collab and you start working on this web application. This is a web application; Rosana is going to show it, so I won't go through this, but I want to show you the other type of use cases that we are developing, which is the Jupyter notebook. The synaptic event fitting is one of the Jupyter notebooks that Carmen has developed, and again I added it to the collab that we previously created, and two more items are here now: the synaptic event fitting itself and the analysis for this use case. So this is how a Jupyter notebook looks. You have all the code; if you want to change something here, just go and change it.
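The synaptic event fitting notebook fits model curves to recorded synaptic events. Purely to make the idea concrete (this is a generic sketch, not the code of the actual notebook; every name and number below is invented for illustration), a minimal version of such a fit can look like this:

```python
import math

def synaptic_event(t, amp, tau_rise, tau_decay):
    """Difference-of-exponentials template for a synaptic current or potential."""
    return amp * (math.exp(-t / tau_decay) - math.exp(-t / tau_rise))

def fit_tau_decay(times, trace, amp, tau_rise, candidates):
    """Pick the decay constant whose template best matches the trace (least squares)."""
    best_tau, best_err = None, float("inf")
    for tau in candidates:
        err = sum((synaptic_event(t, amp, tau_rise, tau) - v) ** 2
                  for t, v in zip(times, trace))
        if err < best_err:
            best_tau, best_err = tau, err
    return best_tau

# Synthetic "recorded" event, generated with tau_decay = 10 ms
times = [0.1 * i for i in range(300)]          # 0 to 30 ms in 0.1 ms steps
trace = [synaptic_event(t, 1.0, 0.5, 10.0) for t in times]
recovered = fit_tau_decay(times, trace, 1.0, 0.5,
                          candidates=[2.0, 5.0, 10.0, 20.0])
# recovered == 10.0
```

In the real use case the trace comes from recordings and the search is done with a proper optimizer rather than a hand-made grid, but the principle, minimizing the mismatch between a template and the data, is the same.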
There is a nice package called ipywidgets, and Carmen has made extensive use of ipywidgets to, let's say, beautify your Jupyter notebook a little, if you want. So if I go to Cell, Run All, that means I execute all the code, and all the code disappears behind the widgets. Thanks to this ipywidgets package, if you want to put some buttons and play around, you just proceed like this. When this circle is black, it means that the kernel behind the Jupyter notebook is running; when it is empty, it has basically finished. It is almost done, I think it will take two or three seconds more. You can choose the kernel; by default you have Python 2. So now the circle is empty and you have the Jupyter notebook here. As I was mentioning, if I click here again... OK, and I can close the code... oh no, it doesn't work... OK, that's fine, so I have to run it again... it was the wrong button, OK, that's fine, that's fine. Anyway, this was just to show you how the use cases are cloned into the collab.

So we have seen the trace analysis. We have the morphology analysis, which at the moment has two use cases: the morphology analysis itself and the morphology visualization. The morphology analysis helps you, thanks to a Python package called NeuroM, to analyze the morphology of your neuron. For example, you may check whether every node of your morphology has a parent, or count the number of bifurcations, or you can just analyze the morphology to get the neurite lengths and so on, so forth. On each use case you will see this interactive tutorial: a video that explains step by step how to use the use case, a movie in which you make choices and you are directed to the right place to follow. Then this icon helps you understand who the use case is intended for; "everybody" means basically everybody. The morphology visualization, for example, is point and click, and I'm going to open it right now. This one is the level of maturity of the use case: beta means it can be used, experimental means it's still in the hard phase of development. And, as Michela was saying, if I click on this interactive tutorial, a video will open where you can just see how the use case works. There were a lot of complaints about gender issues, so you can basically choose the voice... well, I'm not sure... no, it's only one... and you have subtitles: "select morphology" and so on. And we are filling in all these use cases, providing all the use cases with these video tutorials, so with time you will find more of them. With respect to the morphology visualization, just to show you an example of how it works: you go through the list of models, you visualize the morphology of that model, and you can just play around. Of course we don't have time to go through all the use cases, but I just want to give you an idea of how they work and how you can work with them. The web applications are synchronized, because web applications are central: you have a server which is running the web application, so when you clone that web application it will be up to date.

Going down... not with this one: you cannot upload your own morphology with this one at the moment. But actually you can upload your own model, and we will see this later. If you upload your own model, you will probably be able to visualize the morphology, but not with this one; there is another app, a tool, which is called the morphology viewer, and you can try to play around with that tool. Let me see whether we are able to... I go to all the collabs, I put the name of this one, and here in the collab you have the possibility to add the tools that are being created, either Jupyter notebooks, as we were saying, or web applications. If you click "add" here, you can filter the tools (you have many), and I think there is a morphology viewer here: "add to navigation". So you can upload your own morphology, and with this web application you can just browse it. I am not sure which formats it accepts, whether SWC is working, or ASC, but you can just test yours and see whether it works. Probably in your case this will work, because you can read here "use the browse button, select the neuron", and the example is SWC; but if you have other formats, just let us know.

Then we have the single cell building family of use cases. Here again we have a web application and a few Jupyter notebooks, not only on the hippocampus: we also have the cerebellar one, and we have the striatum if you are interested in that, and the procedure is the same. Going up from the single cell, we start building a circuit. We have use cases for cell placement (cell placement is basically based on anatomical constraints: you put your cells in a specific volume) and for the connectome, also based on constraints: you derive the connections between the neurons that you placed. And once, let us say, the single cell has been defined, or the circuit has been defined, you can start with the actual simulation. I will show this use case this afternoon, because this is the single cell in silico experiment: you have one single cell and you want to run a simulation on that cell, stimulating and recording from the cell itself. And I will also show you this use case, which is the small circuit in silico: in this use case we pick the cells that we are interested in and we run the simulation. These two use cases are basically available to everyone, because you don't need HPC resources. And then there is the brain area circuit in silico experiment, which I will also show you, because it is a big effort that we are doing and we want to provide the community with all this work: you can simulate an entire brain area. As we were saying, at the moment for this one you need an HPC system. What are the power users, then?
"Power user" is a broad definition. I would basically synthesize it by saying: you have to know what you are doing. Some of our use cases which have the power user icon are Jupyter notebooks, so the code is accessible there, and if you play around without knowing what you are playing with, you will probably get useless results. "Developers" is closer to people who know the code behind, so you also know the technical details. "Power user" is probably related more to the science. I consider myself a power user, and look at the difference: I know what I am doing, what I want to do, what I want to extract from the model from the scientific point of view. I don't care about how the simulation is actually parallelized to run on a specific supercomputer, loading all the tools and the libraries and the compiler that need to be set up; they take care of this. What I mean is that the simulation engines are working and installed on all the HPC systems, and they do the work of creating the use cases in which I can launch my simulation. But as a power user I know what the hippocampus is, how it is connected, what the different populations are, how they behave from the neurophysiological point of view, how they are connected, the synaptic transmission, what kind of model I want to use. If I know how to manage all this information, I am a power user, because I also know some coding with which I can implement the simulation, independently of accessing the platform or not: I can also run my simulation by accessing the supercomputer directly and running my code there, knowing how to install NEURON in parallel on the computer. I don't know if this is clear. The use cases tagged "developers" are still not hidden, they are visible; they are just tagged "developers". That's not only for developers, they are whole use cases, they are just tagged that way. So when you see a power user tag here, it means that you need to have some experience. No, I mean, the "developers" tag: it's not hidden, and it's not only for developers. No, sure, sure, sure, it's for everyone, play around, sure. This tag gives you an idea of the entire use case; of course it does not prevent you from clicking on the use case, it just says what we think the expertise of the user should be. Consider it as a warning: if you enter that use case you need to know what you are doing; otherwise, I mean, you can still run it, but the results may not be meaningful.

Is it tested before something can happen? Yes, yes. We basically go through a process which is developing, then testing, then putting the use cases into production. So we test ourselves, of course, and we have a guy who is working at the EPFL, Alexander Dietz, in the Igor Krolz group, and he's doing what we call Selenium tests: it runs all the tests with all possible configurations, clicking on the buttons, et cetera. And if something doesn't work, of course we are going to fix it; if you find a bug, please let us know. Yes, if you just go here to the contacts page, you can drop us an email. Here you have all the links: there is this GitHub issue tracker where we put all the issues, so you can also file an issue there; you just need a GitHub account, which is freely available. You have the forum of the platform, and then you can just write an email to the BSP support if you want to ask something. Yes, we are filling in this field; for example, for this one you have the email, so if you want to contact the developer directly, just write to him. But what I suggest is to go through the BSP support, because the BSP support is a ticketing system, so everyone is reading it, and of course, given that this is a collaborative work, even if the developer is not there, someone else knows what is going on for that use case and someone will answer your email.

OK, so basically we are done with the use cases. This "highly integrated workflows" at the moment contains one of the use cases that we saw before, also in the single cell building; basically "highly integrated" means that, as Michela was saying, data-driven modeling, collection of data, analysis, so work done by different groups, is put together into single use cases. I have five minutes left and I want to just cite a couple more items. The online courses, which is pretty interesting. Maybe you have heard about MOOCs, these massive open online courses; there is a very famous platform called Coursera and another famous platform called edX, and these are platforms that offer online courses and in some cases also a certificate, if you register for that. The online courses here refer to online courses that are available on the edX platform, in computational neuroscience. So if I click on the MOOC section here, and for example the first one, the reconstruction and simulation of neural tissue, I have a link here, the link to the course; and if you click this link you get the page of the course itself. As you can see, this course is provided by the EPFL, it is about simulation neuroscience; you have all the information that you may need, you have the video here where the speakers present the course and what the course is about, all the people who participated in this course. And with respect to the brain simulation platform, here you have all the exercises that are linked to the lessons during the weeks of the course. The principle is the same: I click on one of the exercises, then I can again put our collab, and this exercise, which usually is a Jupyter notebook, is cloned into the collab. This cloning procedure is basically the same for all the tools that we are developing. And finally, the last item that I would like to present is an item called live papers. Live papers is basically a place where we are putting all the scientific work that we are actually doing, related not only to the
brain simulation platform but, let's say, to the human brain project in general. So here you have a list of papers, which are grouped by year, and you can click on one of them. Let's say we take this one, the first one: you click on it and you have the useful information that you can use, for example, to download the paper or read the abstract of the paper. You may or may not be interested in the paper, so you may want to have a look at the paper itself. But what is important in the live papers is this: we call them "live" because they not only provide information on the paper, not only, let's say, information that you can read, but they also provide links to basically all the data that have been used to write the paper. This paper was about the physiological variability of channel density in hippocampal cells, and of course, to do this job, we used different morphologies and electrophysiological traces, we built a model, we ran the model, and so on, so forth. All these data are accessible through these links. For example, I can open the morphology; but also, given that the idea is that we connect all the tools that we are developing together, I may want to view this specific morphology, which is referenced in the paper, and have a look at it. And this holds for the other data too; for example, there is a nice interface for the electrophysiological traces. We used these traces to optimize the model of the paper, and I can select a segment of a trace. Every trace in this case has two signals: one is the stimulus, the other is the recorded voltage, the membrane voltage. Here I have it displayed, and I can zoom it, for example if I want to zoom in on a single action potential, and so on, so forth.

Where are the data stored? The corresponding data for the paper are not stored in the collab. They can be downloaded, but they are not stored in the storage of the collab; they are stored in what we call containers, which are public, so available to everyone. You will not have access to the entire container, but of course you can download the data here, because they are free. As for metadata, it depends on how the data are stored; I think in this case they are raw data, so you do not have the metadata. But we are providing them, for example, for the models in the brain simulation platform: here we have a model catalogue, where we are collecting all the models that we are using. In the model catalogue you have the list of all the models, and there you have the description; then of course you also have the option to write to the people, because sometimes the metadata are elsewhere. And then I would also like to cite the Knowledge Graph: the Human Brain Project provides an interface to all the data that are being produced, which is called the Knowledge Graph. You can go there and start to explore all the data; most of the data are already in the Knowledge Graph. Putting data into the Knowledge Graph goes through curation processes: you analyze the data, you see whether they are fine or not. You may find there some data that you cannot find elsewhere, or the other way around. So for the models, here you have all the information; for the live papers, you can write to the people; and in the Knowledge Graph basically you have all the information together. OK.

Does it include source code? In this example we have a reference to ModelDB, which is a repository for models. It is not part of the Human Brain Project, but it is a freely accessible platform, so you can just put your model there. Here we are referring to the model, not to the optimization code: the model that reproduces the experimental data, let's say. So this is available on ModelDB. Are all these papers about models and data? In principle yes; by now all journals accepting models require that the authors provide a link to where the code to reproduce the model can be downloaded. I mean, if some paper is not about modeling, even if it sounds useful, it will not be of interest for this list. Right, this list, the live papers, is something in which you have a simulation component, because it is a brain simulation platform, so you are not going to see purely experimental papers in this list. For me, as a developer, a model is a thing with source code to run it, to set it up, to save its output. And if I download the model, how do I run it on my laptop? These are two different issues. If you want to run the model without installing anything, you use the platform, because there it is possible to click and run the model, and you can also do some experiments. If you want to run the model on your laptop, that is outside the platform; in that case you do whatever is needed for the specific model, but this is not part of this course. You see the difference. Do you review the code, the model and the data before publishing? No; of course, this is normal practice anyway. We are putting links in the live papers to all the tools that we are using; for example, for this specific paper we have the models that we ran, but we also used this package, which is BluePyOpt, so there is a link there, and it is freely accessible on GitHub. On the second point: you want to download the model and then play around with it, not necessarily on your laptop, maybe in our environment; and it seems strange to you that all these things are packed into what we call a model, because for you it is just a bunch of software structured around the simulation, like a container that you can modify. Well, you cannot modify the model itself here; but if you want to change parameters, simulation time, yes, you can do this with the use cases. So for example, this is the model that we used for the use case that we are going to see this afternoon, which is called Neuron as a Service. You are going to see this specific simulation in practice later, and in this case you
can change the parameters. In principle you should be able to do whatever you want with the things that we used, so everything is there. With this use case, for example, you can select the recording, the simulation time, the amplitude of the current, the location of the current and of the recording; you cannot change the conductances or the properties of the neurons, because those are part of the paper. You will see this in action later. OK, so I think I have ended my time. Please do not hesitate to ask questions during the day, and again, refer to the contacts for any questions that you may have. OK, thank you.

[Change of speaker; a few minutes of technical setup of the projector and microphone.]

So, in this talk I will discuss what we have to take into account when we manage experimental data. In particular I will focus my attention on the electrophysiological feature extraction, and later I will show you practically how to use the brain simulation platform to extract features from traces; what I will show applies to experimental traces as well. In the first part I will talk about how we build data-driven models. Let me first fix some terminology: a cell can be characterized in terms of its morphological, molecular and electrophysiological properties.
Ok, so how is it possible to construct and optimize models of neurons in a robust way? That is, in a way that makes it much more possible to reproduce our target. Meanwhile, we may open this black box. Here you see an example of the cortex: it is possible to construct a unified model by taking into account all the experimental data, at different levels of integration, and it is worth studying these different levels together, even with a small number of cell types. The aim is to carry out experiments, and obtain results, that are comparable across species. In particular, I will show how it is possible to construct and optimize cells belonging, for example, to the hippocampal area.

As a first step, we need the morphology of the cell; it is possible to use your own reconstructions, or morphologies that are already available. As a second step, we need the ion channels, with their different kinetic properties. There is the possibility to study these biophysical properties on your own, but there are a lot of data available in the literature; there are two possibilities: for example, it is possible to download the kinetics of an ion channel from Channelpedia, or from the ModelDB website. As a third step, we need to characterize our experimental traces: we need to extract the features.
Of course, there are different stimulation protocols; typically we work with current injection. When we have the morphologies, the ion channels and the electrophysiological constraints, it is possible, by exploiting the Blue Brain Python Optimisation Library (BluePyOpt), to optimize the parameters of the model. Typically, when we work with a cell, we optimize the conductances of the ion channels, but not only the conductances; we optimize, for example, also the passive properties of the cell. As an output, there is the possibility to reproduce the experimental traces. Here I show you an example of the correspondence between the experimental data and the data that we obtain after the process of optimization. Typical experimental data are somatic traces, obtained by performing recordings at the soma. The input corresponds to a step-current stimulation. Typically, for this set of data, we work in the range between minus one and one nanoampere, and the duration of the stimulation protocol in this case is over 400 milliseconds. There is the possibility to have a look at the experimental traces locally, for example using Clampfit, but as we will see later, it is also possible to visualize the experimental traces within the brain simulation platform.

Ok, what about the experimental data? Here I show you some data from the Thomson lab at UCL. As you see here, we have different current steps: 0.6 nanoamperes, 0.8, 1 nanoampere. As you can see, there is a great variety in the experimental data. There are many differences, not only in terms of the number of spikes that you can see here for a fixed input current. For example, here you see that the interval between two consecutive spikes is more or less the same; here there is a behavior that looks something like a Morse code, a bursting behavior; here, for example, there is an increase of the inter-spike interval, and so on.
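The optimization loop described here (simulate, extract features, compare against experimental mean and standard deviation, adjust conductances) can be sketched in plain Python. This is not BluePyOpt itself: the `run_model` function, the parameter names and the random search are all illustrative stand-ins for the real compartmental simulation and for the evolutionary algorithm that BluePyOpt uses.

```python
import random

# Hypothetical stand-in for a compartmental simulation: it maps candidate
# conductances to "features". A real workflow would run a NEURON model here.
def run_model(params):
    gna, gk = params["gna"], params["gk"]
    return {"Spikecount": 50.0 * gna / (gk + 1e-9),
            "voltage_base": -70.0 + 5.0 * gk}

# Experimental feature targets expressed as mean and standard deviation,
# in the spirit of the JSON files produced by the feature extraction step.
TARGETS = {"Spikecount": {"mean": 10.0, "std": 2.0},
           "voltage_base": {"mean": -65.0, "std": 2.0}}

def score(features):
    # Sum of absolute z-scores: how many standard deviations each
    # simulated feature lies from its experimental target.
    return sum(abs(features[k] - TARGETS[k]["mean"]) / TARGETS[k]["std"]
               for k in TARGETS)

# Random search over the free parameters; BluePyOpt uses evolutionary
# algorithms instead, so only the shape of the loop carries over.
rng = random.Random(0)
best_params, best_score = None, float("inf")
for _ in range(2000):
    candidate = {"gna": rng.uniform(0.0, 1.0), "gk": rng.uniform(0.0, 2.0)}
    s = score(run_model(candidate))
    if s < best_score:
        best_params, best_score = candidate, s
```

A real setup would replace `run_model` with a simulation and the random search with a genetic algorithm; the scoring against feature means and standard deviations is the part that corresponds to the workflow described in the talk.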
So it is clear that it is necessary to characterize these data in order to have a criterion. The same goes for the morphologies: as you can see here, there are different morphologies, with different soma shapes and different dendritic arborization properties. So how is it possible to take into account this morphological and electrophysiological diversity of neurons? What is commonly accepted is the so-called Petilla terminology. At the very beginning this nomenclature was used for GABAergic interneurons of the cerebral cortex, but we use this classification also for neurons belonging to the hippocampal area. Essentially there are three types of characteristics: morphological characteristics, molecular characteristics and physiological characteristics. I will show you some morphological characteristics: for example, it is possible to distinguish between pyramidal neurons and interneurons; the metrics are different. There are some characteristics that are clear, like the taper: as you can see here, we go from low values to high values. But there are other characteristics that require a little more knowledge from a mathematical point of view. Here I show you the electrophysiological characteristics related to the firing patterns, and we will use these characteristics during the hands-on session. As you can see here, the firing behavior of the cells can be characterized according to this nomenclature: there is a bursting behavior, a continuous behavior, a delayed behavior. And here I wish to focus the attention on the difference between the non-adapting behavior and the adapting behavior. As you can see here, in the non-adapting behavior there is no increase of the inter-spike interval; here, instead, you see that the time between two consecutive spikes increases.
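The adapting versus non-adapting distinction just described can be stated operationally: a trace is adapting when the inter-spike intervals grow along the train. A minimal sketch in Python (the 1.2 ratio threshold is illustrative, not a value from the talk):

```python
def inter_spike_intervals(spike_times):
    # Differences between consecutive spike times (ms).
    return [b - a for a, b in zip(spike_times, spike_times[1:])]

def classify_adaptation(spike_times, ratio=1.2):
    # Label a trace "adapting" when the last inter-spike interval is
    # noticeably longer than the first one. The ratio threshold is an
    # illustrative choice, not part of the Petilla nomenclature.
    isis = inter_spike_intervals(spike_times)
    if len(isis) < 2:
        return "not enough spikes"
    return "adapting" if isis[-1] > ratio * isis[0] else "non-adapting"
```

For example, spike times of 10, 20, 35, 60 ms give intervals of 10, 15 and 25 ms, so the train is classified as adapting.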
So, this diversity from the morphological and electrophysiological point of view characterizes neurons belonging to the neocortical regions, but also to the hippocampal region. And I wish to highlight that the same cell can exhibit a diversity of firing patterns. So it is possible to combine the morphological types with the electrophysiological types: for example, in the case of this cell we have the so-called continuous accommodating behavior, bursting non-accommodating behavior, bursting accommodating behavior, and so on. And the same for the hippocampal neurons: you see here an exemplar neuron that exhibits this kind of behavior, bursting, continuous accommodating and continuous non-accommodating. It is worth noting that there are also physiological features that can be taken into account, for example the so-called dendritic back-propagation. These data are taken from our paper that appeared one year ago, and it is worth mentioning that when we construct an optimized model of cells belonging to the hippocampal area, it is possible to obtain this behavior without imposing these features from the very beginning; I mean that we obtain the dendritic back-propagation simply by constructing our optimized model.

Now I focus the attention on the electrophysiological feature extraction library. This is an open source package that makes it possible to automatically extract features from time series data recorded from neurons. The output is a JSON file in which, for each feature, the mean value and the standard deviation are reported. A few details about the code: the code is written in C++ and the source code is public; here is the link where it is located, at which it is possible to have a look at the code. So, which kinds of features do we have to take into account from an electrophysiological point of view? We work with spike event features, voltage features and spike shape features.
A few words about the spike event features. Typically we work with the so-called inverse first ISI, inverse second ISI, inverse third ISI. We also work with the spike count, that is, the number of spikes in the trace. The inter-spike interval is calculated in this way: we have a vector in which all the spike times are stored. If the length of the vector is larger than one, then we are able to calculate the first ISI; otherwise, the output for the inverse first ISI will be zero. And we proceed in the same way: if the length of the vector is larger than two, it is possible to calculate the second ISI; otherwise, the value is zero.

Other features that are worth taking into account are the voltage features. I wish to mention the voltage base, this red line, which is the average voltage during the last 10% of the time before the stimulus; the steady state voltage, this pink line, which is the average voltage after the stimulus; the voltage deflection begin, which is the height of this interval; and the voltage deflection, which is calculated at the end with respect to the steady state voltage. There are also spike shape features, which take into account the shape of the spikes: first of all the amplitude, then the AHP depth, which is the relative voltage value at the after-hyperpolarization, the AP duration at half width, and so on.

So, what we obtain as the output at the end of the feature extraction, essentially, as I said, is a file like this, in which for each feature we have the mean value and the corresponding standard deviation. There is also the possibility to characterize these features as a function of the injected current; here I report the spike count, the voltage base, the inverse first inter-spike interval, and so on. Later, during the hands-on session, I will show you how to extract these kinds of features automatically by using the online use case for trace analysis. Ok, that's it.
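The features just listed can be sketched in a few lines of plain Python, rather than the C++ library itself: threshold-crossing spike detection, spike count, inverse ISIs (zero when the trace does not contain enough spikes), the voltage base as the average over the last 10% of pre-stimulus time, and the mean/standard-deviation summary that goes into the JSON output. The function names are illustrative, not the library's own.

```python
def detect_spikes(t, v, threshold=-20.0):
    # Spike times as upward crossings of a voltage threshold (mV);
    # -20 mV is the value mentioned for these somatic recordings.
    return [t[i] for i in range(1, len(v)) if v[i - 1] < threshold <= v[i]]

def spike_event_features(spike_times):
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    feats = {"Spikecount": float(len(spike_times))}
    # Inverse first/second/third ISI in Hz (times in ms); zero when the
    # trace has too few spikes, as described in the talk.
    for k, name in enumerate(["inv_first_ISI", "inv_second_ISI", "inv_third_ISI"]):
        feats[name] = 1000.0 / isis[k] if len(isis) > k else 0.0
    return feats

def voltage_base(t, v, stim_start):
    # Average voltage over the last 10% of the time before the stimulus.
    window = [vi for ti, vi in zip(t, v) if 0.9 * stim_start <= ti < stim_start]
    return sum(window) / len(window)

def summarize(values):
    # Mean and standard deviation, as reported per feature in the JSON file.
    m = sum(values) / len(values)
    sd = (sum((x - m) ** 2 for x in values) / len(values)) ** 0.5
    return {"mean": m, "std": sd}
```

On a trace sampled at 0.1 ms with spikes crossing the threshold at 30, 50 and 80 ms, this yields a spike count of 3 and an inverse first ISI of 50 Hz.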
But maybe you need the spike information. Do you do the spike sorting, or spike detection, for this? Because I see you talk a lot about spikes. How are they detected or sorted?

There is a threshold, which is minus 20 millivolts. Minus 20, yes. There are different experiments, and a lot of the data come from other groups, but this is how it works: you collect the raw data, you record the voltage as a function of time, from the soma, and there is a threshold, so the spike is defined automatically by the software, every time you cross minus 20 millivolts. Ok, good. In our case we use a commercial software.

But in spike sorting algorithms you are recording from an extracellular electrode, so you are not getting these kinds of traces, just some very noisy traces with very small spikes, and you don't know where they are. Ok, that is a completely different experimental protocol.

The experimental protocol of square wave injection, like a lot of those, comes from the history of physiology: people have always used square wave injections as the way of characterizing neurons. Have you thought about, or identified, whether there are other kinds of experiments and feature extraction that would be better at characterizing or optimizing a neuron model per unit of experimental time? Imagine you have a limited budget for each cell that you are recording from, 5 minutes or 10 minutes, to collect as much data as you can, and you want to identify the parameters of the model that that cell corresponds to with the smallest variance that you can.
It seems unlikely that the very best thing we could do is what people have always done; it seems like there might be something else. Have you ever looked into that problem?

No, but it is a very good question, and my answer is that if you really are on a very small budget, what you want to do is just the spike times, that's it. I mean, you cannot go and do a ramp, for example, or things like that, because you are going to get so much variability from cell to cell that it is not even going to be useful in that sort of situation.

Well, for example, say you give a chirp stimulus, with a lot of frequency components, so you get more information. That is even worse. But do you think the square wave is optimal, or cost-optimal? No. The chirp is an increasing frequency during time, and it takes a lot of time to do this: the recordings are about 1 minute, because you go from low frequency to very high frequency, and within 1 minute you don't know what the cell is going through. As a matter of fact, in the set of experimental data that you saw before, with only, I think it is 1 second, no, less, 400 milliseconds, you already see a drift in the resting potential during the sweep. So I would say: if you are looking for a very quick, simple and relatively safe first approximation, I would go with the spike times, just like that. But if you have better ideas, then that's fine. Not yet.

Another question. So, first, second, third ISI and so on, right? There are maybe some statistics that could be downstream of those: what is the coefficient of variation of the ISIs, or a fitted decay constant of the ISIs? I know some of those are in the feature extraction, but some are not. In terms of getting model fits that match not necessarily the exact sequence of ISIs, but the dynamical type, that would be the most valuable.

It depends on the kind of traces and the neurons that you are analyzing.
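The coefficient of variation of the ISIs mentioned in this question is straightforward to define; a minimal sketch:

```python
def isi_coefficient_of_variation(spike_times):
    # CV = std(ISI) / mean(ISI): zero for a perfectly regular spike train,
    # larger for irregular or strongly adapting trains.
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    if len(isis) < 2:
        return None  # undefined with fewer than two intervals
    m = sum(isis) / len(isis)
    sd = (sum((x - m) ** 2 for x in isis) / len(isis)) ** 0.5
    return sd / m
```

A regular train (spikes at 0, 10, 20, 30 ms) gives a CV of 0; spikes at 0, 10, 30 ms give intervals of 10 and 20 ms and a CV of 1/3.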
We tested those features too, in the preliminary phases of our optimization. They were not reliable, and we ended up with spike times and spike count: the spike count gives you the average frequency, and the spike times implicitly give you all the patterns too. And by spike times you mean ISIs, right? ISIs.

This time I want to give you an idea of how we do single cell models, from the scientific point of view, and why. I'm sure that when you signed up for this course you said: wow, a brain simulation platform, so after this I'm going to simulate the entire brain. Of course this is not true. It is not possible, because of several problems. One problem, of course, is that we don't have the computer technology to run a simulation of the whole brain, not even with detailed models using all the morphologies, but not even with point neuron models. We don't have a model that is close enough to the experiments to model everything, and also we don't have most of the data that we would need to implement this. Ok?

So why should we focus on a single neuron? Of course, this depends on the questions that we want to ask. For the biochemical pathways of synaptic plasticity, of synaptic transmission, we don't need a network; we don't even need a neuron, we need just the set of equations defining the kinetics of our synaptic transmission, and we work on that. On the other side, if we want to model behavior, of course we need at least the brain area connected with that behavior. So what we can do, probably, is to be interested in doing a single cell model. I will give you a couple of examples, but first of all, for those of you who don't know how to do a model: the tool that we use to implement a single cell model is NEURON. NEURON is a public, open source code that runs simulations of any kind of neurons.
Basically it is a highly efficient engine for numerically solving a large set of ordinary differential equations. So what we do to implement a neuron is to start from the morphology, split the membrane into very small pieces, implement each piece as a set of ordinary differential equations, and add any properties that the specific neuron has in the real system. Then we put everything together as an electrical equivalent circuit. But of course we need to take into account the active properties of the neuron. You know that the spike, the action potential, is the end result of the interaction of ion channels that are spread all over the membrane surface of the neuron, and that open and close with different timing and different kinetics, generating this 100 millivolt signal that is called the action potential.

Ok, so how many of you are familiar with the Hodgkin-Huxley equations? Ok, good, this will make my life much easier in this talk. Here is evidence of the existence of ion channels: in this case a microscope picture of an NMDA channel, which is open or closed; this is the surface of a neuron, and this is a model of the crystal structure of the ion channel. You see the pore: the proteins that form the channel through the membrane make it possible for ions to go in and out of the cell. And this is a formulation of the Hodgkin-Huxley equations. We take these equations, with the parameters given by the experimentalists, to implement different types of ion channels in the membrane, in such a way that we can put these equations in each compartment in which we think there is, or it has been shown that there is, a specific ion channel.

One example in which we can see the usefulness of... well, if you stop the air conditioning I am going to sweat like crazy. Ok, good. Ok, thank you. One example in which we can see how useful it is to implement a single neuron model is Alzheimer's disease.
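The machinery described above, membrane capacitance plus Hodgkin-Huxley sodium, potassium and leak currents, one set of ODEs per compartment, can be sketched for a single compartment in plain Python. This is a textbook squid-axon model with simple forward Euler integration, not NEURON and not the hippocampal channels discussed later in the talk:

```python
import math

# Standard Hodgkin-Huxley squid-axon parameters.
C = 1.0                                 # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3       # maximal conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.387   # reversal potentials, mV

def safe_exp_ratio(x, y):
    # x / (1 - exp(-x / y)) with the removable singularity at x = 0 handled.
    return y if abs(x) < 1e-7 else x / (1.0 - math.exp(-x / y))

def rates(v):
    # Voltage-dependent opening/closing rates for the m, h, n gates.
    am = 0.1 * safe_exp_ratio(v + 40.0, 10.0)
    bm = 4.0 * math.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    an = 0.01 * safe_exp_ratio(v + 55.0, 10.0)
    bn = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(i_amp=10.0, t_stop=50.0, stim_start=5.0, dt=0.01):
    # Forward Euler integration of one compartment under a step current
    # (uA/cm^2) switched on at stim_start (ms).
    v = -65.0
    am, bm, ah, bh, an, bn = rates(v)
    m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)  # rest values
    trace = []
    for k in range(int(t_stop / dt)):
        t = k * dt
        i_ext = i_amp if t >= stim_start else 0.0
        i_na = G_NA * m ** 3 * h * (v - E_NA)
        i_k = G_K * n ** 4 * (v - E_K)
        i_l = G_L * (v - E_L)
        dv = (i_ext - i_na - i_k - i_l) / C
        am, bm, ah, bh, an, bn = rates(v)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        v += dt * dv
        trace.append((t, v))
    return trace

def count_spikes(trace, threshold=0.0):
    vs = [v for _, v in trace]
    return sum(1 for i in range(1, len(vs)) if vs[i - 1] < threshold <= vs[i])
```

With a 10 uA/cm^2 step the model fires repetitively once the current switches on; NEURON does the same job with implicit integration schemes and thousands of coupled compartments.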
So, there is a set of experimental data telling us what happens in vitro in animals; you know that in many cases we have animal models of diseases, epilepsy, Alzheimer's and things like that. In this case there were in vitro experiments telling us that in an Alzheimer's disease model, with an over-accumulation of beta amyloid on the surface of the membrane, there is a 40% reduction in the potassium KDR channels, a 60% reduction in another type of potassium channel, and a 50% reduction in the KA. So basically all the channels are going to be reduced, because the plaques are going to interfere with the function of the membrane; however, they don't do it uniformly. And each of these papers was showing the effect limited to a single ion channel; no one was checking what happens if you put everything together. So we set up a model in which we have a neuron, a CA1 neuron in this case, we cover it with a bunch of synapses, and we model Alzheimer's by implementing those changes randomly on different pieces of the membrane, affecting 30% of the membrane, or 90% of the membrane, to model the progression of the disease. And once we do that, we can implement Alzheimer's. This is now a live simulation; let me arrange everything so you can see what's going on here.
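The manipulation just described, reducing selected channel conductances on a random fraction of the membrane, can be sketched as a simple data transformation. The per-compartment dictionary layout and the channel names are purely illustrative; the reduction factors follow the percentages quoted from the experimental papers:

```python
import random

# Fractional conductance reductions reported for the Alzheimer's model
# (illustrative mapping of the percentages quoted in the talk).
REDUCTIONS = {"kdr": 0.40, "k_other": 0.60, "ka": 0.50}

def apply_alzheimer(compartments, fraction_affected, rng):
    # Pick a random subset of compartments (modeling disease progression
    # via fraction_affected) and scale their conductances down in place.
    n_affected = int(round(fraction_affected * len(compartments)))
    affected = rng.sample(range(len(compartments)), n_affected)
    for i in affected:
        for chan, red in REDUCTIONS.items():
            compartments[i][chan] *= (1.0 - red)
    return sorted(affected)
```

With `fraction_affected=0.3`, 30% of the compartments lose 40% of their KDR conductance and 50% of their KA conductance, while the rest stay intact; raising the fraction toward 0.9 mimics the progression of the disease.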
Ok, so we have a neuron which is doing its own thing, with random background synaptic activity; you will see that there is some activity here and there. This is the space plot, which tells me what is going on along this dendritic path, and you see there is background activity; occasionally there is a spike in the dendrite that propagates to the soma. So now, what happens during Alzheimer's?

Is there stimulation here? No, there is no stimulation; these are the synapses, which are activated randomly. A uniform random distribution of 50 synapses on the neuron, so there is no external input. This is just an example, to show what happens if we apply all the changes that have been shown experimentally for the ion channels during Alzheimer's. If we do this, we see that the neuron, as expected, reduces its firing: you reduce the channels, the plaques interfere with the normal generation of the action potential, and it basically stops firing. But once we have a model, we can also try to figure out a way to rescue the normal conditions, by applying some pharmacological manipulation to some of the currents. And we did it with the KA: simulating a KA treatment, the neuron goes back to the normal conditions. So in this way we can study the effect of Alzheimer's, but also test possible treatments and things like that. This is one example in which a single neuron simulation may be important, because this can run on my laptop; a network, if I want to model a cognitive function, cannot run on my laptop, I need a much larger allocation.

The other case is field potentials, external fields. You have probably been following the problem of figuring out whether the external electrical noise in the environment is a problem for the brain or not, and it has not been clearly decided yet; there are no clear experimental data showing that there is an effect. I'm talking about power lines, power frequency, meaning 50 or 60 hertz, not about cell phones, because cell phones use a frequency that is too high to enter the brain, it cannot pass the skull. But 50 hertz goes through, and it happens that 50 hertz, a 20 millisecond cycle, is the same as the membrane time constant of hippocampal neurons, so the hippocampal neurons are going to resonate at 50 hertz. So it is important to study what the effect is at the single neuron level. As you can see here, the European Union recommends 10 kV per meter as the limit of exposure to low-frequency electromagnetic fields close to power pylons. That is fine, but consider also electrical appliances: an induction hob, if you watch MasterChef and all these food contests in which they use induction hobs, these things have an emission 5 times higher than what is suggested by the EU as a limit. Of course I don't expect the cook to stand right on top of it, because those are hot spots, but still, it is 5 times higher; the food processor, the planer: again, you don't go too close to those things, but there are electrical noises at the power line frequency, which can sum up in some cases. So what is the effect on the neuron? We can do a simulation. I will explain the plot here, but let me run the simulation. In this simulation we have the extracellular field; the direction of the field, which in this case is a planar field, goes from the top to the bottom of the neuron. This is a CA1 neuron, and this is the somatic activity. Let me extend this here and run the simulation again: the neuron has sub-threshold activity for about 100 milliseconds, with no spikes, floating around, and at 100 milliseconds I turn on the field, I think it is 5 kV per meter, and you see how the neuron replies to the external field. There it is. Ok, yes: how do you set this up?
You mean how do I implement the simulation? Ok. The idea is that whenever I can model something with an equation, or a set of equations, I can plug those equations into the equations that are used to make up the neuron. So I start with the neuron under physiological conditions, with all the equations for the ion channels, I mean the sodium and the different types of potassium channels, so I have a neuron which reproduces more or less the experimental features. Then, suppose I want to model Alzheimer's: I just apply the changes that the experiments tell me are a result of Alzheimer's, the reductions in the ionic conductances. In this case, for the electric field, it is just one equation, because the field is a perturbation of the membrane potential. But since the effect of the field depends on the distance, every compartment in the neuron is affected differently by the external field, and so the equation is going to contribute a different amount throughout the dendrites, according to where the field is. In fact, if I rotate the field in this direction, for example, the effect on the neuron is completely different. So the idea is that as long as you have the equation that describes the feature you want to model, you can put that equation in. Well, yes, we are going back to the original question: you need to know what you are doing. Ok, so this is the reason for writing a paper. If you have a question that you want to explore scientifically, you need to have the background to know how to implement it, how to make hypotheses, and then make real constructions in terms of the equations that you want to test in the model. So for example, if you want to model... I don't know, what is the problem that you have in mind? No problem yet?
Ok, so I can give you plenty of problems. If you want to model epilepsy, you need to make some hypotheses based on experimental data. For example, a doctor came to me 7 years ago and said: ok, look, I have a baby with neonatal febrile seizures, the epilepsy of babies, you know, the temperature rises and they go into convulsions. And he found that the baby had a channel mutation, in the CA1 region, in KM, one of the potassium channels, and he told me: if I give you the kinetics of the channel in control conditions and of the mutated channel, can you implement the model? I said yes, give me the data and I will implement the model. So we did the model, and we found that if you change the kinetics of that specific channel, the cell becomes much more excitable, and this is the reason for having seizures more easily; they are related directly to the mutation of the KM channels. So for each particular problem you need to hypothesize something, or have some specific data that tells you how to modify the equations that you are implementing.

Ok, so in this case, I mean, you see that if you rotate the electric field, the neuron spikes much less. It means that the direction of the field with respect to the principal axis of the neuron is going to affect the behavior of the neuron: if the neuron is perfectly aligned with the field, it is going to be affected; otherwise it is not.

Sorry, what about the dependency on the frequency? Because you mentioned the resonance principle; so have you also swept the frequency, and have you observed this phenomenon?
Well, the effects are going to be reduced. In this paper we were interested only in 50 hertz, because it is the power line frequency, but if we change the frequency of the field there is also going to be some effect at the level of the neuron; it is going to be less and less affected, because high frequencies are not going to generate much depolarization: the membrane is so slow that it can basically dampen out all the high frequency oscillations. Ok, so it will change, yes.

So basically, what the model is suggesting is that, since the effect strongly depends on the alignment between the field and the principal axis of the neuron, and the hippocampus is arranged in 3D, very few neurons would be affected by this at any given time. Because if you move your head and go around, you are not going to affect a lot of neurons. However, suppose you live close to a power pylon, or close to a large generator of electric fields, and you study something, you memorize something. You are memorizing using a set of spikes, because, you know, the spike timing is important to potentiate or depress the synapses. We ran the same simulation; these are raster plots, I assume you all know what a raster plot is. So you have this series of spikes: we are replaying CA3 input on a CA1 neuron. We got experimental data recorded from CA3, which is the region giving most of the input to CA1, we played this over different trials on our neuron, and then we applied the electric field. And there are some differences: in general you don't see much, but in many cases you see that the electric field is going to generate more spikes, or to slightly shift a spike that is already there. In this case, for example, without the field you get only one spike; with the field you get, during one second, three spikes, and so on. It means that if you learn something with a given pattern under electric field conditions, and then you move away, you change home to a very bucolic setting, you are probably not going to recall well what you learned under the field, because your spike sequence is different. Only in very specific cases; I'm not saying this is widespread, but there may be cases in which it is important.

That said, how do we do it? As Rosanna has told you, we start from the traces. In this case these are CA3 neurons feeding input to CA1; it doesn't matter from this point of view, because the workflow is always the same. We start from the traces: here we have a CA3 neuron, this is the morphology reconstruction, these are some pictures the experimentalists gave us, and these are different traces showing that neurons from the same population of CA3b neurons, under the same input current, can generate different patterns, completely different patterns. These are recordings from different neurons of the same population, and you can have non-adapting or weakly adapting, delayed, bursting, adapting. So how are we going to model this, and why? We went on and said: ok, let's start from the CA1 neurons. (I'm going to run a little bit late with this.) Let's start from the set of CA1 channels, because this was a new neuron type for us, CA3 instead of CA1, like amygdala rather than cortex, rather than basal ganglia. CA3 and CA1 are different in their morphology and in their electrophysiological features, but we don't have any channel kinetics specific to CA3. The experimentalists... every time I ask, who is doing experiments here, electrophysiologists? Every time I ask one of you guys to give me the kinetics of a specific neuron, you say: no, I'm not going to do this, because you cannot publish these data in Nature or Science; they are very fundamental data, but they are not fashionable from this point of view. So we don't have CA3 channels, and we start from the CA1 channels: sodium, KDR and KA, the basic channels that you see when observing CA1. So we start from those, and we are trying to run
the simulation, using only those channels, to check what's going on with our model. Ok, so let's see. These are typical experimental data, the ones that you saw before, and we want to model this. Let's start from this one, which is the simplest. Why is it the simplest? Because it is a simple train of action potentials with very little difference between one spike and the next, just a little increase in the timing. Ok, so from our point of view it is the simplest one; the ones with the bursts are the most difficult, because, as you can imagine, there is a very highly involved dynamics going on between all the channels. So we thought: ok, we can go ahead with the channels from CA1, and this is the result. Here I did it manually, not with the optimization, as you will see later: since I know what I am doing, I can change the different conductances manually, following my instinct, and I cannot do better than this. Of course this is wrong, because the threshold for the action potential is too low. In the experiments, the threshold for the action potential in CA1 neurons is much lower than the one for CA3b, where everything is shifted up, and there is no way I can reach a threshold of minus 30 by changing the conductances here. I am not going into the details of why I cannot do this, but basically, if I try to reach this threshold, the sodium channel inactivates, so I cannot get an action potential. So the hypothesis here is that the channels in CA3 are shifted by 25 millivolts, all at once. And in fact, if I shift all the channels by 25 millivolts, you see that now I have this trace here, which is much closer; let me delete the previous one. Yes, in both cases I did it manually, because I have an intuition of how the different channels work. But, you know, the optimization program needs your input in terms of what can change. So if you say: in my opinion the shift is important, you put the shift as a parameter in your optimization, and the optimization program is going to
check also for different shifts; but if you don't put it in from the beginning, the program is not going to suggest this to you. So this is a very important point. Yes, the shift is in the kinetics, which also means the voltage at which the sodium channels activate, which is different in CA3 and CA1 according to the experiments.

Ok, so let's go to the more complicated stuff: adaptation. In this case it is clear that something is going on, because I have this spike, then the next spike very close, then a long delay, then even more delay. I cannot explain this if I look at the kinetics. If I do look at the kinetics, you see that the time constants for the kinetics are around 10 milliseconds, 30 milliseconds, 12 milliseconds; it is this red axis here. So all the channel dynamics are much faster than the delay that I am seeing here, you realize that? After at most 30 milliseconds, the channels are back to their resting state, the original conditions they had here, so there is no way they can produce this kind of additional delay. There must be... what is the hypothesis? The hypothesis is that there is a mechanism going on there which is affecting the inter-spike interval, which means, in my world, that I need an ion channel kinetics with a much larger time constant. And this is done with the so-called M channel, the muscarinic potassium channel, which at physiological conditions has a time constant of almost 200 milliseconds. So this can help, because what happens is this: if you start from the resting potential and you have an action potential which brings the membrane potential up to plus 30, you go from here to here almost instantaneously. All the channel kinetics are trying to follow this sudden change in the membrane potential, but they can't: sodium and potassium, KA and KDR, can make some steps toward the new condition at 30 millivolts, but the KM is too slow; it just changes its activation state, its activation range, from here to here with a very slow change, because in one millisecond the action potential goes here
This means that after the first action potential the KM channel does not have enough time to go back to its original activation state; it stays open a little bit. And since it is a potassium channel, with a driving force toward minus 80, this extra open conductance is going to delay the next spike. The next spike activates it a little bit more, so you have a kind of memory effect, and this is what generates the growing delays that you see between the spikes.

So let's run the simulation again, and now we have the adaptation, the adaptation from the KM, with the conductance adjusted the best I can. This is the best I can do. No, strong adaptation, that is another simulation; let me delete this. Strong adaptation with KM: this is the best I can get, and it tells me that KM is not enough, because I am missing this very long delay. There must be another mechanism, on top of the KM, which can delay the spikes even more. Yes. No, no, this one I have done manually; in principle an optimization could do it, but my intuition was that I am much better than any optimization so far. After 40 years of this I know much better than any program. So this is the best I can do.

There is another kind of channel, present in CA3 as in all neurons: the calcium channels. And wherever there are calcium channels there are also calcium-dependent potassium channels. So let me add calcium channels and calcium-dependent potassium channels; let me delete this one. One additional thing: if I add only the calcium-dependent potassium channels, the best I can get is this. What is going on here is: you have an action potential, some calcium entry, the calcium-dependent potassium channels open a little bit, more calcium enters, and you begin to see the delay. But the delay saturates at about 100 milliseconds, which is of the order of the calcium pump extrusion time.
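The memory effect can be sketched with a toy forward-Euler integration of a single slow gate: during each 1 ms "spike" the gate's target is fully open, and between spikes it relaxes back toward closed with tau = 200 ms. All numbers here are illustrative assumptions, not the fitted KM model; the point is only that the residual activation grows from spike to spike.

```python
# Toy KM-like gate over a spike train. Assumed, illustrative parameters:
TAU_MS = 200.0   # slow muscarinic time constant
DT = 0.1         # Euler step, ms

def step(m, m_inf, dt=DT, tau=TAU_MS):
    """One forward-Euler step of dm/dt = (m_inf - m) / tau."""
    return m + dt * (m_inf - m) / tau

m = 0.0
trace = []
for spike in range(5):
    for _ in range(10):    # 1 ms depolarised: target fully open (m_inf = 1)
        m = step(m, 1.0)
    for _ in range(200):   # 20 ms inter-spike interval: target closed
        m = step(m, 0.0)
    trace.append(m)        # residual KM activation after each spike

print([round(x, 4) for x in trace])
```

Because 20 ms is only a tenth of tau, the gate never fully deactivates between spikes, so `trace` climbs monotonically: each spike inherits some open potassium conductance from the previous ones, and the next spike is delayed a little more.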
The calcium enters the cell but is then pushed out by the pumping mechanism, so the calcium concentration decays and then you have a spike. So this alone cannot be the reason for the long delays. But if I put both together, KM and calcium-dependent potassium channels, I can get to this point. Not to this point, of course; for the adaptation this is the effect. OK, let me say this is crazy, but it is almost OK: look at these inter-spike intervals, 100, almost 200, 400. Now I raise this and do this, and you see that now I am very close, which means that I need both. And the same with the bursting. With the bursting I don't know exactly what's going on here, but I can also get a bursting very close to the experiments, by manipulating just the conductances, and of course after shifting the channels by 25 mV.

So at this point, here is what we did. We have all these conductances, and we know that there are millions of combinations of them. I had just tested one that was OK for our purpose, but I was almost sure that by manipulating the conductances a little I could find many cases in which a set of conductances gives a good reproduction of the experimental data. So we went through it: at this point I needed a small supercomputer allocation, and we ran simulations with all possible current injections and all possible combinations of conductances, and classified the results automatically, in such a way that we could figure out the best combination of conductances for producing a bursting trace, an adapting trace, and so on. You can see here the different things going on, the classification algorithm that says this is a burst, this is not. Of course some traces are going to be ugly, but don't tell me, you experimentalists, that you never get ugly traces in your experiments; the model is just doing the same. So let me stop here, just to look, for example... Yes, calcium, yes, the concentration. Well, this is a very good point.
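The scan described above amounts to a grid over conductance combinations plus an automatic trace classifier. A hypothetical miniature of that pipeline is below; the ISI-based thresholds and the grid values are inventions for illustration, not the actual classification algorithm used on the supercomputer, and the simulation step itself is omitted.

```python
from itertools import product

def classify(spike_times_ms):
    """Toy classifier in the spirit of the automatic classification
    described in the lecture: label a trace from its inter-spike
    intervals. All thresholds are assumed, illustrative values."""
    isis = [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    if len(isis) < 2:
        return "non-firing"
    if min(isis) < 10.0 and max(isis) > 5 * min(isis):
        return "bursting"   # clusters of very short ISIs separated by pauses
    if all(b > a for a, b in zip(isis, isis[1:])):
        return "adapting"   # ISIs grow monotonically
    return "regular"

# Skeleton of the grid scan: each (gKM, gKCa) pair would be simulated
# and its voltage trace classified (simulation omitted here).
g_km_values = [0.0, 0.5, 1.0]    # arbitrary grid points
g_kca_values = [0.0, 0.5, 1.0]
grid = list(product(g_km_values, g_kca_values))
print(len(grid), "conductance combinations to simulate")
```

In the real study the grid also spans current injections and many more conductances, which is why even this embarrassingly parallel scan needed a supercomputer allocation.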
Based on what is known about the inside and outside concentrations of the ions, the reversal potentials for sodium and potassium are kept fixed. The calcium is not: the calcium reversal potential depends on the calcium influx, so in these simulations, in these files we are using, we have the Goldman-Hodgkin-Katz equation for the reversal potential of calcium, but not for sodium and potassium.

So, you see there are the traces, and I want to check one ugly trace; for example, this one was classified as, let's see, non-adapting, yeah. So now we have a set of conductances, and what is the result? We can put everything together and analyze the results to see which conductances most affect each different condition. Delayed: if you have a delayed trace, which means normal behavior but with an initial delay, you need the so-called D-current, and this is important because it has been shown that the D-current is indeed another potassium current with the kinetic features needed for this kind of behavior. Adapting: the adapting cells need both KM and calcium-dependent potassium channels, but in 95% of the cases the calcium-dependent potassium conductance was higher than the KM; the opposite holds for bursting. So with this model we can actually predict the channel distribution, or expression, in any CA3b neuron just by looking at the trace: if a trace is bursting, then the calcium-dependent potassium conductance must be lower than the KM. And they need both for this kind of behavior, because there were no simulations in which we could reproduce these data with only one of the two.

OK, so, just quickly going through this: this effect, in which many different sets of conductances reproduce a set of experimental traces equally well, is a very well-known effect in biology called degeneracy. And it is important, because if you record from different neurons, even if they have very similar traces, they are not going to have the same conductances.
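The simplest way to see why the calcium reversal potential must float while sodium and potassium stay fixed is the Nernst relation, which tracks the changing intracellular concentration. Note this is a sketch: the lecture's model files use a Goldman-Hodgkin-Katz treatment instead, and the concentrations below are illustrative values, not the ones in those files.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol

def nernst_mV(conc_out, conc_in, z, temp_c=35.0):
    """Nernst reversal potential in mV for an ion of valence z."""
    t_k = temp_c + 273.15
    return 1000.0 * (R * t_k) / (z * F) * math.log(conc_out / conc_in)

# The calcium reversal rises as the pump drives intracellular calcium down:
for cai in (1e-3, 1e-4, 1e-5):   # mM, illustrative intracellular values
    print(f"[Ca]i = {cai:g} mM -> ECa = {nernst_mV(2.0, cai, 2):.1f} mV")
```

Because intracellular calcium swings over orders of magnitude during activity while sodium and potassium gradients barely move, only the calcium reversal needs to be recomputed during the simulation.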
That is because the conductances are dynamically changed by the neuron's activity. Phosphorylation processes, for example, change the density of channels, and you can tell me yourselves that there is a lot of biochemical activity that will change the channel densities in any given neuron based on its past activity. So the outcome is that many different combinations work. Why? Basically, because the system overall must be robust enough to adapt to any changing condition.

We tested this for CA1 neurons, and in this case we used the tools that you are going to see later in the talks. We started from a bunch of morphologies for interneurons and pyramidal neurons, of course from a good experimentalist who gave us the three-dimensional reconstructions for the different morphologies. And of course we also started from traces, some of which you saw with Rosanna: almost 1500 experimental traces, from which we extracted 107 physiological features. We will see later the use case we use to extract these features automatically from a bunch of traces. With those features we implement the model: we have the morphologies, we have the channel kinetics from my previous CA1 models, we have the feature extraction, so we have all the experimental data we need to implement and run the optimization automatically. And you see that in many cases the experiments look very similar to the models, including the predictions.

Yes. We did not check this, because we are not interested in making reduced models; my guess is that you would still get good results, but with a completely different set of conductances. Yes. However, if you fix the conductances obtained for one morphology and you change the morphology, saying, I am going to check exactly this set of conductances on another morphology, this does not affect the result, or affects it very little; we have a figure on this in the paper.
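Feature extraction of the kind described, 107 features from almost 1500 traces, can be pictured with a toy three-feature version. The feature names and example spike times below are stand-ins of my own, not part of the actual pipeline:

```python
def extract_features(spike_times_ms):
    """Toy electrophysiological feature extraction: three illustrative
    features standing in for the 107 used in the real pipeline."""
    isis = [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    features = {"spike_count": len(spike_times_ms)}
    if isis:
        features["mean_isi_ms"] = sum(isis) / len(isis)
        # ratio of last to first ISI: > 1 indicates spike-frequency adaptation
        features["isi_adaptation"] = isis[-1] / isis[0] if isis[0] else None
    return features

feats = extract_features([10.0, 30.0, 60.0, 110.0])  # hypothetical trace
print(feats)
```

Scalar features like these are what the optimizer actually scores against, which is why the same feature targets can be met by many different conductance sets: the degeneracy discussed above.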