Welcome, everyone, also from my side, to this third INCF short course on neuroinformatics. As David said, I am Marja-Leena Linne from Tampere University of Technology in Finland, where I lead a computational neuroscience research group and also coordinate the national neuroinformatics activities in Finland. Like David Willshaw, I have been organizing or participating in the INCF short course for four years now, and it has always been devoted to neuroinformatics. We have excellent speakers here. I will use this 30-minute slot to give you an overview of and motivation for neuroinformatics: what kind of data neuroinformatics deals with, how we look at it, what the major tools are, and then also the applications, which will of course be important in the future. We are here to understand how the brain works. And we also want to understand the genetic, molecular, and cellular mechanisms behind how we learn, how we remember, and how we understand. This is our goal in the end. But it's not all. There is the additional issue that there are a lot of neurological diseases and disorders that are becoming more and more common in our society. And without understanding the healthy functioning of the brain, we are not going to understand how these disorders or diseases develop and how to cure them. So this is another kind of motivation for us to do neuroscience and neuroinformatics in the way we describe it. When we think about the brain, and when I look at it as a combined computational neuroscientist and electrophysiologist, I think that we actually understand quite little currently. This is a bold statement, but I will justify it next. So let's look first at the mammalian brain, in this case the human brain.
What we already understand quite well at this point is which brain regions get activated when we perturb the system; we have gained this understanding using various imaging and electrophysiological measurements on humans. We also understand quite well what types of cells are there, especially the neuronal types: what their morphology is, what their biophysical properties are. This is what we are currently really good at. And we understand, for example in the case of the cerebral cortex, where the cells are, how each cell is positioned, and so on. But there are many other things that we still don't understand. One of the important things, at least to me, is that we don't understand how cells get connected. How do these networks in the brain develop? We know that during development we get many more connections, which then get discarded later on during life. And one could ask who is governing this and how it happens. The second issue that we don't understand well is that there is a lot of homeostatic control, not to mention plasticity. How do these phenomena emerge? What are the molecular, cellular, and network-level mechanisms that make us learn and remember? And how are this homeostasis and plasticity possibly intermingled? The third unknown aspect to me is that we don't understand well how the other elements in the brain, other than the neuronal elements, like vessels, glial cells, and possibly also newborn cells, are involved in our cognitive phenomena, cognitive processing, information processing, plasticity, and homeostasis. So there is a lot to dig into. And the other aspect is that it's not only one brain region, like the cerebral cortex that I talked about, but many other brain regions. If you look at the cerebellar cortex, for example, it's a completely different structure.
And understanding how information is processed and how these dynamic phenomena emerge in these structures is a challenge. There we need something more than what we've been using so far. Let's look back a little at the history. Traditionally, all these questions that I was addressing have been studied in more or less separate fields, listed here: molecular neuroscience, cellular neuroscience, neuroanatomy, neurophysiology, systems neuroscience, cognitive neuroscience, and lately also computational and theoretical neuroscience. But we have to realize that these fields have been relatively separated, and researchers may not even be talking between these different subdisciplines. This is something we need to change in the future, especially considering the fact that we have a variety of experimental techniques, ranging from subcellular resolution up to whole-brain scale, and we need to make sense of the data that we are now massively producing. We are entering a data age, and something needs to be done so that we can understand how the dynamics emerge in our brain from the structure and function that exist there. So this is a very brief introduction and motivation to justify why we are doing this course and why we try to make you believe that we really need some additional tools, not only the tools listed here; there are many more on the experimental side, so this is only a glimpse of them. We at INCF consider that the answer to making sense of the complexity that we have in the brain is neuroinformatics. Neuroinformatics is defined as the integration of information across all levels and scales of the nervous system, from genes to behavior. So we want to start understanding not only the behavior or the dynamics at one level, but to combine them and start understanding across multiple scales.
Neuroinformatics can be considered to lie at the intersection of neuroscience and information science. Now a little bit of background information about INCF: who is behind it, who is funding us, and who is making this course available and allowing us to be here. INCF was established in 2005, and its primary aim is to promote the establishment of infrastructure that will help the acquisition and annotation of diverse datasets, to achieve multi-scale and multi-omic data integration, again from genes to behavior. This is the ultimate aim. And here are some of the missions that were originally proposed for INCF. INCF is supposed to be very international, to provide infrastructure support, and to promote freely accessible data and analysis resources and a seamless flow of information and knowledge. So it's a neutral organization that 17 countries have joined so far. To make neuroinformatics a little more practical, I have listed here a couple of items that I consider important parts of neuroinformatics. First of all, INCF develops standards and good practices for data acquisition and storage, provenance, data sharing, publishing, analysis, visualization, and modeling and simulation. It's an organization that tries to bring these various views together and to establish standard ways of how to measure signals, how to take images, how to share models, and so on. The other aspect is that it develops federated databases: there already exist a lot of databases all over the world, but the aim is to make them more accessible, perhaps through some kind of single access point. And on top of all that, INCF promotes the development of theoretical, computational, and simulation tools for the quantitative modeling of neural functions. Let's look next at a typical situation in neuroscience research.
So there could be questions like these: are you willing to give your data to be shared, and are you willing to take the time to make it available? Are there places where you could store your data, and can somebody else read your data and understand it? And the answer from a neuroscientist who is really busy trying to finish a PhD thesis or get results published is normally: no, I don't have time for this. Unless it's planned from the beginning of the project. What INCF and also other initiatives worldwide try to achieve is, first of all, to promote the development of ontologies, the vocabulary of neuroscience data, and automated methods for metadata capture; metadata is the data that is additional to the signal, image, or other original raw data. Basically, this means that everything would be automated. You can still have something in your notebooks if you are a wet-lab person, but in the end everything would be documented in a digitized manner. Then, as I already pointed out, there is an emphasis on developing federated database systems and on developing standardized and comprehensive metadata schemes, meaning that we have a model for the metadata, not a model of the neural function but a model of your data, which would be carried through the workflow from the recording until, perhaps, the building of a mathematical model. This metadata would always follow the data that you are using, whatever you are doing in the neuroscience field. And of course INCF and other organizations globally favor developing methods to ensure the interoperability of tools and systems. One example of this kind of federated database is the INCF database that was set up a couple of years ago. The idea there is that you have local databases, maybe one per country, and these could then be made visible through a single access point, such as the INCF services, or by some other means.
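The metadata idea above can be made concrete with a small sketch. Below is a hypothetical metadata record for a single recording; the field names and values are purely illustrative, my own invention rather than an official INCF or other standardized schema, but they show the idea of machine-readable metadata, with provenance, that travels with the raw data through the workflow:

```python
import json

# A hypothetical metadata record for one electrophysiology recording.
# Field names are illustrative only -- not an official INCF schema.
metadata = {
    "subject": {"species": "Mus musculus", "age_days": 21},
    "recording": {
        "modality": "patch clamp",
        "sampling_rate_hz": 20000,
        "units": "mV",
    },
    # Provenance: each processing step is appended, so the history
    # of the data can be reconstructed later.
    "provenance": [
        {"step": "acquisition", "tool": "amplifier rig 1"},
        {"step": "filtering", "tool": "low-pass 1 kHz"},
    ],
}

# Machine-readable: the record can be serialized, shared through a
# federated database, and parsed back without losing its structure.
serialized = json.dumps(metadata)
restored = json.loads(serialized)
print(restored["recording"]["sampling_rate_hz"])  # -> 20000
```

The point of the round trip at the end is exactly the "understandable, machine-readable" requirement: another lab's software can parse the record and know the sampling rate and units without a human reading lab notebooks.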
INCF is by no means alone in doing these things. There are many other large-scale brain initiatives that are a little more research-driven; we have to remember that INCF is there to support standards development, tool development, and such. There are research projects like the Allen Institute for Brain Science, the Human Brain Project, and One Mind in the United States that are these kinds of large-scale brain initiatives. They have research questions, but at the same time they also develop infrastructure and tools to do neuroinformatics research. And INCF is linked with most of these big initiatives in that it provides standards, tools, and infrastructure overall. What, then, are the subfields of neuroinformatics? When we first looked at this in the INCF training committee we had a variety of suggestions, but we came up with pretty much this list. First of all, there is neurobiology, which is the starting point: without studying the brain experimentally, we wouldn't know what's going on there. Then there are databases, a big field from the informatics point of view. There is data analysis, which is a big field as such. There is modeling, both theoretical and data-driven; there are simulation, computation, and workflows; and at the end there is also neuromorphic and neural engineering. This is how we consider neuroinformatics. There may be other views, but this is how we take it. And this has actually been the starting point for developing this short course. We want to give you a brief glimpse of all these fields, to give you a starting point to go on to more advanced courses and more advanced studies. And this is how the program came out. As I pointed out, there are talks or lectures for all these subfields. And of course the problem is that we cannot go into too many details with most of these things.
But I think that during these two days you will at least get an idea of what the challenges in this field are and what is important to develop next. I will now briefly go through each of these lectures to give you an idea of what we are going to cover during these two days. First of all, there are two lectures about looking at the brain, meaning how to look at the brain from the inside and from the outside; basically, microscopic and macroscopic imaging techniques. These talks will be given by Ignacio Arganda-Carreras, here, and then Jean-Baptiste Poline. And I just wanted to point out that although in the beginning I said that we know very little about the brain, I meant from the dynamic point of view; if we consider the times of Cajal, at the beginning of the 1900s, we've of course come a long way since then. The third lecture is about probing the brain: data analysis and neural coding, and it's going to be given by Jonathan Victor, sitting over there. The goal there is to analyze the obtained signals by statistical means. This is a really huge field as such; there are entire conferences on this topic alone, and now we cover it in an hour and a half, so it's tough, but the idea is to get a broad view of neuroinformatics, so we want to have this lecture there as well. Once we have obtained the images, recorded the signals, analyzed them, and understood something about how that has to be done, there is an additional step to be taken, and that is modeling the brain: putting all the information that we know about our system, whether it is some cell in the brain, a local network, or some brain region, into some framework. At this course we are going to have two talks about modeling the brain. Gaute Einevoll will first talk more about how to make mathematical models of neural systems.
He will also cover multiscale modeling, which is coming in the future. Then Marc-Oliver Gewaltig is going to concentrate more on how you then analyze these mathematical models by means of simulation, computation, and workflows. If you consider the brain, what we typically do in wet-lab work is give a stimulus, no matter whether it is a cell that you poke with an electrode or some larger area that you look at. You give a stimulus, you perturb your system, and then you measure a response. For those of you who come from a more biological background: what we do in mathematical and computational modeling is replace our system with some sort of equations, and we try to develop models of neural systems by applying mathematical modeling and computer simulation. It's important to note that a good theoretical model does not only describe; it can also predict. One can make a guess at what the mechanisms behind certain phenomena are, and then test it with the model. This is what is typically done, but on top of that, good models really predict emerging phenomena that will then be tested later on by wet-lab experimental work, and this is the benefit of modeling. I also wanted to discuss the different types of modeling a little already in this introductory talk, and I consider there to be two ways of classifying models. One can think of the complexity of the model: there are simplified, conceptual models that can often be analyzed even with analytical tools, so you don't need a computer for that, and then there are these hugely detailed models for which you need to run computer simulations in order to know what goes on there.
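To make the perturb-and-measure idea concrete in equations, here is a minimal sketch of one of the simplest such descriptions, a leaky integrate-and-fire neuron. This is a toy example of my own with arbitrary parameter values, not necessarily a model any of the speakers will use: the membrane voltage integrates an injected current, and a spike is emitted whenever a threshold is crossed, mirroring the stimulus-response paradigm of the wet lab.

```python
def simulate_lif(i_stim, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire neuron driven by a stimulus current.

    i_stim: input current (nA) per time step; dt in ms.
    Returns the membrane voltage trace (mV) and the spike times (ms).
    """
    v = v_rest
    trace, spikes = [], []
    for step, i in enumerate(i_stim):
        # Forward-Euler step of: tau * dV/dt = -(V - V_rest) + R_m * I
        v += dt * (-(v - v_rest) + r_m * i) / tau
        if v >= v_thresh:            # threshold crossed -> spike
            spikes.append(step * dt)
            v = v_reset              # reset the membrane after the spike
        trace.append(v)
    return trace, spikes

# "Perturb the system and measure the response": a 2 nA current step
# between 20 ms and 80 ms of a 100 ms simulation (dt = 0.1 ms).
stim = [2.0 if 200 <= step < 800 else 0.0 for step in range(1000)]
voltage, spike_times = simulate_lif(stim)
```

With these (arbitrary) parameters, the steady-state voltage under the step, v_rest + R_m * I = -45 mV, lies above the -50 mV threshold, so the model fires repeatedly during the stimulus and is silent outside it. Changing a single parameter and rerunning is exactly the kind of mechanistic "what if" question that such models make cheap to ask.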
The other way to classify or look at models is by the direction of the workflow: you can think of building models starting from the bottom up or from the top down. I would like to say that all the approaches listed here, at this point when we don't know which approach is right, are equally beneficial: all of them are needed, and the more you combine them, the better, I think, although of course some people favor one approach and others another. What I would like to point out here is that if you are, for example, building mean-field models, where you only look at the behavior of some population of neurons and how these populations then interact, you can study emerging phenomena in larger areas of the brain, but you are not able to say what the role of a specific cell is in the emerging phenomena. On the other hand, if you choose to do very biophysical modeling, taking into account all the ion channels and receptors on the cell membrane, and even describing some intracellular phenomena, then you are able to ask what the role of a specific morphological configuration or a specific ion channel is in your emerging dynamic phenomena. So this is the difference between these approaches, but I would like to repeat that they are equally important, I think, at this point, since we still don't know what the right way to do these things is. How do computational models help? I've listed here the items that I consider relevant. I consider computational models of neural systems to already be some sort of database: even if they had no predictive capability, they integrate experimental data, and it is easier to see the relationships between the different elements of your neural system when you combine things in this kind of mathematical model. They describe neural systems and fit existing data; they make predictions if the model is really good; they also
provide mechanistic understanding of the neural system; they will hopefully in the future provide more ways to study diseases and how they develop; and they provide principles for developing new technology in engineering. Once we have our data, we model the system, and this is typically done, as I pointed out, in individual research groups or projects. Where do we go from here? The next talk in our short course is going to be given by Maryann Martone, sitting here, about databases, ontologies, and, in general, how to share data. I've listed here a couple of issues that are important when considering this topic. These questions start from how to make scientists agree to share data in the first place, and how to create a citation system for experimental and model or simulated data. The list goes further: how to present the metadata of an experiment in an understandable, machine-readable way, and how to combine multiple autonomous database systems into a single federated or virtual database. These are the important questions within this subfield of neuroinformatics, and I don't think they are solved so far, so in this subfield there is a lot of work also for those of you with more of an informatics and computer science background. Where do we use all this data and knowledge that we gain? As I pointed out in the beginning, we want to understand how the brain works, but we also want to solve at least some of the common diseases or disorders that we humans have. I've selected two examples of how we could utilize data about the normal functioning of the brain to help patients in the future. This is more future-driven: it is not yet used in a routine manner in a clinical setting, but there are interesting studies done related to electrical high-frequency deep brain stimulation, where computational models, of a similar type to those that, for
example, Gaute Einevoll and Marc-Oliver Gewaltig are going to present here, have been successfully used for unlearning pathological neural synchronization and synaptic coupling in certain types of Parkinson's disease patients, by a group in Germany; there are already publications on this. The other example is still a little more science fiction, and it has been done with Aplysia so far, a much simpler system than humans, of course, in which it is easier to show these things; there, computational models have been used for the prediction of training or stimulation protocols for the impaired function of molecular networks associated with learning and memory. Perhaps something like this, once we really understand what's going on in the brain, could be done for those who have impaired brain functions, for example related to memory. These two are examples where, without any drugs, you do something to guide the brain in the right direction. We should remember that what helps us does not always have to be a newly developed drug; it may also be something else. Then, as I pointed out, application areas also exist in the engineering fields, in neuromorphic engineering and in developing all sorts of intelligent toys, devices, autonomous cars, robots, neuroprostheses, perhaps in the future even artificial brains similar to ours. Shih-Chii Liu is going to talk about neuromorphic engineering at this short course, so we cover this area as well. After this very brief introduction to neuroinformatics, I still wanted to take up one issue here: what is really required to do neuroinformatics. You are all young, starting your careers in this field, like I was some 20 years ago. I wish I had had all this information available at that time, but I didn't. Over these years I have come to think that it is important to emphasize, to every student going into the neuroscience field, and now I am talking about the big neuroscience field where also
neuroinformatics is a part: all of you should have some basic understanding of neuroscience, but on top of that you should pick up a couple of topics from this list, which is by no means exhaustive but gives a few examples. You should have programming skills, experience with informatics, experience with signal processing or image analysis or statistical data analysis and machine learning methods, and perhaps, for some of you, complex systems theory and analysis. You should pick one, two, or three of these topics in which you become a specialist, in order to really do well in neuroscience. What I see is that most neuroscience programs, at least in small countries, are still not doing this: they are not interdisciplinary and multidisciplinary; they typically teach only neuroscience and maybe some statistical analysis, but that's it. It would be important, during your bachelor's, master's, PhD, and early postdoc time, to select a couple of these topics and master them; it would be really helpful for the rest of your career, I think. With these words, I think we are going to go on, but before I give the floor to Ignacio Arganda-Carreras, I will point out the same thing as David: this is a short course, we are only going to be here for two days, and compared to many longer summer schools it's a short time to get to know each other. So we should understand that, immediately, starting from the first coffee break, you have to start associating and discussing with each other; we should interact a lot. Since every talk is about an hour and 30 minutes, you are allowed to interact and ask questions, also during the breaks and social sessions; I hope we have very good discussions. One additional thing that I wanted to point out: one important aspect this year is the open forum tomorrow, where we discuss the questions that some of you have sent regarding neuroinformatics, and this is
supposed to be a really lively session where we are supposed to discuss, and all these experts should be here. So you still have two days' time to think about some really crucial questions, and I think this is a wonderful chance to get opinions on, for example, what would need to be studied in the future, and other future topics. So I thank you, and welcome, Ignacio.