So we're going to do that, and we've reserved 15 minutes per topic, so we hope to cover four or five topics during the time, maybe more. We've also asked various of the instructors to say a few words about some of the particular topics. I've got a list of the topics, in fact all the questions, on the screen here. This is the first one that we thought we'd start off with, something that Marianne and I can answer. It's a general question about how one finds more information about neuroinformatics. I should say that this course has been organised by the INCF training committee, which has been running it for three or four years now, and we are aware that getting neuroinformatics training is difficult. I was also going to say something about the second question: what do you recommend in order to introduce myself to neuroinformatics? We take that to mean: what background does one need to progress in neuroinformatics?
This has been touched upon in one or two of the talks already, but I think the most important point is this: if there happens to be a specialist course in neuroinformatics and you're a student able to attend it, that's fine, but that's usually not the case, and you have to rely on short courses. There are a few short courses around the world, not many; this is one of them. If you are a physicist or a computer scientist, somebody from the so-called quantitative sciences, what I would recommend is that you somehow get some knowledge of neuroscience. You don't have to start sticking electrodes into brains, but you need to understand what it is to do neuroscience, what the constraints are when you are trying to analyse data that has come from a neuroscience background, because it's not easy. Modellers say, "We just need this sort of data and then we'll be able to model a particular thing," but it's not as simple as that. Data is difficult to get. There are usually very important constraints on why data of the type you would really like is not available, and you only get that experience by visiting neuroscience labs, looking at experiments, or taking part in experiments. Conversely, if you are somebody from a biological background, you need some experience with quantitative methods. That doesn't just mean being able to use a computer; you've got to understand how to take models or data analysis tools and apply them to your data. So basically the challenge with neuroinformatics is that you have to be educated in these two areas together, and that's not usually easy. That's all I will say about this one. Marianne, do you want to say something about the first question there?
I would like to add to the second one as well, maybe to give some more practical advice. What David said was really great, but from a practical point of view I would point out this: if you are a biologist, then, as we typically do in my research group when a molecular biologist or an electrophysiologist enters the group, start immediately taking some maths courses. It sounds a little tedious, but there are often good courses in mathematics departments tuned to scientific computing, so take maths courses that have some scientific computing aspect. Secondly, it's always good to study a little more physics than at high school, because there you get a perspective on what modelling is about and how it helps. If you come from a more quantitative background, like engineering, physics, or computer science, then do what I did 20 years ago, for example: I read through Kandel, which is really tedious, but I went through it in two years or so by myself. In addition, I went to some international courses where you put your hands to work, so I actually did some electrode work, and I think it helps you appreciate those who do the wet-lab part. So that's some practical advice. Are there other short courses around the world? You mentioned there were a few but not many. Let me see; American people may know better than me. Woods Hole and Cold Spring Harbor did have some. Yes, I would say that most of the courses that exist are longer.
There are two- to three-week courses where you have to propose a project, and it's very competitive to get into those, so if you are just beginning in the field you would first need to go to a course like this INCF short course. Then there are more local courses, like the one I have been running for the past two or three years, the Baltic-Nordic summer school in neuroinformatics and computer science. We have deliberately kept it at a relatively introductory level so that it can be attended by both biologists and computer scientists. I should add that there are two long courses in computational neuroscience. One, I think it's called the European course, is four weeks long, very competitive, and is run in various centres in Europe. The other, of a similar nature, is run in Okinawa, in Japan, where there is a big biological and physical sciences research campus; they run another course in computational neuroscience, also four weeks I think. Both of these are very competitive courses to get into. Barry wants to add something to that. At Woods Hole in the US there were two courses. One was really methods in neuroinformatics; that course disappeared about two years ago. Jonathan and I both used to be faculty for that. But the computational neuroscience course at Woods Hole, which I think was the first one ever and started in 1982 or something like that, is still going. It takes 30 students a year, it's four weeks, and the competition to get into it is unbelievably fierce; most of the people who get in already have some experience. The other thing I should add is that in particular areas of neuroinformatics, INCF part-funds courses of roughly a week's length; we have been funding one or two of these a year on particular topics in neuroinformatics.
I must say I don't actually know of many, if any, introductory neuroinformatics courses like this particular one. I just want to add, regarding the European course and the Okinawa course: while there is competition, it shouldn't discourage any of you from trying. At least to me it came across as "okay, don't even bother, it's so competitive". No: try. In recent years I have managed to get my students in without problems. I also participated a few years back as faculty, at least in the European course, and what I've seen is that while there are many applicants, many don't really take much trouble in designing their project. I guess if you really spend a few days thinking about the project you want to do, then you have a very good chance of getting into any of these courses, and it's really worthwhile; you learn a lot. Okay, that deals with the second part of the question. Shall I say something about the first part? Yes. We are aware that it would be possible, or maybe more feasible, to first make some introductory material available as a web-based course, and then, only after everybody has studied that material, bring you together to meet each other for one or two days like this. Within the training committee at INCF we've been discussing this possibility, and depending on what INCF decides, next year I guess, it may be that we go ahead with this, but unfortunately it's not available now.
But alongside the summer schools that INCF promotes and organises, these specialised courses that David talked about, we are also producing material, and sooner or later this material will, I think, be opened up by INCF; at least I'm preparing material from two years to be released through INCF, so that might be an additional resource to look at. Of course, it's never a planned course as such: it's various lectures from here and there, maybe not a full programme to follow, but hopefully in the future we will also be able to provide web-based courses that give everybody a chance to start in this field. Yes, the INCF portal is being built up as a resource for people, as you say, and the material from these courses will be available, but not in a nicely integrated fashion. In fact, one of the conditions of our funding is that the material is made available, but you just get the material, not beautifully arranged; it will probably be a set of lectures, so you have to work at it. It's not a beautifully tailored book course, which obviously would be the ideal. Any other questions or comments on this? Maybe you should also mention that, for example, on Coursera there is already one course in computational neuroscience, but it is relatively limited: it covers one aspect of computational neuroscience, with a very limited number of lectures on biophysical modelling, so it's more at the level of systems and cognitive modelling. But if anybody is interested in that, it might be a useful course to attend. Is that Cosyne, C-O-S-Y-N-E, the annual computational neuroscience course that's run sometimes in the States, sometimes in Europe? Is that right? Yes. Okay, shall we move on to the next topic? Is it time? Let's see. Okay, so this is about data issues.
There were quite a few questions about data; some of them overlapped, and some were answered by Marianne in her talk. Perhaps we could start with the second question, which we didn't really address: why don't people know about sharing data? Is there anything Marianne or Brian wants to say about this? Well, I can also ask if there are questions about this. The second statement there, that in my experience these are not widely known or used, is correct: when you do an analysis of the community repositories that have been established for sharing data, they are by and large very underpopulated. Whether that's because people don't know, or people do know but don't find them worth entering, is hard to say. When we did an analysis, and we recently submitted a paper to Nature Neuroscience about this, I don't think it's fair to say that neuroscientists don't share data. They do, and they do it either under controlled circumstances, where somebody asks them for the data and they enter into a relationship with them, or by putting a lot into supplemental material. There's a lot of information available in supplementary material; again, it's not all that well formatted or all that useful, but I think the intent was to make it available. In most data-sharing initiatives we find there are three categories, as we like to say. There are those who really go above and beyond to share their data: a lot of people now are proponents of open science, and they will deposit their code into open code repositories and their data into data repositories. There are journals like the Journal of Comparative Neurology, which has now set up an imaging database to go with JCN; there's the GigaScience journal; there are other journals that have data repositories; Nature data and others are recommending various repositories to put your data in. So they will go the extra mile to make sure that they deposit their data in a place where
it will be openly accessible. There's a population that thinks it's stupid, a waste of time and money, and will not engage, so we tend not to bother with that population. And there's a large population who would like to share but don't really know how or what. A lot of people are getting into the space, but in my view it won't really change until there are better tools in the laboratory for appropriately managing data; you have to spend a lot of time and effort in most cases if you want to make data available in a form that's useful. In most of the discussions I've been engaged in, it always comes down to a matter of incentives: if you have X amount of time and X amount of resources available, where are you going to put them? Into getting your data whipped into shape and making it accessible, or someplace else? Without a good reward system for data, and the ability to track data, a lot of these discussions just go into endless loops. As for the data standards and things that have been put together: yes, it is true that it is not easy to share data at a deep level, though it's trivial to take a file and stick it up on an FTP server, so it really is a matter of how much time and effort you want to put into it. Generally, the repositories I have seen that have been most successful are the ones with very active outreach: they employ somebody to go and contact people, especially after they've published, asking for the data. How many people know about NeuroMorpho, the neuron reconstruction database? One of the best statements I heard was from a researcher who said, "I finally just gave it to them so they'd stop bothering me," because they were relentless in going after and requesting data. But they didn't put a lot of burden on the researchers themselves. They said: give it to us, give us some experimental data, and
we'll go back and forth with you; we'll take care of the curation. Also, the morphology is a relatively simple data type, a tree structure for a neuron; they don't have the imaging data and other things. So I think it's very dependent on the type of data, and very dependent on the individual researcher. The good news is that if you want to share your data, there is somebody who will take it. There are even general repositories now, like Figshare and others; a lot of people put their data there, it gets a DOI, and if you type "neuroscience" in, thousands of things come up. There are general repositories like Dryad, and there are more specific repositories for neuroscience data types; I think NIF lists about 400 of these, and there are other places around the world that list six or seven hundred community data repositories that will take your data. So hopefully the efforts will get more coordinated, and we'll start to learn and teach people better data-management practices, but right now it's hard; we don't have an incentive system. There are some databases, like NDAR, the National Database for Autism Research in the United States, which is run by NIH, where you will not get funding if you do not put your data in; it's a very simple model, but funders have been reluctant to put those sorts of mandates in place. Also, the institutional repositories and the libraries are getting very much into data management and the digital enterprise, as Filburn calls it. A lot of libraries are having to reinvent themselves anyway; as we talked about, they are not the epicentres of the campus the way they used to be, but they realise that their own constituents are producing digital works and there needs to be a place to manage them. So a lot of the research libraries, in the United States at least, are positioning themselves to help scientists manage their data and to put it into a place and into a
structure where it's accessible. Now, I come from a community where we do not just collect a lot of data but have to design experiments where we're manipulating behaviour, and one of the difficulties in sharing data is having inadequate languages, or means, for describing the behavioural experiments well. They get described in the papers, and in half of them I would defy you to try to reproduce what had been done based only on what was written in the methods section. This is not from a lack of intent on the part of the scientists; it's just that we do a lot of things you often don't even think about: how you get your monkey, in our case, to work; what time your undergraduates got up this morning before they came to the session, which, it turns out, has frequently made a difference in the kinds of results people get in their imaging experiments. Those things are not included, and then you share your data, someone puts a lot of time into working on it, and says, "I can't even reproduce what you claimed you got, much less go on and do what I wanted to do with it." The other problem that occurs with behavioural data is that the turnaround between doing the experiment and publishing is quite slow. There's a lot of exploratory work; you have frequently presented an abstract or two at meetings, and people ask you for data, and then you get into the uncomfortable position that you have a postdoctoral fellow or student, for those of you who have students, who did all this work and is trying to write it up, and someone else wants the data, and you've got to see that the student or the postdoc gets credit. We've got to find a way, within the system, to share credit more easily. Right now, the inability to get credit for having shared something drives people into being less open than I think most people would like to be. Now, for me it makes no difference; I'm at a stage of my career where if you come in and want to use some data, even
if I haven't published it, then if I can put my hands on it (and just keeping track of data in my own lab is a problem at this point), if I can put my hands on it, I'll give it to you, provided no postdoc was involved, or the postdoc has said, "Look, I've gone, I've got my own job, I don't want to do anything with it; I'm happy to give it to you, and you can just put in an acknowledgement saying you got the data from my lab; in fact, I don't want to be involved, because I've got enough things I'm involved in." But I don't think we can underestimate, at this point in time, how important credit is, because that's how you make your career, and it's going to be difficult for all you young people: you have to develop a way to protect your own interests without being an unpleasant, uncaring, unsharing person, and I have no easy way to tell you how to do that. There will be as many adaptations to that problem over the next few years as there are people in this room. I don't want to scare you about it, but I don't want to minimise it either. Do you want to make a short comment?
So again it does come down to credit and attribution, which is why NIH and others are working so hard on a data citation format; whether the community accepts that as equal to writing articles is up to the community, since we always judge relative contribution. So it always does come back to credit, and I think that's evolving on many different community fronts and will be in place soon, which is why you should get your ORCID ID as soon as possible, so that you can start attaching things to it. NIH and NSF are starting to allow you to list products like data sets and other things in your CV; that's what's coming, especially at NSF, and NIH is going to follow suit, so you will be able to list the other things you do besides writing papers as part of your intellectual contribution. Most repositories have embargo periods where you are allowed to keep data private for however long you wish; some places will let that go on indefinitely, while most require that it's released after so many years. So I think there are norms and other things starting to come into place that will govern the rules of data, and we have to develop those as a community. When do you expose a mistake, when do you not, do you do it collegially? All of those norms have to be developed, but I think we as the scientific community can start to develop them. So again, credit and attribution seem to be what is driving most of this, because why is it worth your time if you're not going to get credit? Just one interesting thing apropos of what Barry said: there was a big study of data-sharing practices across different communities, I don't think neuroscience was one of them, and unlike what you would expect, older scientists were much more willing to share their data than younger scientists, for exactly this reason: older scientists were perceived as being protected somewhat from any
potential negative consequences. So even though your generation grew up sharing everything, in science and academia the tendency is still to hold back. But I also want to comment on this question of credit, because when do you need this credit? When do you need your CV? Typically when you apply for a job or apply for a grant, and I'm sometimes on the committees that decide these things. When I look at the applicants, it really depends on who is on the panel and what they're looking for. If I've seen applications from people I know have contributed code, who have a sharing personality, I rank them up; I don't only look at the high-profile publications. I come from physics, and I think the biologists and the medical people are a little bit crazy about all these impact factors. I think it's a little senseless, very unscientific, the way they think about it, because whether you get into a given journal has a lot to do with what group you are in, what is considered fashionable, all these sociological effects. So I think if you have more rational people on these panels, and I think they are coming, this will change. I have the idea that good things come to people who share, and I think it's actually true. Many of the people I know in computational neuroscience, including one very well-known one, have always been sharing, and that is a way to promote their work: people are using it. So we shouldn't be so worried; I prefer not to worry too much about these things, just try to share. I have never been scooped; I'm more worried about people not caring about what I do, actually. There is this famous quote, I'm not sure who it's from, but it basically says: don't
be afraid that somebody will steal your idea; it's difficult enough to get it across. From the discussion I realise that there are a lot of tools and facilities, and some researchers are ready to share as well, but there is a gap. I think the main role is to be played by the journals, because we want to publish, so we follow their rules: if they say "this is the format I need", even if I don't like it, I agree to it. So if the journals set rules, saying that if you are publishing a paper you need to submit the data, in this format, and NIF or someone collaborates with the journals, then, since I'm publishing my paper anyway, I want to publish or I don't get rated, it's my job, I will comply; whereas if it's not mandatory for me to share the data, maybe I'm a bit lazy or reluctant, and nothing will happen, so I don't tend to share, even though I do want to get published. So if the journals play a major role, collaborating with NIF or something, will that not create a change? The journals do have rules like this, but they're not enforced. Really eminent journals have specific rules about making your data available, making your paper available, and so on, but in terms of sharing these rules are not enforced, so it's the same problem. Weak mandates have been shown over and over again not to work; many journals have had them for a while, and what I mean by weak is that there is no enforcement, because there's no way to track, no way to determine whether somebody has complied, so it's very difficult to get people to follow up. In communities where it's expected, there's a lot of peer pressure to make sure your data is available, like your protein structure being in the PDB, and those sorts of things. Generally it's been successful where there's a strong advocate and where there's a mechanism to track compliance. Now, there are some trends that are happening now.
So recently, in case you didn't follow it, PLOS, including PLOS Computational Biology, required authors to say that they are going to share their data, and there was a lot of hostility that came out of that; if you google "PLOS fail" you'll see some very interesting discussions about what people think of being made to share their data. But there are data papers and data journals coming out where, again, as part of the publication process, you are expected to make your data available. They are also accepting data papers, which are just descriptions of a data set: no analysis necessarily, no conclusions, just a deep characterisation of the data, and the data are expected to be put into an approved repository. So it's a good idea, and this is one of the reasons we're working so hard on a system for tracking data, because once you have that, when someone says "I published my paper", they have to provide a number that says "I have in fact put my data in here", and it is instantaneous for someone to check whether they have done so. But let me tell you one very funny anecdote. There was a data-sharing project run by the US National Science Foundation where, as a condition of funding, you had to put your data in a repository, and one of the women who ran the repository told me of one researcher who put his data in, in compliance, and then promptly took it right back out again and wouldn't leave it there, and NSF didn't do a thing about it. So it's a good idea, but these things still need an incentive system, I think, more than a stick. Then there's the approach of building a relationship: you go to somebody and you ask them, "Please, I'm interested in your data; I wonder whether you would share it with me." They might say, "Well, why?" Maybe there's some benefit for them as well as for you; you're not just a parasite, there's some reason why you want to do this, and then maybe you can develop your
relationship, and in my view that is the best way forward. I had a great relationship with a guy from Northwestern; he gave me his data immediately, and in fact the paper I published in the Journal of Neuroscience recently actually contradicts his findings, his own findings, in some ways, but he's really happy for me to do that. So that's a great test of somebody who is willing to enable work through a relationship you develop with him; that was successful. OK, that's the data-sharing issue, and it's time to move to the third set of questions. This is a slightly different set of questions, which has to do with the dynamics of circuitry, dynamical aspects in various ways. We've talked about structural matters and the physiological relationships of structure, but now the question is about dynamics, and I put my name up there because I wanted to mention one particular thing. Should we leave a minute for everybody to read it through? One minute, and you read it through. Good idea. OK. Should I use this microphone or the other microphone? I'm okay. Right. I just wanted to mention one particular point to do with the dynamics of neural circuitry, and one thing we haven't talked about in great detail here: the clinical aspects of neuroinformatics. There's one particular area which I know, certainly in the UK, is very popular among experimental neuroscientists and also among the modelling community, and that is trying to understand the source of Parkinsonism, which involves oscillatory activity within the structures of the brain that form the basal ganglia. There's a whole lot of scope there for understanding, firstly, the actual circuitry and the responses, but also for developing models of why this particular set of structures suddenly goes off into a Parkinsonian state. I was reading a strategy document from the Medical Research Council in the UK, and they had various
topics they were interested in, and one of them was the dynamics of neural circuitry. I couldn't work out why that was there, and of course I realise now it's because that is one particular condition people are very interested in understanding these days; Parkinsonism, I think, is becoming more prevalent in the community at large. So this is just one aspect where modelling techniques and experimental techniques can come together, and one can approach it through both. Gaute, do you want to say something about the dynamics? I was rather going to say something about point two, because I think that's easier, if that's fine. OK. So it's about these measurement techniques, and of course it's really true that the kind of measurements you can do determines what kind of science you can do; measurement often leads theory in that sense. So it's very important to get the best measurements you can of neural activity. When you talk about measuring activity inside the cortex, inside the brain, activity in circuits on the millimetre scale, there are really just two principles that nature has to offer, in terms of the basic laws of physics. One is measuring electrical potentials, and electrical potentials are measured with electrodes. One limit is that the spatial resolution is essentially set by the size of the electrode contact, the metal part of the electrode, which can maybe get down to one micrometre. A problem, though, is that you have to have a way to get the electrode contact down: you need the shaft, and so if you want to measure in many places at the same time you just destroy the tissue; you're going through it just
like a very, very tight needle pad. So that's one limitation on how much you can really record. Also, you need to get the signals out, which typically also requires leads, so that is one fundamental limitation. The other way to do it is by optics, and then you obviously don't need electrodes, and you don't need leads, because you're shining light from the outside and waiting for the light coming back out of the neural tissue. The problem there is that when light goes through neural tissue it scatters, so you lose focus. But you can do a trick with two light sources, the two-photon technique, which gives much higher resolution: you have essentially two light beams crossing, and it's only where they cross that you get excitation of the neural tissue, so you can probe a very small volume, maybe down to one micrometre. A problem there is that you need a special kind of dye molecule inside the neuron, one that responds differently depending on activity. At the moment we have these markers for calcium, so you can measure the calcium concentration, which is cool, but the limitation is that calcium is the slow part of neural activation: it lasts for hundreds of milliseconds, so you don't really have the temporal resolution you would like. People are looking for other dye molecules, so that instead of calcium you could measure, say, sodium; that would be better, in principle at least, because then you could get better time resolution. So there are these fundamental, I would say physics, limitations, but there are also new developments, because
one way to increase the resolution is not only to measure from a smaller volume but also to activate a smaller volume, and that they can do with optogenetics: there are these particular ion channels that you can insert, which you can then activate by light. So all kinds of clever trickery is being developed to increase the spatial resolution, but I don't really see what else there is in terms of natural forces to take advantage of. So there are these limits, but people are also clever and come up with really clever things; whether there will be a breakthrough technology I don't really know. Just to follow up on what Gaute is saying about technology: the American brain project is supposed to be emphasising technology, and one of the technologies they want to emphasise is multiple electrodes, better technologies, nanotechnologies. The other is the molecular techniques, and I want to broaden that, because while the optical, so-called optogenetic, techniques are very attractive because of the speed with which they work, in big-brained animals the amount of tissue you can manipulate with them is small. But there are some techniques coming along where, if you can control which tissue has the material in it, you can activate it by chemical means, which means you don't need to get the activating material precisely located. I think in the long run these are going to be a wonderful toolbox of techniques. They are very hard to work with; having spent six years and an unseemly amount of money on some of this, I can tell you they are unbelievably difficult to work with, and they are moving along slowly. But the young people in this room are in the generation who are going to be able to take advantage of these techniques, so either get
involved with people trying to apply them, with the understanding that you had better have another project to do as well, because they are going so slowly that even you will get older while they move along, or at least keep a close eye on them so that you will be in a position to move over and start taking advantage of the tools as they develop. Let me say another thing, and that is the dynamics. We all pay lip service to the dynamics: the brain is a dynamical, time-domain system, and we behave through time and space. Yet up till now, really doing dynamics in the brain, understanding how signals evolve over time, has not been emphasised, and it is another growth area in neuroscience and in neuroinformatics right now, because the amount of data it is going to take to deal with the dynamics is going to be very large. This may get there before the molecular techniques, because you can certainly put sets of electrodes in a couple of places in the brain where you know there are connections and try to analyse the relation between the signals over time. Before we give the students time to ask questions, I want to point out one more fact. I don't see any fancy new techniques coming really soon; what I see instead is the combination of electrical and optical techniques. What I have realised during the last year is that with electrical techniques, with electrodes, you impose certain swelling effects on your system; we all know that, but it now seems, for example when studying synaptic plasticity, that the results may change depending on which technique you have been using, and it will be crucial to combine techniques, to look at how things are with optical techniques as well, not only with electrodes. I don't want to go into details, but I just wanted to mention this. Then
the next question: I know biological systems from a biology and bioinformatics point of view; could I say that they are similar to neuroscience systems, or is there a very big difference between these two subjects? I am trying to make a comparison between biology and neuroscience: I know how to work with biological systems, but I don't know how homologous biology systems and neuroscience systems are. Well, you might say neuroscience is part of biology; biology is a big subject. What sort of biological system were you thinking of? Have you been working on biological systems? I know that they study nodes and edges, and they make models, for example of gene regulatory networks, that kind of thing. I have seen nodes and edges here as well, but really as relations between neurons. Right, okay. That is a very good question, because most of what you might call traditional neuroinformatics, if you can say that, starts from the nerve cell as the important element, together with the synapses, the connections between the cells. But as you say, below that are all the components, from genes up to proteins: genes making proteins, proteins making cells, and generally speaking most models and most analysis in neuroinformatics sit at around the cellular level and upwards, as it were. However, this is something that is going to change. The problem is that at the computational neuroscience level, which we have talked about here, one can write down equations for compartmental models of the neuron and how neurons interact with each other, and one can understand this quite well. The question then is how to write down equations for all the proteins that come together to make the
cells, which make the synapses, and so on. This is something you might say is more complex: one doesn't know how these systems are controlled, how the genes control the proteins. In the end one is going to have to make very strong assumptions about what the important elements are, because one can't write down one great big equation for everything. So I suppose the hope is that as more information about particular cellular pathways becomes available, one will be able to incorporate sub-cellular components into the structure itself. It is a different level, but it is something that has to be taken care of. I can also comment on that; I briefly mentioned it in my lecture. There are now all these GWAS studies, genome-wide association studies, that look at the genes of groups of people and see what is statistically different between the genes of one group compared with another group. Of course the hope was that the features or diseases you are interested in would be tied to just a very few genes, one or two maybe, but it turns out that typically they are not: they are tied to many genes, so the traits are very polygenic. As I mentioned yesterday, they have now identified a hundred genes really correlated with whether you have schizophrenia or not, so it is just impossible to start making mouse models which turn each of these genes on and off, to try to do it the experimental way. That is why we are involved in a collaboration with one person in a schizophrenia consortium, where we have started to model the effects of the hits they see, because in a model we can study everything at the same time. So I think one of the directions that neuroinformatics, and
particularly computational neuroscience, is going is into getting more molecular, because then it gets more information about what is important from these gene studies as well. I also think that once all these big gene studies have been done, the people who work on the genes will say that this is not going to work, we have to do something else, and they will have to turn to neuroinformatics to start making sense of the data they get. Regarding what you said earlier about electrode tips of somewhere between five and fifteen microns: how much damage can we really do to the cortex? I would assume it would just go through easily. Yes, so, I think, well, they have these special techniques for injecting the electrodes, where they do it in a particular kind of jerky way, and I have the feeling that they often do less damage than you might think. But there are also long-term issues. I know that when the electrodes are in for a long time, which you would want for patients, a dead layer, these biofilms, starts forming around the electrodes. But there are also other ways to build electrodes: they need to be stiff to get into the brain, but once they are in the brain they no longer need to be. I know that people are making electrodes coated with sugar or something similar, so that when the electrode is down in the brain the sugar melts away and the electrode relaxes into something like little springs. So there are all these different techniques people are working on for better electrodes, also in terms of long-term use and less damage to the brain. Yes, and I still wanted to mention that it is not only the damage that it
does to the neurons themselves, but also the damage it does to the other neural components, which tends to change the way neurons behave. But that is a bigger topic, and now we move on to the next question. It actually relates to the question about biological systems that we just had. I'll read it out from here, because I can't read it without my glasses. The first question is: recent developments in biologically inspired neural networks seem to capture some of the cognitive capabilities of the brain despite ignoring a substantial amount of biological detail. How should we look at the possibility that these algorithms already replicate many of the mechanisms relevant to cognition, and that we might be overemphasising the importance of cellular and molecular detail? That is one question. The other question is: there are a variety of tools available for carrying out detailed simulations at the neural level. How much exactly is left out, not only in terms of brain structure but also in terms of detail? Could these missing parts play a role in cognitive processes, and what are the plans, if any, to include them in future simulations? I take those as questions expressing two opposite views: one is that if you want to understand, say, how we function, you do not need to look at the detail; the other is that you do need to look at the detail. This relates to the biological question, because going down a bit further, how much do you need to know about the molecular level to understand nerve cell activity, and so on? My answer is that one needs to have information at every level, and then the question is whether it turns out that certain aspects at a particular lower level are not necessary for understanding a higher-level function. You can only find that out by developing models at different
levels and finding that some details are not necessary. So it is a question of how much detail you need for understanding, and I think people have already answered that by saying it depends on the question you are asking and trying to answer. That is my short introduction to it. Marja-Leena, do you want to say something about this? Yes, maybe I will mention that there are different approaches already existing in the world. There is the approach of Professor Chris Eliasmith, who is developing the Nengo simulator to model cognitive processes, like what is emphasised here, and it is true that they are able to reproduce some cognitive phenomena and some behaviour very nicely. But in the end I would ask this: I don't think they have shown anything related to plasticity, to how you learn; and if you think about humans, and even rats and mice, about how they can survive in very unexpected circumstances, then you can always ask whether this is enough. It is really nice and an important step, but since we do not know what produces this very plastic behaviour that lets us manage in this world, I think we still have to look at all the details as well. In the end it may be, as David pointed out, that at each level of detail, molecular, cellular, network and so on, we just pick out those details that are crucial for the question we are asking. But certainly when we are trying to understand diseases, I think they, or at least certain aspects of them, do not arise at the dynamic level; the starting point is somewhere molecular or cellular, I am sure, and for that reason we at least have to look at the details there. There was, there is, a subject called artificial neural
networks, which was around from the mid-1980s onwards. If you don't know about it: people developed very stylised models of the brain, usually in simulation, in terms of little processing units having connections to other processing units, working in parallel, and there were a lot of claims about how these networks replicated cognitive functions; they were artificial neural networks, they had the capabilities of the brain, they were wonderful things. And of course this was not quite right. I mention it because deep neural networks are really just modern versions of some of these networks. So there was a case where it was thought that one could understand cognition with abstract models, but I think the view now is that it was quite a bit of hype, basically to get lots of money. It was a topic that went around for twenty years or so and has evolved in another direction, machine learning, but it was there. And maybe I still want to add one example of why I myself think we need to look at the details. When I look at the models for synaptic plasticity, trying to reproduce at least some sort of short-term plasticity in neural networks, there are nice equations, just one or two, that you can plug into your system. But when you really start looking at it in detail, there are many regulatory and modulatory systems, many mechanisms, especially as you go to longer timescales, that regulate the system, and you start wondering whether they should be taken into account somehow; at the moment they are not taken into account at all. They go down to a really deep molecular level, and involve not only interactions between neurons but other elements in the brain. So it is clear to me that you also have to look at the details, but of course some of us will continue doing the
more system- and cognitive-level kind of work, maybe for engineering purposes, like the presentation we had today by one of the [inaudible]. Maybe you first, and then Marc-Oliver. Yes, I know a bit about the deep neural network field. In many cases deep networks are used to solve one particular problem, but if you are trying to build a machine that has the capability of doing many different tasks, it is not yet clear how the deep networks should be combined so that you can actually create an organism that can interact with the world. Usually what you have is something sitting inside a box, and you ask it to classify images or classify audio, but that is not the same thing as asking: now that I have classified it, how do I use it to understand my world so that I can act appropriately on it? There is still a lot to be understood. Even the Nengo framework that you mentioned, the Spaun network, has no learning [remainder of remark unintelligible in the recording]. Maybe I'll say it anyway. David nicely said that there was this field of neural networks, and I think now we can just subsume it under machine learning; deep learning is, I think, an extension of the Boltzmann machines which were around in the 80s. These are basically distillations of neural principles, but there is no way backwards: if you do something with them, you can't really go back to a neural correlate. And, without wanting to do an injustice to this work, you can take a machine learning network and a compiler that turns it into a spiking network.
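For readers who never met the 1980s-style networks just described, a toy version is easy to write down: small processing units, each computing a weighted sum of its inputs passed through a nonlinearity, wired into layers that are evaluated in parallel. Everything below (sizes, weights, input) is random or invented, purely for illustration.

```python
import numpy as np

# A toy 1980s-style artificial neural network: layers of simple
# processing units connected by weighted links. All weights here are
# random and purely illustrative.
rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer of units: weighted sum of inputs through a nonlinearity."""
    return np.tanh(w @ x + b)

w1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # 3 inputs -> 4 hidden units
w2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # 4 hidden -> 2 output units

x = np.array([0.5, -1.0, 0.25])   # an arbitrary input pattern
hidden = layer(x, w1, b1)         # all hidden units update "in parallel"
output = layer(hidden, w2, b2)
print("output activities:", output)
```

A modern deep network is, as the speakers note, essentially this picture with many more layers, far more units, and gradient-based training of the weights.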
[A long stretch here is unintelligible in the recording.] The best example in my view is reinforcement learning, which I think many believe underlies any type of action learning in the end. But if you look at how fast we can learn anything where a sufficient amount of reward is involved, be it money, be it sex, be it food, and compare that to any network, there are orders of magnitude of difference between any artificial agent that has to learn a non-trivial problem and any human who has to learn a non-trivial problem. It is really several orders of magnitude more repetitions that any agent will need in order to solve even the simplest thing that we learn in one or two trials. Well, since you started talking about these deep networks: they might not be like the brain, but it reminds me that one of my previous students actually started a company in Silicon Valley where he wants to use deep networks for investigating big data and so on.
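The sample-inefficiency point above can be made concrete with a toy experiment, invented here for illustration and not taken from the talk: a standard epsilon-greedy agent on a ten-armed bandit spends thousands of pulls building up value estimates for a task a person told the rule would solve in a trial or two.

```python
import random

# Toy illustration of reinforcement-learning sample inefficiency:
# an epsilon-greedy agent on a 10-armed bandit (all numbers arbitrary).
random.seed(1)
true_means = [random.random() for _ in range(10)]   # hidden arm payoffs
best_arm = max(range(10), key=lambda a: true_means[a])

estimates = [0.0] * 10   # running estimate of each arm's mean reward
counts = [0] * 10        # how often each arm has been pulled
epsilon = 0.1            # exploration probability
for trial in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(10)                          # explore
    else:
        arm = max(range(10), key=lambda a: estimates[a])    # exploit
    reward = true_means[arm] + random.gauss(0.0, 0.3)       # noisy payoff
    counts[arm] += 1
    # incremental update of the running mean for the pulled arm
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(f"best arm {best_arm} pulled {counts[best_arm]} of 2000 times")
```

Even this trivial problem consumes thousands of noisy trials, which is the gap the speaker is pointing at.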
So even though they may not be useful for understanding the brain, they are useful for something. The company is actually called Nervana, so a Nirvana with an E instead of an I. But anyway, since we are discussing companies: every year there is this computational neuroscience meeting, computational neuroscience being a part of neuroinformatics, and they have a career night for young researchers. A couple of years ago there were people there from Brain Corporation, a company in San Diego which is a subsidiary of Qualcomm, the huge chip company. They are hiring neuroinformaticians to try to implement some of these neural codes in chips. The basic idea is that they want to build the internet of things: they want chips attached to all kinds of devices so that the devices can communicate, so you can have smart houses and smart everything, and a key requirement there is really low power. I am not quite sure how successful this will be, but at least they are investing a lot of money into hiring people to explore it. So there are also job opportunities outside academia. You talked about the deep neural networks, but what about the other one, the second case, hierarchical temporal memory, which as far as I know is slightly different, because it does add some biological detail? Well, I wouldn't say biological detail, but maybe it is more similar. Again, that is why I use the word machine learning: these are all machine learning techniques, and they use some biological principles. But it is like a model of flying: a bird and an airplane both use some way of creating an upward force, but that is about as far as the similarity goes.
The problem is that it is a one-way abstraction away from biology. These things have been used to learn about information processing from the brain, but it doesn't work the other way around: you cannot use the tool to learn anything about information processing in the brain. You could say the same thing about spiking neural networks in a sense; they are just one step closer. No, the question is whether you still have something left that you can measure. Any model depends on the observables you have: if your system no longer has any observables that exist in both systems, then you cannot compare them any more; if you do find such observables, then of course at that level you can compare them. You do have them in hierarchical temporal memory, because there is something there: the number of synapses; it replicates the cortical columns. There is something called potential synapses, and then it decides... Can you measure potential synapses in the brain? The architecture has something called potential synapses, which dynamically decides, basically, how the cortical columns are connected to each other; it then cuts or adds connections, and you can do statistical analysis on this to see how much it decides to connect to other cortical columns depending on the stimulus you are presenting. So there is something you can get out of that which might tell you something about the brain, maybe less than spiking neural networks, but then again it is just a spectrum. If I were handed a hierarchical temporal memory, and I don't actually know exactly what that is, presumably what I would do is test it against the cognitive functions it is supposed to replicate, and then look at its elements and ask which elements of the model correspond to which elements in the brain. How are the computations done? You mentioned reinforcement learning; it is a similar question there.
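Since "potential synapses" came up, here is a loose toy sketch of the idea as described above, and explicitly not Numenta's actual algorithm: each potential synapse carries a permanence value that is nudged up when its two units are co-active and down otherwise, and it only counts as connected once the permanence crosses a threshold, so the effective connectivity is decided dynamically by the statistics of the stimulus. All numbers are invented.

```python
# Toy sketch (NOT Numenta's actual algorithm) of the "potential
# synapse" idea: permanence values grow under co-activation, decay
# otherwise, and a synapse counts as connected above a threshold.
n_units = 8
threshold = 0.5
permanence = [[0.3] * n_units for _ in range(n_units)]

def step(active):
    """Update all permanences given the set of currently active units."""
    for i in range(n_units):
        for j in range(n_units):
            if i == j:
                continue
            if i in active and j in active:
                permanence[i][j] = min(1.0, permanence[i][j] + 0.05)
            else:
                permanence[i][j] = max(0.0, permanence[i][j] - 0.01)

# Units 0-3 are repeatedly co-activated by the stimulus; 4-7 are not.
for _ in range(20):
    step(active={0, 1, 2, 3})

connected = sum(
    1
    for i in range(n_units)
    for j in range(n_units)
    if i != j and permanence[i][j] >= threshold
)
print(f"connected synapses after training: {connected}")
```

The statistical analysis mentioned in the discussion would then be run on exactly this kind of connected-synapse count as a function of the stimulus.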
In these models they have to do, in some cases, quite complicated computations, so how is a synapse, or another cell, going to do that computation? I would be looking at those sorts of aspects: what phenomena it accounts for, and what the implementation is. Of course it might do quite well, but then the question is whether that is how the brain does it; even if it does well on the first test, the implementation may fail the second, because the correspondence isn't there: it replicates what the brain does, but it doesn't do it the way the brain does. I was about to say the same thing. You can compare them at a performance level, but since the one is not a mechanistic model of the other, you can't compare the components any more; the relation to the original substrate is lost, and that is the problem. It could give you ideas, though. There are two parts to this. One is that I agree you don't learn much about how the brain implements something by studying an artificial neural network; you have no idea whether it is implementing the same solution. Probably the best example of that is the computers that beat the best chess players in the world. I don't find it particularly interesting that they do that, because it is obvious they are doing it through a comprehensive search mechanism that is nothing like the strategy a chess master uses, not even close. We don't understand what the chess masters do, but we do know they prune the search tree in a way that these computer algorithms don't even come close to doing, because they look at the board and know there are certain things they shouldn't chase, and they don't. The other side of the coin is to look for things that are principled, and an example where I think you learn something about principles is to look for different species of animals that solve what appears to be the same problem in different ways.
One of the comparisons that was always neat for me to think about is the electric fish, which use their electric field as a sensory device. These are fish that live in muddy rivers, and they exist in both the Old World and the New World. Interestingly enough, one of them, and I can never keep track of which is which, solves the problem in the frequency domain: they have sensors on only the head and the tail, and if you record from the neurons, they sense disturbances in the phase relationship of the electric field when it is disturbed; the neurons are sensitive to phase relations. The other class does it in the time domain: those animals have a row of sensors running the whole length of the body at quite closely matched spacings, and they measure the time differences between the electric field disturbances at the different sensors. It is interesting because they have solved exactly the same problem using two different physical principles that we understand, and you can see the reflection of it in the neuronal system. So if you can be really clever about the systems you choose to study, you can learn a lot of really nice things about the nervous system. Are there any more questions or comments on this topic? A lot of the last comments were about whether we can learn something about the brain from the artificial neural networks used in deep learning. My question goes a little bit in the other direction: can artificial neural networks benefit from some of the biological details of neurons, if you were to integrate those details into artificial neural networks? I showed an example where we had a new computational primitive, the winner-take-all, and that would never have come out if we didn't have the anatomy and connectivity of the different types of neurons in the cortex. So are you asking whether these computational primitives that are used are ones created because of information from the nervous system?
Maybe Marc was ready to answer. I think the first thing is to realise that deep learning is nothing but an abstraction that was made from the brain in the 1940s, so it has already benefited; that is the first answer. The other point is that since then we have learned a lot about how connections work and how plasticity works, and that hasn't really found its way into the machine learning community. Some of it has: reinforcement learning is a good example, which finds its way into robotics as a way to learn by imitation and by example, but many things are not currently understood, and it takes a lot of time. Deep learning, at least the Google paper as I read it, is basically applying the computational power of Google to an algorithm that was already there. What is difficult is that these are unsupervised learning algorithms, so they will learn something, but you don't really know what they learn. Whereas if you look at what happens in animals, you have this distinction between supervised and unsupervised learning. Unsupervised learning is mostly interesting during development, when you basically set up your visual system; later, when you go to school and learn from experience, it is basically some sort of supervised learning, either self-supervised or reinforcement learning. If you do something and get a reward, in whatever form, you do it even better the next time; or somebody explicitly tells you something to learn and you can memorise it immediately. These things are really poorly understood. So the Google system learned cats, but would it be able to learn, I don't know, cars if they wanted it to, particular cars? This is still very, very hard to do. Of course, machine learning people are still looking into the mechanisms that you have in the brain, and it is an open field.
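The supervised/unsupervised distinction drawn above can be illustrated on invented two-dimensional data: the unsupervised learner has to discover the two groups by itself, while the supervised learner is handed the label of every training point and only has to generalise. Everything here (the data, the cluster count, the initialisation) is an arbitrary choice for the sketch.

```python
import numpy as np

# Toy contrast between unsupervised and supervised learning on two
# well-separated Gaussian blobs (all numbers invented for illustration).
rng = np.random.default_rng(0)
group_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2))
data = np.vstack([group_a, group_b])

# Unsupervised: two-means clustering, no labels used anywhere.
centres = data[[0, -1]].copy()          # crude initialisation, one per blob
for _ in range(10):
    # assign each point to its nearest centre, then recompute centres
    assign = np.argmin(
        np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2), axis=1
    )
    centres = np.array([data[assign == k].mean(axis=0) for k in range(2)])

# Supervised: labels are given, so "learning" is just storing class means.
labels = np.array([0] * 50 + [1] * 50)
class_means = np.array([data[labels == k].mean(axis=0) for k in range(2)])

print("clustered centres:", centres.round(2))
print("supervised means: ", class_means.round(2))
```

On data this clean the two procedures land in the same place; the speaker's point is that the unsupervised learner had to discover the structure, and on real data you do not know in advance what structure it will find.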
And as I mentioned, if you look at reinforcement learning, current algorithms need 10,000 to 100,000 trials to learn the most trivial task, whereas we do it in two or three or five trials, and that is completely not understood. About 20 or 25 years ago, Geoff Hinton, one of the big machine learning people at Google, who has developed many of the neural network algorithms, commented that the neural networks existing at that time had the brain of a slug, or rather, he said, half a slug. I suspect he would now update that to a little more than a slug, but certainly there is a long way to go for the existing deep learning algorithms to reach the capabilities that we have: very good on particular tasks, but not as a general-purpose system. That is my view, anyway. I think we are going to close very shortly. Is there any other burning question that you need answered right now? Just this: when somebody is interested in entering neuroscience, one hears a lot about computational neuroscience, mathematical neuroscience, systems neuroscience; most of them overlap, but how do they differ? Maybe you can give an ontology of these branches, because it is very confusing for somebody who wants to enter the field: what course should I take, what is the field actually? I can give you three terms, though that probably won't help you in general. Systems neuroscience is a subject studied by experimental neuroscientists and by people who write programs to model those systems. Computational neuroscientists tend to use the techniques of computer modelling. Mathematical neuroscientists tend to try to prove theorems about how the brain works, which may or may not also be backed up by simulation. But those are just three terms; I don't know of any other ontology apart from that. Marc-Oliver, you look as though you were about to... You have answered that.
I think once you read some papers, quite often, say in the Nature Reviews Neuroscience kind of journals, one finds introductory articles on these various areas. For mathematical neuroscience I can say that there is a journal called the Journal of Mathematical Neuroscience, which came out of a group in Nottingham in the UK, so by reading, or trying to understand, some of those papers you will see the sort of things they deal with. So when there are no ontologies available, it is a question of looking at the sort of topics each field covers, but I agree that for the newcomer it is sometimes difficult to disambiguate the various terms. I'm sorry? The Hodgkin-Huxley model, the HH model, what would that be? That would be a model that you would say is computational neuroscience. I could say mathematical. Sure, absolutely; at some point there is a lot of overlap. You might say that many mathematical models pay no attention to mapping the elements of the mathematical system onto the underlying biological system, and Hodgkin and Huxley went sort of halfway there, so it is a sort of mathematical-stroke-computational neuroscience model, I suppose. And they did computations: they had a hand calculator that they wound to get numbers out, which was their simulation in this case. Yesterday, sir, I showed one slide saying that the computational model is a model derived from the mathematical model: the equations are the model, and you solve them using an algorithm, nothing else.
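Since the Hodgkin-Huxley model came up: the calculation they famously cranked out on a hand calculator is small enough to sketch in a few lines today. Below is a minimal forward-Euler integration of the standard squid-axon equations with the textbook parameters; the drive current, duration, and spike-counting rule are arbitrary illustrative choices.

```python
import math

# Minimal forward-Euler integration of the standard Hodgkin-Huxley
# squid-axon model (textbook parameters; drive current and duration
# are arbitrary illustrative choices).
C = 1.0                                  # membrane capacitance (uF/cm^2)
g_na, g_k, g_l = 120.0, 36.0, 0.3        # max conductances (mS/cm^2)
e_na, e_k, e_l = 50.0, -77.0, -54.387    # reversal potentials (mV)

def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

dt, i_ext = 0.01, 10.0                   # time step (ms), drive (uA/cm^2)
v, m, h, n = -65.0, 0.053, 0.596, 0.318  # resting-state initial values
spikes, above = 0, False
for _ in range(int(50.0 / dt)):          # 50 ms of simulated time
    i_ion = (g_na * m**3 * h * (v - e_na)
             + g_k * n**4 * (v - e_k)
             + g_l * (v - e_l))
    v += dt * (i_ext - i_ion) / C        # C dV/dt = I_ext - I_ion
    m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
    h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
    n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
    if v > 0.0 and not above:
        spikes += 1                      # count upward crossings of 0 mV
    above = v > 0.0

print(f"{spikes} spikes in 50 ms")
```

Four coupled differential equations, stepped forward in time: exactly the kind of loop that distinguishes a computational model, solved by algorithm, from the mathematical model on the page.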
I think we should finish the discussion there, but as a final remark I was going to say that we haven't been able to cover all the questions. Some of the questions you posed have been answered in the talks, but some of them may not have been, so I would suggest that if you have questions you want to follow up, you should mail us. In the spirit of sharing our knowledge with you, I hope we will all be receptive to answering your questions; if, for example, you went to a talk where there was a particular point you didn't understand, mail the instructor and ask. That is probably the first point of clarification for all the things we couldn't address, because we know there are lots of things we couldn't address. So that is, I think, the end of this session; thank you very much. We have a final talk, so Gaute is going to wrap up the proceedings. Thank you.