So the first question is about the role of interactions between big data and model-driven approaches in brain disorders. Actually, this morning one speaker also mentioned this question of data-driven versus hypothesis-driven approaches. In the age of big data, this question is very important because it is conceptually related to future studies.

So we're interested in big data, but the main challenge for us is not just to get data sets; it is to have complete data sets that include not only brain connectivity but also metadata (age, IQ, cognitive measures) across different modalities. Ideally, we would like data sets that include both functional connectivity and DTI structural connectivity. The challenge is really to achieve this integration between the different modalities that are measured.

I think one of the things that was very clear from listening, particularly to the epilepsy talks, and I think it's also relevant in psychiatric diseases, though less so, is the need to look at the individual subject. That runs counter to the idea that you have to set up models that will be generalizable, and to this notion of reproducibility, but in fact the devil is in the details with these individual patients. So I really like the idea of having the simulations, so that you could make predictions and then see which patients it worked for and which it didn't. That could be useful in thinking about why things didn't go the way you thought when the model seemed to be well behaved but you have exceptions. So I think it does go counter to all the big discussions about a theory of everything, the idea that somehow we have all the data together and any one experiment has to be replicated by someone else, when in fact the individual variability may be the source of the variance in a fundamental way.
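As a toy illustration of the integration problem described above, everything hinges on keying every modality to a common subject ID, since subjects missing any one modality drop out of a "complete" data set. A minimal sketch (all subject IDs, field names, and values here are hypothetical):

```python
# Toy sketch: joining connectivity measures with metadata by subject ID.
# All field names and values below are hypothetical illustrations.

def merge_modalities(*tables):
    """Merge per-subject records; keep only subjects present in every table."""
    common = set(tables[0]).intersection(*tables[1:])
    return {sid: {k: v for t in tables for k, v in t[sid].items()}
            for sid in common}

functional = {"sub-01": {"fc_strength": 0.42}, "sub-02": {"fc_strength": 0.37}}
structural = {"sub-01": {"dti_density": 0.18}, "sub-03": {"dti_density": 0.21}}
metadata   = {"sub-01": {"age": 34, "iq": 110}, "sub-02": {"age": 29, "iq": 104}}

complete = merge_modalities(functional, structural, metadata)
# Only sub-01 was acquired in all three modalities, so only sub-01 survives.
```

The point of the sketch is simply that incomplete subjects disappear as soon as you demand all modalities at once, which is why complete multimodal acquisition, not just large sample size, is the bottleneck the panel describes.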
Yeah, I absolutely agree with what's been said before. On the other hand, though, I think there is definitely also value in these big data approaches. I'm absolutely a fan of trying to understand and observe very closely, at an individual level as well as at a group level, what the changes are that we see, and not to do the machine learning approach completely blind to what's there, where you just see, okay, there is a difference, and only then dig into what the difference might be. So those are the two sides you've been discussing: on the one hand the hypothesis-driven approach, on the other this data-driven approach. We've been doing the hypothesis-driven approach for quite a while, and the data-driven approach is starting to come in, and there's value in it. But ultimately the really key thing, to make a real contribution, is to link the two. You ultimately need to understand what is actually driving the effect in your big data analysis, and you also need to test your hypotheses with actual data, probably big data as well. You might have captured 70% of the subjects and described them well with your model, but what about the other 30%? They might be a very heterogeneous group that you can capture better with a big data analysis than if you just have one individual from each subset. So I think there's value in both, and keeping both in mind and valuing both is probably the key.

Yeah, I would agree with that. I think we're very lucky in epilepsy in that we've got access to so much data, which is quite rare in other neurological disorders, but I think the key is not to get bogged down in that data.
So much of it is similar to other types of data, and we can easily get lost in it. So I would go along with what you're saying: this mechanistic understanding of what's driving what we see in the data is going to be really important going forward, to make sure we don't get lost in it all.

I think it's important that we draw a distinction between two things which have just been discussed here: big data versus hypothesis-driven, and individual versus cohort. They're two different issues. Big data is how we can explore the individual characteristics of any subject in a large cohort. Big data doesn't imply that we are only looking at the average brain or at a cohort study. Big data actually allows us to look at the individual; that is probably the most important thing it can do. And then it allows us to generate hypotheses, which can then be tested in a classical hypothesis-driven manner. So first of all, they're not mutually exclusive; they're very complementary, and we have to be very careful not to lump big data in with cohort studies, because it's not just that.

Since all these epilepsy people that work together are in the same place, I was just wondering, given some of the discussions about the plasticity that occurs with epilepsy: you have a lesion, and the local circuit and the larger brain change over time. With these tools you're developing, can you actually test that? It just seemed there might be some really interesting ways to do that, and I was wondering what some of the clues were.

Yeah, I was wondering that in your talk as well, when you were showing the effect immediately, then over a couple of weeks, and then the eight-week effect, and what the plasticity changes there were.
Yeah, it's an interesting question. In epilepsy, I'm not sure if there are epilepsy clinicians in the audience who might be better placed to answer, but from a modelling perspective there have been a lot of attempts to incorporate plasticity changes and how learning actually impacts epilepsy, and there are very tight links. Epilepsy actually has dynamics over years as well, implying that there are probably plasticity changes underlying these changes over the years. Models have to some degree tried to capture that, and I think we definitely need to do more in that field; that's basically the answer.

Yes, related to that: in 2011 we were looking at motor lesions. It's not related to epilepsy, but if you lesion part of the motor cortex in a computer model, you see increased activity in the peri-lesion area, and that's something you also see in real experiments and real cases. But one question is how you get very slow changes. Those are changes you see at a very fast time scale, but then you have clinical cases where people have a car accident and ten years later they develop epilepsy, so you have very slow developmental changes, and at the moment there are no real models out there that cover such a long time span. There is STDP, looking at very fast changes at the synapse level, but if you ask what the models and mechanisms are over five or ten years, there are very few models out there. It's one of the reasons why we are interested in development as well: to get more information about those long-term changes in the system.
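The fast synaptic changes mentioned here are usually modeled with pair-based STDP. A minimal sketch of the standard exponential learning window (the parameter values are illustrative, not taken from any specific study):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP weight change for a pre/post spike-time difference.

    dt_ms > 0: pre fires before post -> potentiation.
    dt_ms < 0: post fires before pre -> depression.
    Parameter values here are illustrative defaults only.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0

# Nearby spike pairs change the weight strongly; distant pairs barely at all.
```

The time constant is tens of milliseconds, which is exactly the panel's point: nothing in a rule like this operates on the months-to-years time scale of post-injury epileptogenesis, so a different class of slow model is needed.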
Well, it's kind of funny, because with depression and epilepsy you can argue that they're both limbic connectivity disorders, certainly involving the uncinate fasciculus and lesions in the hippocampus. So I'm wondering how volumetric or glial abnormalities affect these connectomes and your models. You talked a lot about the white matter, the connectome system, but how does volume, or an actual structural lesion, fit into that?

So there's some nice work out of UCL, from Gavin Winston and John Duncan, who've looked at the white matter changes following surgery and how they change longitudinally. They do a pre-op DTI and then a post-op DTI and essentially compare the myelination over time, and you get alterations far away from the surgical site which may or may not relate to seizures coming back on the time scale of years. I presented this in quite a simple manner, but some patients will have seizure freedom for 3, 5, 10 years and then the seizures will come back, and perhaps that's related to this far-away plasticity. I think it's a good idea for a future study, but the data's not easily available.

You know, it's just so funny, because we're stimulating, and there seems to be this odd situation: we can't put them back in a scanner to see if the white matter changed, because we've got the problems with the device, and even if we can get them in, you've got corruption of the signal around where you're stimulating. But I wonder, in patients that get the NeuroPace device or are getting thalamic stimulation, whether that recurrence rate is the same, and whether stimulation versus lesion ends up affecting the connectome differently. It would be fun to see a few. It'll be interesting to see; nobody's looked at that.

Okay, actually I totally agree with the comments from our speakers.
So actually, big data and related approaches such as connectomics and network modeling provide great opportunities to make findings which cannot be obtained by traditional hypothesis-driven approaches. So it's very important to combine the different study strategies in basic research on brain disorders.

Let's move on to the next question, about data collection and data sharing, which is very important for this area. In the age of big data for neurology and psychiatry, what kinds of data should we collect? Many open data sets have many kinds of information, such as clinical measures, behavioral measurements, neuroimaging and genetics, but very few open-access data sets have electrophysiological data such as EEG, MEG or ECoG. Data sharing is very important for brain disorder studies, especially for reproducibility using large-scale data from different sites and different researchers, so we can see how reproducible the findings are across studies. Many scientists are making great efforts to make these data available, but most of the data sharing is for healthy subjects; very few open-access data sets are related to brain disorders. So what are the big challenges for data sharing in brain disorders? Any comments on these questions?

I think it was partly me who contributed to that question. Essentially it's an observation, and maybe it's my limited view, but you have maybe one database of electrophysiological data, and there you just don't have the neuroimaging data that you would like: all the tractography matching with the actual MRI, matching with the actual CT, matching with the actual ECoG sites. To my knowledge, that does not exist. Some centers might have it, but there's just no open database of that on the scale that we see, for instance, for the Human Connectome Project.
In Alzheimer's disease you see quite the opposite: you have all the genetic data, all the MRI-based data, but you have no electrophysiological data. I can understand that it's motivated by where the clinical field has come from, but basically the question was: is there actually value in collecting multimodal data, in setting up a framework for what sorts of modalities we should be collecting in diseases in general, regardless of whether a disease is grounded in electrophysiology or in MRI-based diagnosis?

It's such a critical issue, seeing how these large data platforms are trying to be set up to handle all this data. I'll just say, doing clinical work, one thing is that you have to think about it in advance. Clinical data, if it relates in any way to the clinic, at least in the United States, has protections that become onerous, and so all of the stakeholders and collaborators have to be thinking in advance about how to create a platform that allows sharing to happen. Just before I left, the engineer in my lab was going to help the epilepsy people, who have been collecting research-grade data that has its own protections; but a new clinician comes in and wants to set up research-grade clinical scans, and patients have to pay. So even the healthcare system influences how data is collected. Ideally, patients coming in the door would have common data elements collected that could be used by everyone; there would be a way for it to be open to the clinic for clinical reads and billing for radiology, but then it would go through some chute that gets it de-identified and put into a centralized area that everyone shares. That's good for the patient, because they don't have to do multiple scans, and the hospital doesn't freak out because something isn't de-identified. But people have to figure out what they are willing to share, and that's in tension with having a totally open approach where people work together and do more than anyone could alone, when someone had to pay to get that research data and wants to reserve the right to do what they first thought of with it, and you don't have the resources to create the infrastructure you want. I could drone on and on. You get to the end of an experiment and realize, boy, I wish I had X, and then you have to start over with the data. So I think it's now pretty clear what the common elements are that, if we had them for certain diseases, people could work toward collecting, and agree to agree; but logistically it's hard. In the States now there are certain kinds of grants where, if you don't collect your data like the Connectome Project, you won't get funded, so there's a certain prescriptive approach that may or may not work for hypothesis-driven studies but that gives you a built-in set of controls. So people are trying to figure it out; it's just very clumsy right now, and it mostly comes down to someone having to put the data into a place and maintain it. And all the things everyone would want, these clinical elements, are the things you have to protect; in the States, as soon as you touch anything that is in the clinical record, the rules all change.

It's similar here in the UK. We have the seven-year project for an implantable device for epilepsy patients, and one person was spending a full year just on the ethics for the MEG scan and MRI scan. Not implanting a device, just non-invasive scanning of patients, and a year to go through all the ethics and all the paperwork. So it's really prohibitive, not just for sharing but just for data gathering; it's difficult.

But also, in general, in addition to those large databases, it's very important that individual labs keep a good track record. I was looking into one model published in 1991, and when I contacted the PI about that model, he said, well, the PhD student left
and I have no clue how it works. So it's really important to keep track of these things. Or there was the C. elegans data set that was published in a book in 1991, and it was basically 50 pages of tables and a five-and-a-quarter-inch floppy disk. It took me two months to get the book, and the floppy disk didn't work, but thankfully I somehow got the data. So it's really important for individual labs to keep the data and keep the models accessible over a long time, even if there are no suitable databases yet.

I wonder if I could change gears a little bit and come back to some of the beautiful connectivity work that we saw today and its relation to disease. I've done a fair amount of connectivity work, a lot of it with Yong He over the years, and at times one becomes disillusioned, because every disorder has reduced global efficiency, reduced local efficiency, reduced small-worldness. You start to feel like it's not very discriminant, not very useful. Do you have a sense of whether there are better metrics from the graph-theoretical world at this point, which could be more disease-relevant than the usual ones we all use?

That's one reason why we want to look into dynamics on the network, rather than just the structure of the network: to see how dynamics are changing in different parts of the brain, how brain rhythms are changing. But just in terms of structure, you can look at network flow, measures that are more about information flow in the system rather than just its basic structure. If you have a hub node, a high-degree node, you might say this node is important; but you could also have a node with a very low degree which sits between two modules and is very important, because it's connecting the two modules, yet it will not show up as a hub. So if you look at flow of information in the system rather than just the basic architecture, I think it will give you a bit more information. And otherwise, moving on towards simulations and
dynamics.

Actually, regarding the network disruption, just like Ellen said, it's not surprising, because we know most neurological and psychiatric disorders are related to disruption of cognitive functions, of a variety of cognitive functions, and we know that most cognitive functions depend on network organization. That's why we find very common findings across neurological and psychiatric disorders using graph-theoretical approaches. But even with some common findings, there are still differences among disorders; for example, they show disruption of different hubs and different sets of connections. This kind of information is very important for understanding and treating the different disorders.

I mean, I've always found the graph theory stuff very enticing, but I don't know what to do with it. On the other hand, if I could set up a network in normals and, just using our examples as a case study, ask: if you altered the efficiency of just the uncinate fasciculus, with or without abnormalities in the hippocampus, what would it do to global efficiency? Because you have atrophy of the hippocampus in depression, and you have gliosis of the hippocampus in selected temporal lobe patients. So could you actually use clues from pathology findings to test whether this efficiency measure actually means something you can link to something else? I think, once you're in the domain you guys are in, you really want to look at network physiology; but back to what we can do for everybody: what does efficiency mean? I think that simulations at the functional connectivity level, across platforms, without physiology, could be informative and kind of direct traffic. Rather than whole-brain metrics like global efficiency, regional efficiencies might allow you to discriminate different subtypes of
disease, and epilepsy would be a very obvious case, which would benefit from discriminating typical subclasses of epilepsy.

Can I just make a point about the question Alan asked: whether there was anything relevant to disease. There is a distinction to be made between relevant to disease and specific to disease. You may be getting information from very generalized connectivity data that informs you about the progress of a disease, provided you know what the disease is. If that's the case, then maybe the temporal narrative is very important, and catching where the person is in the course of the disease, not necessarily by the clock but by a biological clock, might be quite important. In progressive diseases, people don't necessarily take into account how long the disease has been there, as a very crude marker of where in the story of the disease they are. Clearly, early connectivity changes are much more likely to be primary, whereas the secondary ones may be some of the cognitive reactions, the failure of the brain to cope, if you like, with the consequences of disease.

Talking about disease progression: of course, the problem is that people only show up in the hospital at a very late stage. Think about Parkinson's disease: many cells are already dead by the time patients show up in hospital.

Yes, so there are two options for working on that. One option that we are currently looking into is simulating development, changing connectivity and testing under what circumstances we get into certain brain diseases, because then we have some idea of what happened before, what the potential mechanisms are. The other option is to look at the UK Biobank. As you might know, it's a measurement, with DTI and functional connectivity, of 100,000 healthy subjects, aged 40 to 60 I think, and the idea is that some of them will, in 10 or 20 years, start to develop dementia and other diseases, so you can go back to the earlier data set to get an understanding of how the disease
is progressing over time. But of course we are just starting to get the data; at the moment there are 15,000 MRI data sets, and there are 6 centers in the UK measuring data, so it will take some time before all the data is available, and then it will take 10 to 20 years before you see the diseases. So it's a long-term approach.

You make a really important point, even in how we have tried to approach collecting our depression data. I've actually been at Emory longer than I've been anywhere, and I've managed to keep the same sequences and have everything stored across the different cohorts. The cohort I showed was a never-treated cohort, whereas the PET cohort was a recurrent depressed cohort, and the DBS cohort is obviously very refractory. So now we're going back, instead of asking the original question, to try to get an idea: is it an accumulated hit over time with multiple treatments, or is it just an accumulation over time? You know, Yvette Sheline had shown that hippocampal atrophy was related to how long your depression was untreated. So I think you can start to set up these questions if you've actually put your data in a place, imagining that someone else might have a better idea than you did when you started. And actually, what we're seeing in our data is that everyone has a functional connectivity change in the ventromedial frontal-cingulate connection, but as you get more recurrent and need other treatments, you start to pick off other connections, and there are accompanying white matter abnormalities. So I think you can set up the model based on a clinical impression and actually look at the data to see if it follows, because you've got all the data stored, and we're trying to figure out how to do that so it will be accessible to people's questions.
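The earlier question, what lesioning a single tract such as the uncinate would do to global efficiency, can be made concrete with a toy graph. This is a sketch on an invented six-node network, not any real connectome:

```python
from collections import deque

def global_efficiency(nodes, edges):
    """Mean inverse shortest-path length over all node pairs (0 for unreachable pairs)."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0.0
    for s in nodes:
        # Breadth-first search from s gives unweighted shortest-path lengths.
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for n, d in dist.items() if n != s)
    return total / (len(nodes) * (len(nodes) - 1))

# Toy "connectome": two triangular modules joined by a single inter-module edge,
# standing in for a lone tract (like the uncinate) linking two lobes.
nodes = range(6)
edges = [(0, 1), (0, 2), (1, 2),      # module A
         (3, 4), (3, 5), (4, 5),      # module B
         (2, 3)]                      # the single bridging tract

e_intact = global_efficiency(nodes, edges)
e_local  = global_efficiency(nodes, [e for e in edges if e != (0, 1)])  # redundant edge lesioned
e_bridge = global_efficiency(nodes, [e for e in edges if e != (2, 3)])  # bridging tract lesioned
# Cutting the redundant within-module edge barely moves global efficiency,
# while cutting the one bridging tract collapses it.
```

This also illustrates the flow point made above: the bridge is a low-degree element that no hub measure would flag, yet lesioning it does far more damage to network-wide efficiency than lesioning a better-connected but redundant part of a module.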
Okay, I'm very intrigued by something you said about simulating development; you mentioned it in your talk as well. It's very intriguing to think that perhaps we can reach the level of knowledge where we can simulate development and see the connections with disease. I remember some of the papers that you wrote with Klaus ten years ago or so, where you had a very simple model of development based on distance. So I'm wondering to what extent you think we are even close to that goal of actually simulating development, considering pruning and all the different effects that happen in adolescence. Are we even close to thinking about that, or is it completely science fiction?

So it depends what level you're looking at. In terms of pruning, most of the data comes from animal studies; in humans you can't do lots of post-mortems at different stages, not in a systematic way. Of course you have some embryonic data where you can look at what happens, but there's not enough data to really go down to that level, I would say. And especially when it comes to changes that are based on training, on learning for example, I don't think we can really capture that local change in connectivity in humans at the moment. What we can do, however, is make different assumptions about how brain connectivity is changing at the global level. The global level can be fiber tracts; it can also be features like gyrification and cortical thickness. So we get some idea of how those features change over a couple of years, and that is something we can put into a computer model; but other things are difficult to measure in humans and therefore cannot be part of a model.

Last question: how far do you think it is from basic research on big data, such as connectomics and network modeling, to the clinic?
In terms of getting some understanding of what changes in patients, I think we have come quite far. In terms of understanding how to treat them based on those changes, we are not that far yet. Say, for one individual patient, you see that 10 brain areas show differences in connectivity; we still don't know how we should then treat the patient, which brain area we should target, what the best approach is for that particular patient. I think an even further challenge is convincing clinicians to change their procedures. In our case, epilepsy surgery, clinicians have certain procedures that work relatively well, and they have used those procedures for the last 50 or 100 years, so convincing clinicians to change their approach is difficult. It depends a bit on the disease, I guess, but it's something that potentially takes a longer time.

Change is slow. What I've found is that in psychiatry, offering this approach pushes the belief system; it removes some of the autonomy of clinical decision-making when there is no ground truth. So you have to take the data and demonstrate it, and demonstrate it again, until they can't look away; or alternatively demonstrate it as a proof of principle and then find a surrogate that can actually be implemented. There is a real schism between how psychiatry wants to solve a problem and see this kind of community, and how neurology does, and it fundamentally comes down to the way in which business is done. So I think we just need to plug along. It's a battle: even when you present a decision tree to be tested, it bothers people, and that's just religion, so you have to have adequate data and deal with it. That's how any shift in how medicine is done happens. In psychiatry, people were very, very slow to adopt medication; the psychotherapists who started using medication actually hid it. I mean, you got kicked out of the
club. And then everyone realized that both together was better. So I think this is just a progression.

It's not just in medicine. There is this quote from Max Planck: the reason a new theory eventually takes over the field is not that everyone gets convinced, but that the proponents of the old theory die out. So it can be a very long process.

Actually, validation is very important, especially for imaging data and for electrophysiological data and the related approaches. There are many influencing factors, including diagnosis, analysis approaches, and different parameters, and we don't know the ground truth right now. So it's very important to have more subjects and long-term data, and to do various analyses for validation.

Yeah, and finally, especially from the computational side, I think there is also a certain responsibility for computational people to communicate their findings in a way that clinicians can digest and take on board. It's not enough to present that there's higher efficiency here and here and stop there; you need to go a step further, extend an arm in a way, to say this could be used in such and such a way, to propose the study that would be useful to bring this into clinical practice, rather than just saying, okay, these are the network changes, do with them what you want. It's a communication effort, I think, on both sides, and it's happening; it's just slow.

I think if you're at 70% and things are going well and there's no harm, then the added value, the benefit, isn't clear. But in epilepsy, where you've got a real non-response rate, understanding what surgical procedure is best for a given brain state has real implications. And I've noticed that our epilepsy surgeons like trying out new things, but they don't want to do things that are
dangerous. So if you have got a biomarker that points them in the right direction, they're definitely going to listen; but until you give them something that is reliable and well communicated, they don't have any reason to change. That's why our measures have to show both the added value and that they help avoid something that is bad. Clinicians are trying to do good; they're not that stubborn, but it's got to be better.

We can see in our field that ideas that are easier to understand are more widely adopted. If you think about graph theory, everyone is talking about hubs and small worlds and modules; very few talk about spectral analysis of matrices. Both describe networks, but one is much easier to understand than the other. So I think it really makes a difference to have simple concepts and to explain them.

So, any questions or comments from our audience? Okay, it's time to close our session. Again, I would like to thank all of the speakers for their critical and important suggestions and comments. Thank you very much.