Marcus, can you tell us a little about the limitations of the PLI method in terms of mechanical limitations? How thin can you slice, and at what point do you have to appeal to optical approaches? Just talk a little about that.

Yeah, so the mechanical limitation to date is about 50 micron thickness for the large slices. For mice and rats, I think we could go down to 30 micron with PLI. It is important that we have as large a signal as possible, and the thicker the slice, the larger the amplitude is, so this is another technical issue. Clearly, from the PLI point of view, we would also like to understand what is within one of these voxels. So we have started collaborations with people from Florence, from the LENS Institute, who are doing optical fluorescence microscopy and two-photon microscopy on the same slices, so that we start to learn what is even within our small voxels. There is a lot of work going on in this direction, and it is clear that these very high-resolution data sets will not be acquired for the whole human brain, but we would still like to understand what is within a section thickness of 50 micron. In addition, if you would like to understand what the cytoarchitecture looks like in the brain (the BigBrain gives us information about the localization and the size of most of the cells), there is a lot of work ongoing on how we can scan through the different virtual sections, to come up with a 3D reconstruction of the cells within each section. So there is a lot of dynamics currently in this field.

I'm interested in the computing challenge associated with PLI. In your presentation you focused more on one-off computations: you have to do these computations only once on the data set, and once they are done, they are not needed anymore.
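The remark that "the thicker the slice, the larger the amplitude" follows from the standard birefringence model behind PLI, where signal amplitude scales with the retardation induced by the section. The sketch below is purely illustrative and not the speakers' actual pipeline; the birefringence and wavelength values are assumed order-of-magnitude placeholders, not measured constants.

```python
import numpy as np

# Illustrative sketch (not the speakers' pipeline): in polarized light
# imaging the signal amplitude varies with the retardation
#   delta = 2*pi * d * |birefringence| / wavelength,
# so within the small-delta regime a thicker section d yields a larger
# amplitude |sin(delta)|. Parameter values below are assumptions.

def pli_amplitude(thickness_um, birefringence=1e-4, wavelength_um=0.525):
    """Relative PLI signal amplitude |sin(delta)| for a flat fiber."""
    delta = 2 * np.pi * thickness_um * birefringence / wavelength_um
    return abs(np.sin(delta))

for d_um in (30, 50):
    print(f"{d_um} um section -> relative amplitude {pli_amplitude(d_um):.4f}")
```

With these placeholder values the 50-micron section gives a larger amplitude than the 30-micron one, matching the trade-off described in the answer.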
Do you also foresee that these types of data sets would require computing power on a routine basis, for instance if they were integrated into analysis pipelines, or if people wanted to register to this data set?

Yes. I had one slide dedicated to the workflow process, and I said that we put everything into an HDF5-like format. This will hopefully give us the possibility to provide this kind of provenance information throughout the whole workflow. In the end we will have to provide all this data to the people doing MRI, and I think the registration process itself, from 3D to 3D, is something currently being targeted by the Human Brain Project; it is not really in our main focus, which is why I didn't focus on it. But we try to take care of all this provenance information so that we are able to repeat the same type of analysis after changing a single step, a single piece of software, in our workflow. The workflow then becomes more intelligent: if something in the middle changed, we know which modalities we have to re-calculate. This is what we are aiming for, but we have not yet succeeded in implementing it.

Let's have Stephen go first. I'm trying to put what we heard in the morning together with what we heard in the afternoon, and it strikes me that no one really talked about the morning's issues, except maybe Nico; I guess we can consider matching generative models to the data as an example of a form of replication.
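The "if something in the middle changed, what do we have to re-calculate" idea is essentially dependency tracking over workflow stages. A minimal sketch, with plain Python dictionaries standing in for HDF5 groups and their attributes; the stage and software names are hypothetical:

```python
# Hypothetical sketch of the provenance idea described above. Each stage
# records which software produced it and which stage it came from,
# mirroring attributes one might attach to HDF5 groups. When one stage
# changes, every stage downstream of it is flagged for re-computation.

workflow = {
    # stage name:       (software producing it,  upstream stage)
    "raw":               ("scanner-readout 1.0",  None),
    "flat_field":        ("flatfield-corr 2.1",   "raw"),
    "fiber_orientation": ("orient-fit 0.9",       "flat_field"),
    "fiber_tracts":      ("tract-builder 1.4",    "fiber_orientation"),
}

def stale_stages(workflow, changed):
    """Return the set of stages to re-compute after `changed` changes."""
    stale = {changed}
    grew = True
    while grew:  # sweep until no new downstream stage is found
        grew = False
        for name, (_software, upstream) in workflow.items():
            if upstream in stale and name not in stale:
                stale.add(name)
                grew = True
    return stale

print(sorted(stale_stages(workflow, "flat_field")))
# -> ['fiber_orientation', 'fiber_tracts', 'flat_field']
```

In a real HDF5 layout the `software` and `upstream` fields would live as group attributes, so the same traversal can be run directly over the file.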
But no one talked about it. You are all representing the development of new methods, yet no one really talked about reproducibility or replication. Is this because you don't consider them important as one of the first steps? We heard about some simulations that will be done later down the road. Should we think of these as very early checks that would rule a particular methodology, or a particular software implementation, out because it hasn't been demonstrated? I'm curious about the intersection between all these new methodologies and the issues we heard about this morning. Does anyone want to take that on?

I would like to say something about tractography algorithms, how they are compared and how their results are reproduced. It started with the people who came up with the FiberCup data set that I showed in the last couple of minutes. That has been extremely useful in the area of tractography, because before that study there were so many tractography algorithms and nobody really knew which one was better. That study helped a lot: they ran a competition, basically, across many different algorithms on a given data set, for a given particular set of connections. I think it helps when communities agree to run these kinds of competitions. By the way, competitions like this happen all the time in machine learning; the deep learning community, and the machine learning community more generally, hold annual competitions in vision, in natural language processing, and so on. It is not exactly reproducibility, but it is related: figuring out what works, what doesn't, and which method is better among a large set.

I think the reason no one is talking about reproductions is that they are so boring: you are just showing that what you claimed before is actually true. Of course that is really important, but it is also not very inspiring, so we have to find ways of making it part of our workflow while still keeping our focus on conceptual advances. I think competitions really are a great thing; they boosted computer vision, and the rapid breakthrough with neural networks catching on so quickly was probably only possible because of secret held-out sets that made it essentially impossible to cheat, and therefore made the results very believable. Even in computer vision and machine learning there is circularity and overfitting going on, and secret held-out sets are the way to make that impossible. In neuroscience there have been a couple of attempts in that direction, but it hasn't really caught on. We should probably think creatively about how to make this part of our normal work, maybe with annual competitions of that kind, with new data being added on major paradigms, and people competing with computational models to explain these data.

In the same vein, I was looking to pull together the two halves of the afternoon session, the more imaging-oriented versus the more statistical. I'm wondering whether these new statistical approaches offer us something new for combining electrophysiological MEG-type data with fMRI data, towards giving us better spatial resolution; this is an imaging session, after all. We all know about the Logothetis-type problems: the neural data is very different from the vascular data from which we draw our fMRI signal. But if we ignore that truth and just try it anyway, might we see better spatial resolution in fMRI data built upon incorporating MEG data with these new statistical approaches we've heard about today? Or is it a bridge too far, and the HRF will kill you anyway?

Well, I think that's a very complicated issue. Vascular responses and electrical responses are not the same thing. As I also pointed out in my talk, there are limitations on the degree to which you can use both signals to say something about neural data. Even though this is extremely common in fMRI (most functional connectivity studies, for example, make claims about correspondence of neural activity), the signals being measured are really vascular signals, so it might just be similarity in the vascular response between areas that is driving these correlations. Maybe higher-resolution data will help, but we should really realize that in fMRI, at least, this is a vascular signal. Maybe the future really is there: I think there are interesting things to say about the brain just from how neural activity relates to vascular responses, and there are many clinical situations where this coupling doesn't work well and where fMRI could play a really important role.

I would say that maybe there is hope with a new method, and one thing that avoids this confusion of vascular and neural signals is the focus, which I think really started with Niko's work, on representations and information content rather than on the signals themselves. By relating the information content in different signals, or their representational geometry, we can compare modalities. Niko didn't mention it today, but he has a lot of work on comparing modalities like this, and people have used RSA to do it. I hope that with my method we can also do this, looking at redundancy between individual time points, either of a whole MEG helmet or of a single sensor, and individual voxel responses. So I think that focusing on this information-content, representational question holds some promise.

So what would we need to be able to see blobs and interblobs in vivo? Nico.
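The RSA-style idea mentioned above, comparing modalities at the level of representational geometry rather than raw signals, can be sketched as follows. This is a toy illustration on synthetic data (none of it from the talks): each "modality" is a conditions-by-channels response matrix, and the question is whether their representational dissimilarity matrices (RDMs) agree.

```python
import numpy as np

# Toy RSA-style comparison across modalities. Two synthetic modalities
# share a latent condition structure; a third is pure noise as a control.

rng = np.random.default_rng(0)

def rdm(responses):
    """Correlation-distance RDM: 1 - Pearson r between condition patterns."""
    return 1.0 - np.corrcoef(responses)

def rdm_similarity(rdm_a, rdm_b):
    """Pearson r between the upper triangles of two RDMs (published RSA
    work often prefers a rank correlation here)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

latent = rng.standard_normal((12, 5))           # shared condition structure
fmri = latent @ rng.standard_normal((5, 200))   # "voxel" patterns
meg = latent @ rng.standard_normal((5, 60))     # "sensor" patterns
noise = rng.standard_normal((12, 60))           # unrelated control

print(rdm_similarity(rdm(fmri), rdm(meg)))    # high: shared geometry
print(rdm_similarity(rdm(fmri), rdm(noise)))  # near zero: no shared geometry
```

The point of the abstraction is that the two modalities never need to be in the same measurement space; only their condition-by-condition dissimilarity structures are compared.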
I think the additional element we need to consider is: what is the purpose of all this? For me the purpose is understanding computational mechanisms, and that means that ultimately we want models that perform the tasks; we want models that are AI systems but that also explain neural and behavioral data. If we take that to statistics, those should be our generative models of the data, and constraining our model search to only things that can actually perform the task is an incredibly strong constraint. Then we don't necessarily need as much data from our imaging technology. I think it's unrealistic to ask imaging technologies, even future ones, to give us all the constraints for building our functional model of the brain; that is never going to work. There is always going to be a reverse-engineering challenge where we have to build something that actually works, and then, if we have a limited, finite set of functional mechanisms, we can use forward models that simulate, for example, fMRI data or MEG/EEG data. These lose a lot of information, but I think they retain enough to adjudicate between, say, a thousand different deep neural networks that do the task.

Is there a role for structural priors, BigBrain or PLI-type data, within functional data analysis?

Definitely. I think we need all the constraints that we can get.

I have a follow-up question on the procedure for increasing the temporal resolution of fMRI that we discussed before: how was the experiment performed? There was an example that presented a horse. Was that stimulus repeated, the same stimulus every couple of seconds, or was it a new stimulus each time? And if it was presented periodically, so that a person can anticipate that the stimulus is going to appear, does that affect the results?
Yeah, that's a good point. The experiments were done with different stimuli. The idea is that we are interested in the general process of naming pictures, of language production, so the specific type of stimulus does not matter. It's the standard assumption in all experimental designs where you compare two experimental conditions composed of different stimuli: you are not interested in the particular form of the individual stimuli, only in the commonalities between them. So it's that standard assumption, which I don't think is strange.

But then again, in another presentation we saw that there were different responses to different kinds of images, for example a face versus an inanimate object. Wouldn't that distort the results?

Well, there might be small differences in the response to, say, a horse versus a car, but I'm not interested in that difference; I'm interested in the general similarities between saying the names of those pictures in this particular data set. If I were interested in saying "horse" versus saying "car", I would have to do separate analyses for those two types of stimuli, but because I'm interested only in naming pictures in general, I don't. I think it's a standard assumption.

I was curious about the models you're using and the data.
I think we're close to the point where we will be shipping data somewhere to be crunched by tools and analyses, or the other way around, where you send your algorithms out. How far are you, on the modeling side, from having something like a science gateway, where you could put those models and tell people: if you have data of this kind, you can try these models on it today? And how far are we from infrastructures where those models can accumulate and be compared, so that the competition isn't really a competition: you just see the state of the art, try a new model, and push it to this infrastructure. I don't have a very precise idea of exactly what you're doing, so I'd like to hear how far you think we are from that sort of endeavor.

Yeah, I'm kind of dreaming of a website where you can upload brain-activity data sets together with the stimuli (essentially images, because I'm studying vision: images and brain responses together) and where you can also upload models that process images. When you upload a new model, it is instantly evaluated on all of the data sets in the repository; when you upload a data set, all the scores of all the computational models are updated to reflect the additional evidence. That would be the dream. How far are we? I think there are still a number of steps to go, in terms of agreeing with other people in my field on the data format and on the practical issues. It is certainly easier for the field of vision than for all of neuroimaging; early on, the national fMRI data center failed, in part, I think, because every experiment is very different, and it's very hard to find a standard that encompasses all of cognitive neuroscience. Within vision I think we have a better shot at that, and there is a group of people who have been talking about this for a number of years and who love the idea. We just have to get our act together over the next three to five years to actually build something like that.

It would be very interesting to see exactly what's needed for that, with a focus on your field, and then maybe it becomes an example for other fields to copy and adapt. I think that would be a fascinating enterprise.

It looks to me like there are no more questions; I think people are punch-drunk at the end of the day, so I'm going to call it a day. I'd like to thank all the speakers for a very entertaining session. Thank you very much.