So for the two lines of discussion, I'm going to paraphrase some of the topics that you sent me. The first has to do with visualization. The questions are all related to each other, but I'll summarize them here. What are the community's needs for virtual reality or augmented reality techniques in neuroscience research? What is the potential for virtual reality as scientific visualization to enhance neuroscience research, knowledge, education, or training? What brain network visualization limitations need to be improved upon, or what problems need to be solved with respect to visualization? And what kinds of risks should we be aware of when simplifying network visualization? These are mostly topics for the speakers from earlier this morning, but I believe others of you have dealt with visualization through modeling as well. So I'll open the floor; feel free to comment on these as I start.

Hi. So basically, virtual reality and augmented reality are still in their early days as technologies. I know they've been around since the 1970s, but they haven't had time to grow into mature technologies. I think there's huge potential, but we still need more hardware to come along; the haptic devices I was talking about earlier will be an important element. And on the question of simplifying the models we visualize, there's another side: as the hardware improves, we can develop new techniques that improve rendering times by optimizing the way we render rather than by tweaking the model itself. I know we're talking about a lot of data that needs to be visualized, but with work on that side, we can preserve data integrity and just optimize on the back end.

I think this topic is actually quite big; it can hardly be discussed in a few minutes.
But I'd like to use a recent example, the popular Pokémon Go, to highlight the importance of augmented reality, and maybe also virtual reality, for our research. I have a seven-year-old, and this little boy simply got addicted to Pokémon Go; he can probably play it all day long until I eventually force him to sleep. What's really interesting about the game itself, from a developer perspective (I developed some of the large-scale visualization tools I showed in my talk myself), is the novel part: it lets you overlay gaming components, including all the rewards, on top of geographical information, and then you can form some sort of community. Instead of using this just for gaming, I think it could become a very useful learning tool. For example, in a conference environment like this one, if everybody started to play, we could use the same mechanism to illustrate ideas much better. If we overlaid parts of a presentation, say the brain network we care about, we could have much more vivid communication with colleagues and share ideas: while one person is presenting, the others could get a much better sense of exactly what the presenter means, and maybe give feedback. That's another form of augmented reality I think could be brought to the table. And along this line, I think that's why big companies like Facebook and Google want to put in a lot of investment.
And I think this will probably also be the next generation of social platform. From the neuroscience perspective, one thing we could certainly think about is understanding the neural basis of all this; that's obvious. But on the other hand, I feel it's probably more important for the community to start developing useful tools for ourselves, to make it much easier to communicate, especially scientific ideas, to exchange data, and to share resources, so that everybody becomes more productive.

It's really like the idea of walking around the brain and saying: I found a really rare axonal cell, you know, 10,000 points. Any more comments, on anything from Pokémon Go on?

Not quite Pokémon Go, but just to add to what Esther said. I just spoke about this tool BluePyOpt that we released; it's basically a Python environment for modeling single-neuron physiology. One thing about BluePyOpt is that it's cloud-based, so you can run it on a cloud. You raised the point of hardware for visualization, so one way to go is probably to run all of this on a cloud, to really leverage the advantages of cloud computing. I don't know if this has been looked into; I'm by no means a visualization expert.

Just to add to that: I believe Microsoft has done a test with the Xbox One where they ran a very intense rendering of a physics simulation on a local machine, and I think it performed better when they ran it from the cloud. So slowly the infrastructure is coming into place where we'll be able to run these kinds of workloads from a cloud: a single point with all the data sitting there, being collaborated on and verified at the same time, and we'll just have a set of devices that filter through all that data.
Maybe I can ask one question to the fellow members of the panel. I definitely see the aspect you were talking about: by enhancing collaboration and making things more fun and exciting, you spark creativity and motivate people. But there's quite a drive toward virtualization now, and while in some cases I can see the point, especially in bringing people closer together, in other cases I'm not so sure. So I wanted to ask: I can understand that VR makes it more fun, easier, or more efficient to interact with a model, but would you say there are aspects where discoveries were actually made by having a VR version of the brain, rather than by accessing or analyzing it the normal way?

That's a great point. Let me summarize a little what he was asking: beyond the data-sharing purpose, or even the entertainment aspects the technology would bring to the field, could this really become an enabler, some sort of game changer, that lets us generate better discoveries, maybe at a lower cost? I definitely vote for it, and this comment will get a bit more technical. Normally, when people generate a 3D visualization and put it into a VR environment, it's based on the generation of a certain model, a geometrical model consisting of surface objects and the like.
One pretty big limitation there is that in the process of generating the model, there is normally information loss. And because we only visualize a model that has already been generated, anything related to discovery has probably already been discovered during model generation, not through further interaction with the model during visualization. On the other hand, one thing I didn't highlight in my talk about Virtual Finger is that we can visualize a lot more information without pre-generating such a model. That's a very important point in a Virtual Finger-enabled scenario: we can start directly from multi-dimensional density data, which in this particular case is image data, where a certain kind of camera takes pictures in multiple dimensions; such multi-dimensional data could also come from other domains, for example a brain network. This new technology lets you navigate at unprecedented speed and accuracy, and do a lot of exploration, knowledge discovery, and pattern detection on data that has not been modeled with surface objects. So I think sooner or later the next generation of augmented reality engines will include something of this type: for a real-world picture that has not been segmented and modeled with surface objects, you would be able to directly manipulate the semantic objects in it.
In addition to Virtual Finger, I recently read an article about a lab at MIT trying to do similar things, probably also within the last year. For example, they take a picture with, say, a car in it, and they let people directly move that car around without pre-segmenting it. That idea is intrinsically similar to what I talked about in the Virtual Finger case, except that we deal with multi-dimensional data and they deal only with two-dimensional data. But that's the basic idea, and it can certainly enable a lot of new discovery, because you can directly do many other things. On top of that, one application that could come out of this is three-dimensional free-form surgery. I do want to mention this a little, because it potentially relates to some of the research here, for example brain surgery. Right now, if someone has a brain tumor, surgeons use something like a Gamma Knife: they focus the gamma beam somewhere and ablate that area. But during such a surgery you can only burn a particular location; you cannot cut an arbitrary shape. So we used this Virtual Finger technology to create something we call three-dimensional free-form surgery. We have already implemented such a system for laser ablation experiments, reported in our paper, so that you can use a focused laser beam to cut through anywhere in a neural circuit.
After such a surgery, you can do simultaneous custom imaging, and you can use this as a way to systematically perturb the brain's structure and see what happens in the functional domain. I personally believe this could become a very powerful way to understand brain function and lead to a lot of discovery.

I see Jeff has been holding onto the microphone. I just want to answer Klaus's question in a different way; that was a really interesting answer. Thinking of it from my robotics standpoint, and how to make discoveries: we did robotics for a number of years because the virtual reality and augmented reality tools weren't good enough; we wanted the complexity of the environment. The brain is all about behavior, right, tying brain activity to driving action so you can survive in the world, so we were doing this to close the loop. Now, looking at some of Marius's work and others', I think you can do that discovery: you can put good models in there, place them in environments they weren't exposed to, and get discovery that way. So I think it's important that we're reaching the stage where we can close the loop between the brain, the body, and the environment.

Can I add one more element to that? Sorry. One of the big differences between rendering in 3D on a flat screen and stereoscopic rendering is the way we see things in 3D. On a flat screen, it's how the light bounces off objects; that's how the rendering tells us one object is in front of another. With stereoscopic vision we don't have that issue: we actually see which object is in front of the others, so we can select accurately, and not based on a shading algorithm that we wrote.
We're going to see actual positions in the world along that particular axis. That becomes even more crucial in something like surgery, or in identifying different parts of a complicated network. And that's why I think both augmented reality and virtual reality will have a big impact once the technology matures enough.

Let me expand the topic with a second set of questions. We're close to philosophy here, but I think we're still grounded in science. This second set goes from the more general to the more specific. What could be considered an appropriate level of biological detail in a large-scale brain model? How could large-scale detailed and simplified models inform us about brain function? What are the most relevant data for understanding brain structure and function that should be systematically collected? There is the challenge of weeding out spurious correlations in big data analysis. And lastly, how can we make large-scale complex models stable? The brain operates over a wide range of scales, but our models take forever to tune and are often brittle. This is a complementary aspect of the conversation for large-scale work as well.

A couple of years ago, Chris Eliasmith brought out a paper, I think it was in Trends in Neurosciences, called the use and abuse of large-scale brain models. There, Chris posed the question: what's the appropriate level of detail a model should capture? Of course, this depends on the kind of questions you want to answer. But on the other hand, if you had a scaffold framework that captures much of the structural complexity, it would then be possible to use this scaffold to simplify as you please, because it gives you a reference. Otherwise, I don't know.
In a simplified model, if you start off by building something very simple, how would you be able to say that it's this neuron type that supports gamma oscillations and that neuron type that supports beta oscillations, or something like that? I believe there has to be some kind of convergence between these bottom-up detailed models and these top-down models, in a more systematic fashion. I don't know if you have something to say about this, Jeff.

Yeah. The question always comes up at these roundtables: what's the proper level? And I guess the answer is always that it depends on the question you're asking. But I've been around for a while now, and it used to be that everyone said, why are you making this model with so many neurons? I can do this in ten. Now it's gone the other way around: why can't you make a larger model? It's interesting how the feeling has changed. But I think now we're at the stage where we have both. You can take large-scale models and, hopefully, extract some principles to get more top-down models, and vice versa; the top-down models can inform the bottom-up ones. And it depends, as I said, on what you're trying to answer, whether it's detailed connectivity like Klaus is doing, or something tied to behavior in real time like we would be doing; we have real-time constraints that impose limits. And while I have the mic: I was the one that brought up tuning. At all these stages, whether it's simple top-down or very detailed models, the problems get worse with detailed models. Once you have something brain-like, with recurrent structure, you have incredible instability. It boggles my mind how the brain stays stable over a wide range of environmental constraints, both internal and external, and I don't think we have an answer for that.
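The instability Jeff describes can be seen even in a toy linear recurrent network, where whether activity decays or explodes under x(t+1) = W x(t) is governed by the spectral radius of the recurrent weight matrix. This is a minimal sketch, not any model discussed on the panel; the network size, random weights, and scaling factors are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # illustrative network size

# Random recurrent weight matrix; its spectral radius (largest
# eigenvalue magnitude) controls the stability of x(t+1) = W @ x(t).
W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
rho = float(np.max(np.abs(np.linalg.eigvals(W))))

def final_norm(W, steps=100):
    """Iterate the linear dynamics and return the final activity norm."""
    x = rng.normal(size=W.shape[0])
    for _ in range(steps):
        x = W @ x
    return float(np.linalg.norm(x))

# Rescaling the same weights just below or above spectral radius 1
# flips the dynamics from decaying to exploding.
stable = final_norm((0.9 / rho) * W)    # activity shrinks toward zero
unstable = final_norm((1.5 / rho) * W)  # activity grows without bound
print(stable, unstable)
```

In real models, nonlinearities, plasticity, and external input make the picture far messier, which is part of why tuning detailed recurrent models is so hard.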
I'm starting to get hints from people like Karl Friston and others doing more theoretical work, but it's still a very open issue we have to deal with if we're going to go forward, because we're trying to make more and more complex things, and the brain somehow has this complexity and still operates. So it's something to think about.

What strikes me, thinking about these two topics, is that I would expect the computational modeling and simulation and the visualization to be more integrated, with much more in common. Things such as measures of complexity, graph-theoretic analysis, and both top-down and bottom-up approaches to the relationship between structure and function would be natural points of interface between visualization and models. And given that we're now running models on GPUs, which were really driven by visualization from the game industry, are we going to reach a point where we can create models and visualize them in real time, and vice versa, affect the models from the visualization in an interactive way? Maybe that's too forward-thinking.

I just want to add some short comments on that. We have been thinking about an end-to-end experiment that starts with imaging in a functional screening experiment. You image some area of the brain, and then in the same system you run some sort of real-time analysis and identify the regions of interest for certain neurons, at the compartmental level or something like that. Then you run the simulation and modeling there and predict what would happen. Then you go back to the same experiment, change the type of stimuli, and do a kind of iterative hypothesis generation and verification.
Certainly this will need the entire pipeline of data acquisition and collection, management, visualization, simulation and modeling, neuron reconstruction, and so on, plus quite a bit of data analysis. But such a system, as far as I know, has not been successfully implemented yet. We've had this conceptual idea for a while, but even for ourselves, I still feel we haven't been able to put the modeling and simulation, which is a critical step, into the full loop yet. Why? Because when you really dig into the problem, you find that for many modelers, the actual models they work on are very, very different from the data coming directly out of the experiment. There are a lot of assumptions and a lot of parameter adjusting that prevent us from directly including such models in the experiment. That's probably the major reason why this hasn't been done yet, as far as I know.

So I guess the question was whether we can put all these things together in the tools. Well, this community has been very good about sharing, and there are huge open source communities beyond us, and I think that's what's driving a lot of this. Many of us have talked about GPUs; those were driven by the video game industry. And you were talking about Pokémon Go, augmented reality, smartphone technology, and cloud computing. So my advice is to keep watching the trends, keep leveraging what the community is doing, and keep sharing. I think we're going to get there very soon, because the tools are falling into place: we're developing some, but a lot of people are developing them for a lot of other reasons. That's what we've been trying to do: keep our finger on the pulse, and rather than develop things in-house, use as much shared work as possible.
And I think all of us have been very good about sharing our data and our models, and that's key to getting there, too.

Let's make sure everyone has had their say. Esther, do you have any closing thoughts?

I'm not that much into visualization, but since I do analysis of brain networks, I do face visualization problems. My networks are quite big, and when we visualize in 2D, what we all tend to do is apply a threshold, forget about the discarded connections, and try to figure out what we can see from the 2D images alone. So I think virtual reality will make a big difference in this sort of analysis. I always think the ideal would be for network visualization to work like Google Maps: from far away you see only the most relevant things, and as you get closer you see the little connections that we currently just threshold away. There is a risk that we lose this information, so I think virtual reality will bring a big improvement to this kind of visualization.

So probably multi-scale is a good term for both visualization and modeling; Klaus has been working on this for many years. I think we're closing almost on time, and I know Matthew has announcements about a slight change in the schedule and two more sessions. I would like to thank all the participants of this panel and the speakers from this morning.
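As a footnote to the point about thresholding, how much of a network a 2D view silently discards can be made concrete in a few lines of Python. This is a toy sketch on a synthetic connectivity matrix; the matrix size, weight distribution, and threshold values are arbitrary assumptions, not taken from any dataset discussed in the session:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # hypothetical number of brain regions

# Synthetic symmetric "connectivity" matrix standing in for real data.
A = np.abs(rng.normal(size=(n, n)))
A = (A + A.T) / 2.0
np.fill_diagonal(A, 0.0)

def edges_surviving(A, threshold):
    """Count undirected edges whose weight exceeds the threshold."""
    return int(np.sum(np.triu(A, k=1) > threshold))

total = edges_surviving(A, 0.0)  # every off-diagonal pair
for t in (0.5, 1.0, 1.5, 2.0):
    kept = edges_surviving(A, t)
    print(f"threshold {t}: kept {kept}/{total} edges "
          f"({100.0 * kept / total:.1f}%)")
```

Each higher threshold drops another tranche of weak connections; a multi-scale, Google Maps-style view would let you zoom into those connections instead of discarding them up front.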