Hi everyone, my name is Marius Varga. I'm from Plymouth University, and I'm here today to talk to you about a virtual reality visualization of a simulated tadpole spinal cord. The talk is divided into two parts. In the first part I'll cover the models we used as the basis for the virtual reality visualization, and in the second part I'll talk about the visualization itself: why we chose virtual reality, what its benefits are, and how it enhances understanding through interactivity and contextualized information.

Our main protagonist here is the tadpole. This particular specimen is two days old and about five millimetres long. The reason we chose it is that, even though it has thousands of neurons in its network, the model in this simulation uses only seven neuron types, so it's relatively simple compared to other creatures. That makes it an ideal candidate.

At this young age the tadpole has two typical behaviours. One of them is swimming, as a response to touch or any kind of interaction with the skin. The skin is innervated and connected to the neurons, and that initial touch creates a swimming motion. It isn't really deliberate behaviour, because the tadpole can't see at this age: it will simply swim until it hits an obstacle or reaches the surface of the water. Right at the bottom, probably around here, it has a cement gland, and the tadpole will attach itself to the surface with it. So it's more like a reaction, a reflex. The second behaviour, struggling, is a response to predators attacking the tadpole; it usually responds to pinching movements.
In lab experiments, trying to grab the tadpole creates this struggling behaviour, in which it effectively swims backwards with a higher amplitude and lower frequency.

This is a cross-sectional representation of the spinal cord. Like I said earlier, it has a few thousand neurons, but look at the way they're situated. The yellow section here contains the Rohon-Beard neurons, the sensory neurons connected to the skin nerves. They behave like an on-off switch: any touch triggers the initial spike, and that spike propagates the swimming pattern through the neurons that sit along the sides of the spinal cord. You can't quite see it in this image, but the other neuron types are arranged all around the walls of the spinal cord, and right at the bottom, next to the floor plate, we have the motor neurons. You'll see that again in a later slide.

There are two different types of model. The first is the growth model, which was used to generate the neural network; the second is the functional model. What we see here is the growth model. It works in 2D, growing the axons over x and y values, and here's how that was achieved. Going back to the previous slide: the dotted line at the top shows how the spinal cord was dissected longitudinally, and the two halves were opened out like two sheets of paper, if you like. Based on that shape we could generate a growth model for the spinal cord: we get the axons growing, and the growth model simulates the growth itself according to chemical gradients.
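The gradient-driven growth just described can be sketched very simply: each axon is a 2D polyline that, at every step, follows the direction suggested by a chemical gradient plus some noise. Everything in this sketch (the gradient function, step length, noise level) is an illustrative assumption, not the actual model from the talk.

```python
import math
import random

def gradient_direction(x, y):
    """Illustrative chemical gradient: axons are pulled longitudinally
    (along x) and drawn back toward a target dorso-ventral position y=0."""
    return math.atan2(-0.1 * y, 1.0)

def grow_axon(x0, y0, steps=200, step_len=1.0, noise=0.2, seed=0):
    """Grow one axon as a 2D polyline: each step follows the local
    gradient direction plus a small random wiggle."""
    rng = random.Random(seed)
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        theta = gradient_direction(x, y) + rng.gauss(0.0, noise)
        x += step_len * math.cos(theta)
        y += step_len * math.sin(theta)
        pts.append((x, y))
    return pts

# One axon starting 30 units off the target track: it drifts along x
# while being pulled toward y = 0.
axon = grow_axon(0.0, 30.0)
```

In the real model the grown axons are then tested against dendrite positions to place the roughly 80,000 synapses mentioned next.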
The gradients were optimized against statistics of real axons, with data that came from Bristol University; they do a lot of research on tadpoles. The model then created connections between the neurons: where an axon met a dendrite, a synapse was added, creating a connection. In total there are about 80,000 synapses and about 1,400 neurons in the model.

Simulating the network is the job of the functional model. It uses Hodgkin-Huxley type neurons on the generated connectivity, with the membrane parameters tuned to match electrophysiological data. I don't know if you can see it on the graph on the right, but this is where the swimming pattern appears as an outcome. There's a yellow dot, which is the initial touch: the Rohon-Beard neurons are triggered, and that touch generates the swimming pattern. You can see it settle into an alternating rhythm, and the green dots are the neurons being fired on both sides of the tadpole.
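Since the talk names Hodgkin-Huxley type neurons, here is a minimal single-compartment sketch using the classic textbook squid-axon parameters and forward-Euler integration. The actual model's tuned membrane parameters and 1,400-neuron connectivity are not reproduced here; this only illustrates the neuron type.

```python
import math

def alpha_beta(V):
    """Standard Hodgkin-Huxley rate functions (V in mV)."""
    an = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    bn = 0.125 * math.exp(-(V + 65) / 80)
    am = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    bm = 4.0 * math.exp(-(V + 65) / 18)
    ah = 0.07 * math.exp(-(V + 65) / 20)
    bh = 1 / (1 + math.exp(-(V + 35) / 10))
    return an, bn, am, bm, ah, bh

def simulate_hh(I=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of one HH compartment under constant
    current injection I (uA/cm^2).  Returns the voltage trace in mV."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4            # reversal potentials, mV
    V, n, m, h = -65.0, 0.317, 0.053, 0.596     # resting state
    trace = []
    for _ in range(int(t_max / dt)):
        an, bn, am, bm, ah, bh = alpha_beta(V)
        n += dt * (an * (1 - n) - bn * n)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I - INa - IK - IL) / C
        trace.append(V)
    return trace

trace = simulate_hh()
# Count upward zero-crossings as spikes: sustained input gives tonic firing.
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0 <= b)
```

In the full model, many such compartments are coupled through the generated synapses, which is what produces the alternating left-right firing shown on the graph.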
Using this data we moved on to the visualization, and I'm going to talk about how those two models were integrated into it. Those two models generate a lot of data: first we had to grow the neural network and visualize it, and second we had to create the spiking system so we could show the activity. We chose a VR visualization because the sheer amount of data in there is very difficult to interpret otherwise.

For the visualization we recreated the spinal cord in three dimensions. As I said earlier, the axons were grown on what were, metaphorically, two sheets of paper. We put those two sheets together, bending them to recreate the spinal cord, so we got back the shape we had originally. A random variation was also added to the data to give the wall a bit of thickness, making it a slightly more accurate representation of the real axons.

All seven neuron types are visualized, colour-coded to match the initial data. Somas are represented by cubes and axons by lines. We chose this kind of representation to keep the strain on the GPU to a minimum: it's an interactive element, so we need to hold a certain frame rate, and we try to be as efficient as possible. There are no dendrites in this visualization, although we have the data for them; we're planning to introduce them in the future.

This is an actual image from the visualization itself. At the top you can probably see the Rohon-Beard neurons, the ones that provide the initial trigger for the swimming pattern, and down here at the bottom is the floor plate.
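The "two sheets of paper bent back into a tube" step amounts to a coordinate mapping from the flat growth domain to a cylinder. The sketch below follows that idea, including the random radial jitter for wall thickness, but the exact geometry, argument names, and numbers are assumptions for illustration.

```python
import math
import random

def sheet_to_cylinder(x, y, width, radius, side=+1, jitter=0.0, rng=None):
    """Map a point grown on a flat 2D sheet (x = longitudinal position,
    y in [0, width] = dorso-ventral position) onto one half of a
    cylindrical spinal cord of the given radius.  side=+1/-1 picks the
    left or right half; jitter adds random wall thickness."""
    angle = math.pi * (y / width)          # 0..pi wraps one half-cylinder
    r = radius
    if jitter and rng is not None:
        r += rng.uniform(-jitter, jitter)  # thickness of the wall
    return (x,
            side * r * math.sin(angle),    # lateral coordinate
            r * math.cos(angle))           # dorso-ventral coordinate

# A point halfway along the cord, at the dorsal edge of the sheet:
p = sheet_to_cylinder(100.0, 0.0, width=120.0, radius=40.0)
```

Applying this to every grown axon point on both sheets, with `side=+1` and `side=-1`, reassembles the tube shape that was originally cut open.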
The cells coloured green near the floor plate are the motor neurons themselves, which drive the alternation. You might also see these blue streaks of colour: those are spiking neurons being fired. They are commissural neurons, firing across the spinal cord from one side to the other to create the alternation in swimming. If you try this in VR, you can get really close to any type of neuron and just observe the activity. I'd suggest positioning yourself somewhere right in the middle, where you'll see the swimming pattern alternate on both sides. The fact that you can choose your own point of view gives you more clarity when looking at the data.

I'll just cover a bit of the technology we used. As the rendering element we use Unity 3D. Unity is a game engine, but it's also used for serious simulations, and the reason we chose it is that it's very versatile in deploying to different platforms. At the moment we only deploy to Windows, but it would be very easy to deploy to Mac or other platforms so more people can look at this type of data. The second important device we use with the visualization is the Oculus Rift, which is a stereoscopic headset.

I'm going to talk about virtual reality for a moment and where this device fits into it. Virtual reality is a medium that simulates physical presence in an imaginary environment or a real setting using technology; in a nutshell, we use technology to enhance our sense of presence in that 3D world. The main device here is the headset. Like I said earlier, it's very important for creating that sense of immersion through stereoscopic vision. There are other devices out there, but this one is pretty useful for us because it's very accessible, and so is its code. You can probably see what it comes with down here.
It has a motion sensor, a depth sensor that tracks the headset and translates all that information into the virtual environment. That's very important, because if the tracking isn't accurate enough, or if it slows the system down, you get motion sickness, which is something we try to avoid. We use 3D sounds in the environment to give people a stronger sense of immersion, so there are audio cues, like buttons being triggered, alongside the visual cues. We use head tracking, as I said, through the Rift, and at the moment we're not using any haptic devices in our visualization. It's possible in the future, and there are a few bits of kit coming onto the market, but for now we're not using any of that.

I'd just like to talk about units and scale for a second. In the parameters coming out of the initial models, one unit is one micron, and one tick of the spiking simulation is one millisecond. To give you an idea of how we translated that into Unity, I've put a comparison there. Unity normally treats one unit as one metre for physics simulation purposes, so the physics comes out more accurately. What we've done is match one unit in Unity to one micron coming from the simulation. That gives us a huge spinal cord: in the real world it would come to about 1.5 kilometres. So you'll basically be flying around inside this huge tadpole. It's not scary,
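The unit mapping is just arithmetic, but it's worth making explicit: one simulation micron becomes one Unity metre, a millionfold blow-up. Note that the 1.5 mm cord length below is inferred backwards from the "1.5 kilometres" figure and is an assumption, not a number stated in the talk.

```python
# 1 simulation unit = 1 micron; it is mapped to 1 Unity unit = 1 metre,
# so the model is effectively scaled up by a factor of one million.
def microns_to_unity_metres(um):
    """Length in simulation microns -> length in Unity scene metres."""
    return um * 1.0   # one Unity metre per simulation micron

cord_length_um = 1500.0   # assumed ~1.5 mm spinal cord, in microns
cord_in_scene_m = microns_to_unity_metres(cord_length_um)   # 1500 m, i.e. 1.5 km
```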
I promise you. The reason we use VR is the immersive element. As we know, one of the best ways of investigating anything, and even kids do this, is to pick objects up, look at them, touch them. Obviously we don't do the touching, since we have no haptic devices, but the way we use VR is to immerse the user so they can choose their own point of view in that particular world, look at elements from their own perspective, and feel present in there. We use that to maximise the effect. Scale, like I said earlier, is very important: because everything is so big, it allows you to get really close to any connection or any neural activity. We've created an exploratory system where you can go anywhere and look at anything from any point of view. I've covered the sounds already.

An additional element we have is the contextualized information. This is knowledge from the expert being given to the user of the system, whoever that is. We do have some control over the timescale of the neural firing in the visualization. We could have implemented full control, but we didn't, because this is more about people getting engaged with the system than about controlling it to examine minute differences.

To help the user explore this environment a bit more, we created a starting scene called the pond scene, where we try to contextualize the information they're going to receive.
The user starts in the pond scene, a peaceful, nature-like surrounding environment. There's a tadpole, and when we click on the tadpole we dive inside it and start to see all these elements. The reason we've done that is that, for people who have never used this kind of technology, it's a bit of a shock to go straight in, assimilating a lot of new information and dealing with the tech at the same time. So we created that initial scene to let people get used to the tech first, and then we dish out the information. This is a screenshot of the pond scene. As you can see, we played a bit with scale here as well: you're about a third the size of a human, so you're very close to the water and close to the tadpole.

Okay, so we did have some challenges; this wasn't easy. The aim was to hit 75 frames per second in VR, and considering the scene is rendered for both eyes, we effectively had to run at 150 FPS. The reason is that the refresh rate of the Rift headset is 75 hertz, and we tried to match it in order to eliminate any motion sickness. We're pushing around half a million quads in the rendered world, so we had to do some work: out of the box, Unity doesn't deal very well with all these elements. The meshes weren't optimized, the shaders were not rendering properly, and the frame rate was dropping. We had to create a chunking system and an animation system specifically for this type of data, so we could deal with it and raise the frame rate.

I'm just going to quickly talk through how we created a mesh specifically for this system.
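The frame-rate target translates into a concrete time budget, which is worth spelling out. The numbers here are the ones from the talk (75 Hz refresh, a scene drawn once per eye, about half a million quads); the derived figures just make the constraint explicit.

```python
refresh_hz = 75                         # Rift refresh rate from the talk
frame_budget_ms = 1000.0 / refresh_hz   # total time to produce one frame
per_eye_ms = frame_budget_ms / 2        # the scene is rendered once per eye
quads = 500_000                         # rough scene size from the talk
quads_per_second = quads * refresh_hz * 2   # quad throughput across both eyes
```

So each eye view has roughly 6.7 ms to render half a million quads, which is why unoptimized meshes and shaders immediately show up as dropped frames.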
Initially, the values we got for the axons came as 2D points in the world. We preserved that information and created a 3D rendering data set from the initial data. These black dots are the information that came from the original data. What we did was create a couple of vertices on either side of each point and generate the simplest shape in 3D, which is the triangle: connecting two dots with a line becomes two triangles put together, which creates a quad, to be fair. Once we had that connection between two points, there were a lot of lines in there, a lot of values, and we had to connect all these dots. We could have made everything more efficient by interpolating between some lines, but we wanted to preserve the original data, so we didn't touch it. What we decided to do instead was join all the lines into a single mesh. That made things a bit more efficient; the only problem was that the culling system wasn't as efficient as it was supposed to be, because these long meshes ran longitudinally. So we had to create a chunking system that cuts the meshes to the right size in order to improve the culling. That nearly got us there; the only remaining issue was with the rendering. The shaders weren't optimized, so we had to write some custom shaders to make the rendering more efficient.
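The mesh construction just described (two vertices per data point, two triangles per segment, then chunking long strips so culling can discard off-screen pieces) can be sketched like this. The half-width and chunk length are illustrative assumptions; the original pipeline was built inside Unity rather than in standalone code.

```python
def polyline_to_quads(points, half_width=0.5):
    """Turn a 2D axon polyline into a triangle mesh: each data point
    gets two vertices on either side, and each segment is joined as
    two triangles (one quad).  Returns (vertices, triangle indices)."""
    verts, tris = [], []
    for (x, y) in points:
        verts.append((x, y - half_width))
        verts.append((x, y + half_width))
    for i in range(len(points) - 1):
        a = 2 * i                         # first corner of this quad
        tris += [(a, a + 1, a + 2), (a + 1, a + 3, a + 2)]
    return verts, tris

def chunk_by_x(points, chunk_len):
    """Split a long longitudinal polyline into shorter pieces so the
    engine's frustum culling can skip chunks that are off screen."""
    chunks, current = [], [points[0]]
    for p in points[1:]:
        current.append(p)
        if p[0] - current[0][0] >= chunk_len:
            chunks.append(current)
            current = [p]   # share the boundary point so strips join up
    if len(current) > 1:
        chunks.append(current)
    return chunks

# Three data points -> six vertices, two quads (four triangles).
verts, tris = polyline_to_quads([(0, 0), (1, 0), (2, 1)])
```

Joining all such strips into one mesh per chunk keeps the draw-call count low while still letting the culling system reject chunks outside the view.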
We made the geometry double-sided, so you can see it from both sides, and in the end we achieved our frame rate of 75 FPS.

As for future directions: this kind of system can be used for scientific visualization in VR, giving you access to the data at very close range and allowing you to interact with that particular data. We don't have it in the demo we brought this week, but we have a version where you can interact with any neuron or axon, click on it, and get information about it. Specific to this model, we could create a physics-based muscle system and generate swimming from the neuron output, getting the muscles to contract and relax based on that information. Again, it's early days there.

In conclusion, I hope I've covered most of the elements about the VR and the models we took the data from, but if you have any questions, feel free to ask. Just one more slide: I would like to thank a lot of other people, but these are the main people who helped with the project, and most of the work was based on theirs. Thank you very much.