So good afternoon everybody. I'm Marwan Abdellah, and I'm here today to talk a little bit about neuroscience. Just a little bit of it; I won't go that far. I'll focus on the computer graphics and visualization aspects, and demonstrate how we used Blender to build some add-ons that have been used in neuroscientific applications.

So, I'm Marwan Abdellah. I'm a biomedical engineer by training, graduated from Cairo University, and I got my PhD in neuroscience from EPFL, with the Blue Brain Project. That's me 10 years ago, and that's me right now: lost a bit of hair, gained some weight, so it's okay. I now work as a research engineer and visualization specialist at the Blue Brain Project.

To say a bit more about the Blue Brain Project: it's a Swiss project that aims to digitally reconstruct 3D models of the brain, and then integrate these models together to perform a simulation of the mouse brain. This is an example of a three-dimensional model of a rat brain, and what we want to do is get deeper into it and understand every teeny tiny aspect of it. For example, this is a mouse brain simulation: the whole thing has been completely digitally reconstructed and simulated on a supercomputer. Every individual object in this slice is a neuron, and when we perform the simulation we expect to see some results. So we have to make sure that the morphologies of the neurons, their shapes, are correct, because morphology impacts function.

Why didn't we just start with the human brain, some people might ask? Basically because the human brain is very, very complicated: it has around 86 billion neurons, around 85 billion non-neuronal cells as well, and about 650 kilometers of blood vessels. This is very complicated, yes.
So why don't we start with something that has a similar structure but is smaller in size? So we go to the mouse brain: it has around 80 million neurons, a similar figure for the non-neuronal cells, and just a small piece of vasculature network, around 300 meters.

What happens in reality is that we image the mouse brain using different microscopy techniques, like optical microscopes or electron microscopes, and then we reconstruct models from those images. This takes us to the NGV: the neuro-glia-vasculature ensemble. We get an EM microscopy stack, we scan it, and we segment the different objects until we end up building these models. In reality this is very complicated; just doing this could take up to several months, so we build these models on a supercomputer.

If we remove the clutter, because there are different types of cells, we just need to focus here on neurons, astrocytes and blood vessels. I assume people might know what a neuron is. The blood vessels carry the blood, and the astrocyte is, roughly, the cell that delivers energy from the blood vessel to the neuron. That's about all the biology we need to know.

This is how it looks in reality, and this is how we reconstruct it in silico. We have a model for the astrocyte, which we synthesize; we have a model for the neuron; and we have a network of blood vessels. We would like to visualize these different models, whether they come from the lab or we build them digitally using some tools. Trying to do that with existing visualization tools might not be that easy.
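As a side note on the data involved: reconstructed neuronal morphologies like these are commonly exchanged as SWC files, plain-text lists of point samples where each sample stores a type, a 3D position, a radius and the index of its parent sample. Below is a minimal, generic sketch of reading such data; the function and field names are illustrative, not the add-ons' actual loader API.

```python
# Minimal SWC-style morphology reader (a generic sketch, not the
# add-ons' real loader). Each SWC data line has the form:
#   index type x y z radius parent   (parent == -1 for the root)
def parse_swc(text):
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        i, t, x, y, z, r, p = line.split()
        samples[int(i)] = {
            'type': int(t),  # 1 soma, 2 axon, 3 basal, 4 apical dendrite
            'point': (float(x), float(y), float(z)),
            'radius': float(r),
            'parent': int(p),
        }
    return samples

# A tiny hand-written example: a soma and a short dendrite that bifurcates.
example = """
# soma, then a dendrite that splits into two branches
1 1 0 0 0 5.0 -1
2 3 0 10 0 1.0 1
3 3 0 20 0 0.9 2
4 3 5 25 0 0.8 3
5 3 -5 25 0 0.8 3
"""
morphology = parse_swc(example)
```

The parent indices are what make everything downstream possible: they turn a flat list of points into a tree that can be traversed, filtered and meshed.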
So we built two add-ons, based completely on Blender, that allow scientists to perform all these different tasks: morphology visualization, morphology analysis, visual analytics, editing and repair, even what's called soma reconstruction (we'll get to that later), reconstructing meshes for simulation, and various other tasks.

Let's start with neurons. This is basically a 3D slice of a neuron as reconstructed by an optical microscope; this is what we get from the lab. Okay, it looks nice, but on its own it makes no sense, so we have to build tools that let us visualize it in every detail. In NeuroMorphoVis we implemented different sets of visualization modes for these neurons, and this is how we do it. We have the soma here, which is the cell body, and then some samples, vertices representing the arborization of the neuron; this is one visualization mode. If we connect the samples together we get segments, and if we connect the segments between two branching points we get sections. As you can see here, there might be edges, or gaps, between sections, so we do something called articulated sections: we insert spheres between the different branches. Finally, we connect all the pieces together to get what's called the arbor representation. So we can visualize the neuron completely, as we would like to see it in 3D, and neuroscientists can understand what kind of information it carries.

In fact, that was just a simplified view; this is what we can actually get in the lab. Neurons are characterized by very low occupancy and an extremely large extent, so visualizing an entire neuron that way is not easy.
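The progression above, from raw samples to segments to sections, can be sketched with a plain parent map. This is an illustrative toy under my own naming, assuming nothing about the add-on's internals:

```python
# Toy morphology as a parent map: sample id -> parent id,
# with -1 marking the soma (root).
# The tree here: soma 1 -> 2 -> 3, which bifurcates into 4 and 5.
parents = {1: -1, 2: 1, 3: 2, 4: 3, 5: 3}

def build_segments(parents):
    """Segments mode: connect every sample to its parent sample."""
    return [(p, c) for c, p in parents.items() if p != -1]

def build_sections(parents):
    """Sections mode: maximal unbranched chains, split at the soma
    and at every branching (bifurcation) point."""
    children = {i: [] for i in parents}
    root = None
    for c, p in parents.items():
        if p == -1:
            root = c
        else:
            children[p].append(c)
    # A section starts right after the soma, or right after a branch point.
    starts = [c for c, p in parents.items()
              if p == root or (p != -1 and len(children[p]) > 1)]
    sections = []
    for start in starts:
        chain = [start]
        while len(children[chain[-1]]) == 1:  # follow until branch/terminal
            chain.append(children[chain[-1]][0])
        sections.append(chain)
    return sections
```

The articulated mode described in the talk would then, in this sketch, amount to placing a sphere at the endpoints of each such chain to hide the gaps between sections.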
So we added several utilities to change the different parameters of the neuron, so we can visualize and then analyze it. Here, for example, we change the radii of the neuron to different values: if we want to see everything, okay, we can, but if we want to focus on a certain feature, we can load the default view and analyze the neuron the way we want.

Neurons also have different types of arbors, called axons, apical dendrites and basal dendrites. Every type of arbor, or branch, is color-coded, so if we want to focus on one specific type of branch, neuroscientists can simply tick or untick it. They can also change what we call the branching order, to visualize the neuron only up to a certain branching order, because in certain cases, for example, axons are not needed and we want to focus on the dendrites. Okay, we have the option to do it that way.

We also added other utilities to clean up the neuron. This is what we get in the lab, and you can see there are some bumpy artifacts along the branches. We can filter them out with what's called the tapered mode, so the middle of the branches no longer has any bumpy artifacts. Moreover, we can synthetically add these bumpy artifacts back, and make zigzag structures. Someone might ask what we use this visualization mode for. It's actually very important: this is what you get in the lab, in vitro, and this is what we simulate in the computer. What we do with it is build machine learning networks to automatically segment these models from the optical microscopy stacks. So we add some noise in the
rendering, so we can get a view with similar structural characteristics, and build our machine learning networks on top of it.

Okay, that was visualization; what about analysis? We used the Blender user interface and integrated a small module in our add-on, so once you load the neuron you can analyze it completely and use the interface to display all its morphometric characteristics. Moreover, you can export the results into matplotlib figures, and in fact use the same color palette to visualize the neuron in the same way. So you have a complete structure being analyzed, plus one PDF, one file, that contains all the results in one go. This is something neuroscientists really appreciate: they want to see the structure and, alongside it, the analysis results, so they can visually match what they see.

The next thing is soma reconstruction. As we said, the soma is the cell body. By default, most neuroscientific applications that visualize neurons use what they call a symbolic approximation, either a sphere or cylinders, to represent the soma, because it's really hard to reconstruct the three-dimensional profile of the soma. This is how a neuron is actually loaded, and you can see the soma is represented only by a sphere, which is not correct. So what we do is simply ignore that sphere, take the initial points of all the arborizations, or trees, and use Blender to reconstruct a faithful, or let's say approximate, representation, for example using metaballs. Okay, it's nice, it's starting to get more realistic. However, if we use soft-body objects and physics simulation, we end up with something like this, which is much more realistic than just a sphere. This is soft-body physics simulation
to reconstruct a realistic 3D profile, even without any data from the lab representing the soma. So this is the physics simulation, and we have generated somata, the cell bodies, for multiple types of neurons; they all look realistic, and we have published several papers using these modules to demonstrate the results.

Okay, we have morphologies, but we also need meshes, for several reasons: meshes can be used for visual analytics, for visualizing simulations, or for the simulations themselves. This is what we get from the lab, and this is what we need. To reconstruct the meshes, we used different types of mesh reconstruction techniques in Blender, so we can build meshes that perfectly fit our needs.

The first method is basically polyline, or spline, meshing: we take all the arbors, convert the splines into meshes, and voilà, we have a mesh; here the soma is generated using metaballs. It's very efficient, has low tessellation, and is optimal for large-scale visualization. Perfect. But these meshes are not watertight, which is quite understandable, because you can clearly see the intersections between the different branches of the arbors. So they cannot be used with transparency. They can be used for visualizing a huge slice with thousands or hundreds of thousands of neurons, but not for single-model visualization with transparency.

We then used union operators: it's quite easy to take all the arbors and apply union operators between the different objects, and we end up with something nice like this. It's less efficient, due to the application of the union operator, and the branching geometry is still quite limited. Okay, it's better than nothing, but that's what pushed us to move to the next technique, which is using the
skinning modifiers. Skinning modifiers are great: they have optimal topology, and they can even be used with transparency, because they produce very nice branching geometry. However, the branching might fail in certain complex topologies, like this one, and we end up with open geometry. So it's not perfect, but it's still very nice and gives good results 99% of the time; only in certain cases does it fail.

Then we also used metaball meshing. Okay, it can give you one single manifold, but it might not be usable for simulation, because the resulting mesh might not be watertight. Recently, voxelization-based remeshing was integrated into Blender, so we managed to use voxelization-based remeshing to generate a watertight mesh that can be used for simulations. It has high tessellation, but that's still okay; it's perfect for us.

Okay, for synapses, we would like to visualize the connections between one neuron and the neurons it connects to. Using icospheres or UV spheres to represent this huge number of connections is really tough, so we used a particle system, or point clouds, and in one go we can create this rendering, this scene, in just a few milliseconds. That's one advantage of using the particle system here.

Okay, what about astrocytes? Astrocytes are quite similar to neurons, except that they have something called endfeet. An endfoot is basically the structure that wraps around a blood vessel to deliver energy from the blood vessels to the neurons. We integrated several visual analytics tools, so we can use flat shaders, for example, and apply different visualization kernels to analyze the astrocytes within Blender. It's then very easy to get at the different kinds of information neuroscientists would like to see. We also implemented the same meshing techniques we discussed before, even the watertight ones, which can be used
to perform reaction-diffusion simulations. We can then visualize those simulations in Blender, and use the models to verify and validate how the astrocytes are wrapped around the blood vessels.

The last thing is the vasculature, which is basically a different add-on, because neurons are different from blood vessels. Blood vessels actually have a fairly similar structure, but they form cyclic graphs, unlike neurons, which have acyclic ones. The other main difference is scale: a piece of vasculature might be quite small, or it can extend across an entire mouse brain. When you check the statistics here, you can see we can go from just 600 samples to 2 million samples, so it's a different problem. What we did is pack everything into one single object and visualize it entirely in one go, which actually makes it possible to perform these operations more or less in real time.

We also applied the same analysis to the blood vessels. However, some neuroscientists requested a feature: they needed to figure out the alignment of the segments along the X, Y and Z axes. So we easily integrated just one kernel, and the kernel is applied automatically to visualize the alignment in every direction, color-mapping each segment according to its direction.

We also implemented resampling filters. When we reconstruct some vasculature, skeletonization artifacts leave us with a mess, so when we try to do the visual analysis it's not that easy; we need to clean the dataset. So we implemented some resampling filters and visualize the result immediately, and you can clearly see the difference: we get a smooth surface here, and even meshing becomes much easier, because without a clean sampling pattern the meshes might overlap and end up self-intersecting. So we
have also used VessMorphoVis to visualize simulations. With this color mapping, some scientists want to see how the diameters, or radii, of the blood vessels change over time, in order to validate their modeling algorithms. This kind of animation actually makes it easier than reading a static color map, so we can also visualize dynamic datasets. And the last thing is, of course, meshing: we can generate vasculature meshes using the same techniques we used for the neurons. However, here you can see that metaballs, for example, might not be that good, because they still produce these blobby structures, so in the image below we used skinning-based modifiers, and that gives us very, very nice results.

In summary, just to make sure we all got it: we used different tools and options in Blender to build two add-ons for neuroscientists. As a visualization engineer, I would say it's really hard to build a pipeline that covers modeling, analysis and rendering. Here we use different kinds of shading; we use the Workbench renderer, Eevee and Cycles, and scientists just have to choose which kind of shading they'd like. Putting this entire pipeline together in a very short time, to end up with a product like this, would have been extremely hard without Blender. So with that, I'd like to thank Ton and the Blender Foundation for giving us such a piece of art, which we have used in our scientific pipeline to do tasks that would have been hard without Blender. And this is a video demonstrating how we use Blender here to do everything: visualizing morphologies, rendering them, loading spines, visualizing simulations, creating 360s, creating progressive renders. So yeah, that's it. Finally, I'd like to thank my colleagues, who helped me
a lot with the development, the testing, the ideas and tons of other things, to end up with these two products. Thank you. The two add-ons are GPL, so they are available on GitHub; there are two wikis with detailed documentation, and here is my email, so feel free to contact me if you want to know more about the tools. Thank you very much.