Hello everybody, great to see you all here. I am Sybren; I work at the Blender Institute as a Blender developer. Before this I did my PhD in crowd simulation and crowd animation at Utrecht University, and as such I was asked by Ton to chair this panel. We're talking about scientific visualization with Blender, and we have five great speakers here. We're going to talk about what they are doing, what their research is, and how they use Blender. Afterwards I would love to have a bit of a discussion about Blender in particular, or open source in general, in the academic world: what does your university do with it, how do you think it could benefit, or why maybe it's rubbish for academia? I don't know; it's always good to argue it back and forth a little.

So let me introduce everyone, in order, I think, of database ID on the website. We have Paul Melis, who is working as visualization group leader at SURFsara here in Amsterdam. We have Adam Kalisz, who is a computer scientist at Friedrich-Alexander University Erlangen-Nürnberg. We have Marwan Abdellah, who works at the Blue Brain Project at EPFL. Mike Simpson, research software engineer at Newcastle University. And then there is Petr Strakoš, researcher and developer at IT4Innovations. Thank you very much, gentlemen, thanks for being here, and I will give the word to Marwan.

Thank you. Can you hear me? Thank you very much. I'm very excited, motivated and very happy to be here at my first Blender Conference, and I'm here to show and share my experience using Blender to see neurons: how just a spark of an idea turned into, I wouldn't say a complete product, but at least an add-on that many neuroscientists have now started using to create the images they need for scientific publications, to analyze neurons, or, for artists, to generate meshes.

Let me start with this: scientific visualization of the brain a hundred years ago looked exactly like this. Way before the era of open source, this man was using his microscope to see the shapes of individual neurons with a staining technique called Golgi staining. That is how we used to see the brain a hundred years ago or more. A century later we have a spectrum of imaging technologies that let us see far more detailed structures inside the brain. This simple thing, the brain of a two-week-old rat, a three-dimensional object almost the size of the palm of your hand, turns out to be a galaxy. If we use an optical microscope, say a bright-field microscope, we can do experiments in the lab with which we can reconstruct three-dimensional, I wouldn't say models, but samples, points; if we then segment this stack we end up with something like this, which is basically a rough skeleton of a neuron. From there we define a morphology, a neuronal morphology, in which we can identify the soma, the cell body, the thing in yellow, and the different points along the morphology. We call these points samples; if we connect the samples we get segments, if we connect the segments we get sections, and connecting the sections together defines the arborization of the neuron.
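To make that terminology concrete, here is a minimal sketch, in plain Python, of the hierarchy just described, assuming the standard SWC file format the speaker mentions later in the Q&A (one sample per line: id, type, x, y, z, radius, parent id). This is only an illustration, not NeuroMorphoVis's actual code.

```python
# Minimal sketch of the sample/segment hierarchy, assuming standard SWC input.
def load_swc(path):
    """Parse an SWC morphology into {id: (type, (x, y, z), radius, parent_id)}."""
    samples = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            i, t, x, y, z, r, parent = line.split()[:7]
            samples[int(i)] = (int(t), (float(x), float(y), float(z)),
                               float(r), int(parent))
    return samples

def segments(samples):
    """Yield (child_id, parent_id) pairs; each pair is one segment.

    Chains of segments between branching points form sections, and the
    tree of sections is the arborization of the neuron."""
    for i, (_, _, _, parent) in samples.items():
        if parent != -1:  # parent -1 marks the root (soma) sample
            yield i, parent
```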
So we have many, many different morphologies of these neurons in our brains, and they are extremely complicated: heavily arborized, very long, with a huge spatial extent while being very thin. Really visualizing them is not just a matter of making a three-dimensional plot, or even a 2D plot on paper, because in the end you might see nothing. What you have to do is develop a technology, a novel method, that really gives scientists the visual appeal to see exactly what these morphologies are about. These are, for example, two common methods people might use to visualize morphologies, even in WebGL, where they label and color the different arbors, but it's still very basic.

So the approach we followed at Blue Brain was: what about using Blender? Let's just give it a try. From that spark, the idea of building NeuroMorphoVis grew into a complete plugin; I won't say just a tool, maybe a framework, because it combines several toolboxes, as I'm going to show later. In the beginning I only wanted to use Blender, as an open source tool that's easy to prototype with, because my advisor was asking how we could reconstruct realistic somata with plausible shapes. But then I integrated the other toolboxes, so we can use NeuroMorphoVis for morphology visualization and visual analytics, we can use Blender to repair and edit the morphology skeletons, we can use the different meshing APIs to reconstruct meshes, and finally we can use the tool to produce media in an automatic way.

This is the underlying architecture of NeuroMorphoVis, but I'm not going into details here. This is the interface, which is very simple; we wanted to make the experience seamless, so that a user who doesn't know Blender can still use the plugin easily. On the left side we designed the different panels, which allow the user, with simple clicks, to visualize the different labels of the morphologies, pick colors, apply different reconstruction methods, change the quality of the neuron, change radii, and so on. These are, for example, different methods for reconstructing the neuron morphology and quickly analyzing it. We can also control the geometry, changing the radii to see something at the global scope or something specific to a certain branch, and we can control the branching order. With a few simple clicks the user can do a lot.
We can also select specific branches to visualize. We can change the shape of the somata, and we can use different methods to sketch the morphologies, for example the zigzag mode here. We can generate meshes, and from those generate volumes, which gives us something like what we would see under the microscope. I have also added a simple analysis toolbox: with one single click the user can analyze all the different aspects of the morphology, not just at the global scope but for each arbor individually, as you can see here. We can also use the editing capabilities: we switch to the edit mode, and if there's an issue with some sample I can select it; we give the user the ability to navigate and repair that one mistaken sample, to recover the correct shape of the morphology.

For the somata, many people just use spheres, which is very basic, so we wanted to give the user a more realistic, plausible shape. We used the soft-body physics engine and the mass-spring models in Blender to reconstruct realistic somata that way, and we give the user the flexibility to change parameters and see the resulting soma shape for a single neuron. These are the shapes for 55 different neurons: they all start from a single sphere of varying size, and we end up with different shapes.

Then we use different meshing methods to convert this into highly plausible and realistic meshes from the samples we get from the reconstructions. We used the boolean operators, mainly the union operator, the skin modifier, and also the metaballs to reconstruct different meshes. I'm just going to show quickly how we can convert one simple morphology into a very nice mesh, the different steps: using different toolboxes from Blender, bridging, and then we smooth, apply the shader, and end up with a nice mesh like this. We have also integrated different kinds of materials into the plugin, with which we can automatically generate these nice renderings, and we have used Blender to generate highly artistic images for scientific publications. That one was by one of our colleagues; the neurons do look very nice, with just a few clicks, starting from the morphologies. And these are some renderings we have generated with the tool as well.

Just to stay on time: the tool has been open sourced, it's available on GitHub, and it has been tested on the different platforms and works fine. This is one email I would like to share, which I received from someone thanking me for developing this tool; but really this is something I have to pay forward to the Blender community, because without Blender this thing would never have existed. The API was very well documented, I managed to use it in a way that let me build this in only a few months, and I now have many users who keep saying they are very happy to use Blender and this tool in their work. So thank you very much.
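The bridging step mentioned here (and explained again in the Q&A below) can be sketched with Blender's bmesh API. The face indices are hypothetical and this is not the actual NeuroMorphoVis code, just the general mechanism: remove one face on the soma and one on the arbor, bridge the two resulting boundary loops, then smooth.

```python
# Hedged sketch of bridging an arbor onto the soma with bmesh (illustrative
# face indices; not the actual NeuroMorphoVis implementation).
import bpy
import bmesh

obj = bpy.context.active_object       # assumed: soma and arbor joined into one mesh
bm = bmesh.new()
bm.from_mesh(obj.data)
bm.faces.ensure_lookup_table()

soma_face = bm.faces[0]               # hypothetical: face where the branch meets the soma
arbor_face = bm.faces[1]              # hypothetical: end cap of the arbor tube
loop_edges = list(set(soma_face.edges) | set(arbor_face.edges))

# Delete the two faces (keeping their edges), then bridge the open loops.
bmesh.ops.delete(bm, geom=[soma_face, arbor_face], context='FACES_ONLY')
bmesh.ops.bridge_loops(bm, edges=loop_edges)

# One smoothing pass turns the result into a single smooth, curvy object.
bmesh.ops.smooth_vert(bm, verts=list(bm.verts), factor=0.5,
                      use_axis_x=True, use_axis_y=True, use_axis_z=True)

bm.to_mesh(obj.data)
bm.free()
```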
Other questions? Yes, just a simple question: I saw this was 2.79? Yes; we developed this, I started a year and a half ago, before 2.8 was announced. I'm actually working on an update for 2.8 now, but I can still say that performance-wise, and in terms of features, it's still complete; it just needs to be updated, that's it. Thank you.

I didn't really understand whether you do this in real time, this bridging, to connect them? Yes, the bridging is completely automated. I just use the Blender API to find where the branches intersect with the reconstructed soma, then I label the faces and use the edge loops, where I bridge one face on the side of the soma and one face on the side of the arbor, and it turns into something like this. Then I use one smoothing step, so it becomes one single, smooth, curvy object.

Do you run into problems with the smoothing? I always notice this behavior in Blender, that when I smooth, things get incredibly thin. It depends on the algorithm. If you use the skin modifier, for example, you might end up with something like this, but if you use something like the metaballs it will never happen. That's why we have implemented different meshing techniques: if you have a really bendy sort of shape, you can still get something that is not self-intersecting or anything like that.

Just another question: the segmentation itself, is that also part of the tool? No, no; we just get skeletons that are already segmented, because that's a huge problem in neuroscience in its own right, not something to be solved along the way. We get the skeletons in a standard file format, the SWC file format, and go from there. More questions? If not, then let's switch over to Petr.

Okay, hello everyone once again. I have already been introduced, so I will go directly to the topic. At our facility we have focused on scientific data visualization using Blender and COVISE. Just briefly, what is scientific visualization? Usually you compute some scientific phenomenon on large systems, and after the computation of such a simulation you want to somehow visualize the results; the images you see here can be considered the result of some kind of scientific data. There are quite a lot of tools already available for this kind of visualization: all of you probably know ParaView, or other tools like VisIt and VMD, and also COVISE. We decided to choose this tool because it has quite a lot of the things we care about, since we work in a supercomputing center.

So let me briefly introduce COVISE, so you have an idea of what it is and how we decided to use it. COVISE stands for COllaborative VIsualization and Simulation Environment. It is released as open source, which is quite an important fact for us; it started its development in the early nineties, and it offers users a lot of functionality: as I said, you can do simulations, post-processing of the data, and also visualization, but we decided to replace the visualization capabilities of COVISE itself with Blender.
As for the inner structure of COVISE, so we can have a better idea how it works, what the similarities to Blender are, and how it can be usefully combined with Blender: it has a user interface where you operate with lots of modules, and those modules serve as tools that process the data. You can load the data, do some processing of it, and also the visualization, but that's the part we decided to leave to Blender. What is also very important for us is that the computation done by those modules can be spread not just over local machines but also over remote machines, which is quite convenient for a supercomputer. And of course there are mechanisms that share the data from the remote workstations, so you can control the modules, set them up, and share the data from each individual module, and it doesn't matter whether it's running locally or remotely.

That was COVISE, and we decided to join the forces of COVISE and Blender: on one side Blender as a 3D studio, and COVISE as a data analyzer. We have started to create a new kind of editor in Blender, which we call the COVISE editor, and it shares the same kind of workflow as COVISE: a network of modules, or in Blender, nodes, a network of connected nodes that process the data up to the visualization. We have tried to seamlessly integrate this new editor into Blender; the status of our work is that it's a modified build of Blender, already based on version 2.8.

Here is how it looks in the Blender environment. You have a new kind of editor, the COVISE editor, in the selection of editor types, and here is the kind of structure you can build with it: a set of nodes which you interconnect. On one end you have an IO module that reads the data from some computation, here a node for reading EnSight data, and then you process the data towards the goal you want to reach, up to the last block, which turns the data into geometry in Blender.

And here is a short video of how it can work. You see an example, a Tatra car, the old one, and here is the structure of nodes which loads the computed data from a CFD simulation. You can, for example, visualize the streamlines from the computed data, and you can mix this with the functionality of Blender that is already there: use start and end nodes for the streamlines and move the points, so the streamlines can be moved to precisely cover the car, as they were computed before. You can also switch to a different type of analysis; right now there is a visualization of the pressure distribution on the car. From there you can do animations, push it further, and create nice-looking visualizations using Cycles in Blender, and so on. So basically that was what I wanted to show you; it's still work in progress. Thank you, and if you have any questions, just ask. Here are some more examples from our COVISE editor in Blender.
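The real COVISE editor is implemented in a modified C++ build of Blender, but the mechanism of adding a new node-based editor can at least be sketched from the Python side. The names below (CoviseTree, ReadDataNode) are hypothetical, a minimal illustration of the node-editor registration only.

```python
# Minimal sketch of registering a custom node-editor type in Blender 2.8x.
# Hypothetical names; the actual COVISE editor is a modified C++ build.
import bpy
from bpy.types import Node, NodeTree

class CoviseTree(NodeTree):
    """Shows up as an extra editor type, next to the shader/compositor editors."""
    bl_idname = 'CoviseTreeType'
    bl_label = 'COVISE Editor'
    bl_icon = 'NODETREE'

class ReadDataNode(Node):
    """IO module: reads a simulation result file (e.g. EnSight data)."""
    bl_idname = 'CoviseReadData'
    bl_label = 'Read Data'
    filepath: bpy.props.StringProperty(subtype='FILE_PATH')

    @classmethod
    def poll(cls, ntree):
        return ntree.bl_idname == 'CoviseTreeType'   # only usable in COVISE trees

    def init(self, context):
        # A real implementation would define its own socket types for datasets.
        self.outputs.new('NodeSocketString', 'Data')

    def draw_buttons(self, context, layout):
        layout.prop(self, 'filepath')

for cls in (CoviseTree, ReadDataNode):
    bpy.utils.register_class(cls)
```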
Hi, that's really interesting, but where do you obtain data like the car on the bottom left? As I said, we work in the national supercomputing center, and we have lots of colleagues who do computations, CFD analysis, structural analysis; they are our source of data, and hopefully they are also our users, so they can better visualize or prepare material for presentations of their computing results. So the data comes mainly from our colleagues who do CFD and other analyses. It's more the way the air moves, the aerodynamics, that is computed; the car itself, I think you can load in any kind of model, put your scientific data in, and see it. The model is not computed; it's the flow that is computed. But the flow is computed based on the model that is in there? Yes, exactly; you can use any model, but you have to compute on it first. So they compute basically everything, and then they want to visualize the results, and this is a way they can do it.

This is my colleague. Hello, I am the second guy on the first page. The data come from OpenFOAM, and it was calculated on the supercomputer; it took a very long time, a few hours on several nodes, and this is the visualization which comes from the cluster. The resulting data are very big, up to several hundred gigabytes or terabytes, and we are able to show these data directly in Blender. That's all.

Hi, hello, very interesting. I'm just wondering why you needed a custom build of Blender for this; can you in theory create it as a plugin, which would be easier for users? Yeah, it would be much easier for users, I agree, but because we need to create special data types and so on, as a first trial it was easier for us; we have C++ developers on our side, so it was easier to create a special build of Blender. And do you plan to transform it into a more conventional add-on? Me personally, I would prefer that, because I'm more on that side of developing add-ons, but we will see what the reaction to this is; if more people are interested in having such a tool, we might decide to put more effort into it and turn it into a separate add-on you can just download and use in Blender. Okay, thank you very much.

Maybe on this topic, you may also want to join the discussion tomorrow about add-on development, because I think one of the main reasons why people choose to do this in C++ or C and not in Python is speed, and when you say you have to go through a terabyte of data, yeah; so maybe there should be some plugin structure for Blender to allow fast data transfer. One comment: I like Blender because it uses Python and C++, and I don't know, who here knows as_pointer in Python? Nobody? It's great, because each object in Python has as_pointer, which is the pointer into the memory of the C++ structures, or rather the C structures. It's very, very useful. Thanks.
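For those who haven't seen it, as_pointer() is a real method on Blender's Python data wrappers; a tiny sketch of the trick just described:

```python
# Sketch of the as_pointer() trick: every bpy data-block wrapper can hand out
# the memory address of the underlying C struct, so compiled C/C++ code can
# access the data without copying it through Python.
import bpy

mesh = bpy.context.active_object.data
addr = mesh.as_pointer()   # address of Blender's internal C 'Mesh' struct
print(hex(addr))

# A C/C++ extension could cast this address to its own matching struct
# definition and read the data directly. Fast, but entirely unchecked: the
# struct layout must match the exact Blender version.
```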
Not right now; can you repeat the question? The question was whether our plugin supports dynamic data; well, not right now. So far we have been working with just one time step of data. Can you shout your question? It's going to be a little bit rude considering I'm at a Blender conference, but have you considered other software, for example Houdini, for this task? Houdini has a lot more visualization options and a node structure which allows more freedom in what and how you want to visualize. Isn't Houdini paid software? Houdini has Python support. Well, we simply wanted to use totally open source software, because we are a national organization and we are at a university, so that's why. Thank you very much.

So now up is Mike Simpson; we have to do the last switch of today. Hi, so as I said, my name is Mike Simpson and I am a research software engineer from Newcastle University. The team works with various partners across the entire university, but my main job is to work with Nick Holliman, who is the professor of visualization at Newcastle, and I'm working on a number of projects with him focused around data visualization, obviously, and Blender is one of the main tools we've been using for that. So what I thought I would do today is show you three examples of the visualizations we're currently working on using Blender, to give you an idea of some of those projects.

First, a little background. We work with a group called the Urban Observatory at Newcastle, and they've got access to hundreds of sensors all across the city, measuring things like temperature, rainfall, air quality and that sort of thing. We've been working with them for a while; we've got a 3D model of the city that we purchased from our colleagues at Northumbria University, and what we're trying to do is display the Urban Observatory data in an interesting and engaging way. We did some initial visualizations like this one, where we've taken the 3D model, treated it as a map and overlaid the sensor data on top of it; that's a basic example from one of the earliest projects we did in Blender. You'll gather this is a slide from a slightly longer presentation, but we basically do as much as we can in an automated way; most of it is done with Python scripts. We haven't got to the point of writing plugins yet, but that is something we're working on further down the line. It's running these scripts to process the data we've downloaded from the Urban Observatory and generate these visualizations automatically.

The first thing I want to talk about is a video called Living Cities that we made, which is now on YouTube, I think; it's certainly on YouTube, but it hasn't been made fully public yet. What we were trying to do was use data from the Urban Observatory to tell data stories in an informative and engaging way. In this case we looked at what happens when they run the Pride march in Newcastle every year: they close down quite a lot of the roads in the city centre, and what effect does that have on air quality? Obviously when you get rid of all the cars the air quality dramatically improves, but we wanted to show that in a slightly more interesting way. So, I'll find the video here. This has been done entirely in Blender; some of it is animated manually and some of it is generated automatically from the data source. We start off by zooming in, we zoom into the UK; I won't show you the whole video because we haven't got time. Then we zoom into the northeast, and these are images taken off the Urban Observatory website, so the images are on planes, but most of the rest is done in Blender. That's the sensor network we've got across the city, out to the coast, out to the airport. Then we talk about some of the data; we can look at trends in the data source, and one of our researchers has written code which turns the CSV data into bar graphs and things.
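A hedged sketch of that kind of script (hypothetical file name and columns, not the Newcastle code): one cube per CSV row, with the bar height encoding the sensor reading.

```python
# Illustrative only: turn CSV rows (x, y, value) into 3D bars with bpy.
import csv
import bpy

with open('air_quality.csv') as f:              # hypothetical input file
    rows = [(float(x), float(y), float(v)) for x, y, v in csv.reader(f)]

for x, y, value in rows:
    # Unit cube centred at value/2 and scaled by the reading, so the bar
    # sits on the ground plane with height == value.
    bpy.ops.mesh.primitive_cube_add(size=1.0, location=(x, y, value / 2))
    bar = bpy.context.active_object
    bar.scale = (0.4, 0.4, value)
```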
Slightly unnecessary, maybe, but we wanted to be able at some point to make this video 3D, and we thought it would have more impact if the graphs were actually three-dimensional. These look at general trends in the air quality and point out that there are several times over the course of just one day where pollution limits are exceeded in our city. Then we zoom in and finally get to the 3D model; we can overlay some data on top of that, I'll show you. We zoom in to focus on the Pride area, the area where the march runs, which is this route here. There's a slightly motion-sickness-inducing section where we fly around the city, which I won't show you because it gets quite scary on a big screen. And then what we're trying to do is overlay some data; as I say, some of this has been done by hand, but we're looking at automating bits like this, so the sensor locations are all generated automatically from CSV data. Then we've tried to show visually how the air quality changes over time: we've got these circles around the glyphs which get bigger and smaller as the air quality changes. You can see there's a very big dip, this is while the roads are closed for the Pride march, and then it goes back up again afterwards. We're combining that with the graph, and then we've got a few more graphs talking about the difference between the parade route and the rest of the city, and a quite cool bit at the end, where we present our conclusions, with a transition of the daylight changing over the city. So that's one example of how we use Blender to try to tell engaging stories; nothing particularly new or revolutionary, you'll have seen those sorts of things before, it's mostly standard animation stuff.

But I also wanted to show you a couple of slightly more complex projects. The first one is called Telescope; actually, I'll just show you this as well, because that's much easier than talking about it. We've got this image of the city, which again has been generated from the same data set and the same 3D model of the city, but what we've done is create an enormous image; it comes in different layers, but the base image is a million pixels along each edge. We think it's one of the largest digital images of any city in the world that has ever been produced. Why would we do that, you might ask, and that would be a very good question, but the point is that we're trying to show data about the city at different scales. So we can look at the scale of the whole city, or we can zoom in and look at the area around the campus where we work, or we can zoom right in to the level of our building, and this is the building where we work, which is why we've got access to some of the sensor data from this brand-new building. I can actually zoom in to the level of our floor, that's my office there, and I can zoom in to the point where we can see my boss's office. I don't think the data sources are connected anymore, but you used to be able to tell when people were in their office by the CO2 levels and how they changed over the course of the day; somehow the lecturers didn't like that, so it's been turned off. But you can see how we can visualize data about the city in an interactive way at a very large scale. The way we do this is by uploading it to the cloud; we use Microsoft Azure.
We upload the model, it gets sent out to a number of different nodes on the cloud, the images get rendered and then stitched back together. As I say, it's a trillion-pixel image, the equivalent of 65,000 4K images, all generated on different nodes and then stitched back together. If we use 128 nodes it takes about 24 hours to render this image, and if we do it on 1,024 nodes it takes about two hours, which gives you an idea of the scale. It costs about £5,000 every time we want to redraw this image, which sounds like a lot, and it is, but if you compare that to what it would cost to buy the equivalent hardware to sit in the building and run this in-house, it's actually quite a dramatic saving. So that's one example of a slightly more complicated project we're working on in Blender, and if anyone's interested in reading more, there's a link to that. I don't know if we can share the slides, but if you can get access to them there are links at the end. If you want some links in the description of this talk in the schedule, we can make that happen; yeah, I can put the links somewhere, and then you'll be able to read the papers which have just been published on this.

And then finally, the other project we're working on at the moment is trying to visualize uncertainty, or really, to visualize additional data inside these visualizations. We're looking at the glyphs we've designed to represent the data in the scene: a colored circle, which represents the value and changes color depending on the value, and then a white circle and a black circle. The main purpose of those is to distinguish the glyph from the background so you can read it more clearly, but we did think: could you show an additional dimension of data using that same system? So we thought we'd try to change the way the white shape works to represent, in this case, uncertainty. We came up with the idea of linking uncertainty to visual complexity. Uncertainty is a complicated thing to explain and I'm not very good at explaining it, so I won't go into too much detail, but it's a measure of how confident you are in the accuracy and reliability of your data. So we've got a system where, with a higher level of confidence, the line is straight, a very simple shape, and then the line gets more wibbly, technical term there, as you become more uncertain about the values in your data. Based on that we've created a series of glyphs which include uncertainty, with the least uncertain on the right, on the left, sorry, and the most uncertain on the right. We're running a number of tests at the moment; I think there's a paper that was published a few weeks ago, which I've included a link to at the end as well, which shows how users have responded to this when we've tried it in person, now that we've replaced some of the glyphs in the existing system.

I think that was most of what I wanted to say. As I said, we're working on turning it into a proper plugin for Blender, and we're working on integrating it with Microsoft Power BI, so you create a dashboard that calls out to Blender in the background and returns the resulting image inside Power BI, and generally we're trying to automate as much of this process as possible, including things like making sure the glyphs are always facing the camera and laid out sensibly. And that's pretty much it; that's just a couple of examples of how we use Blender for visualization at Newcastle. Thank you.
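The "wibbliness encodes uncertainty" mapping can be sketched in a few lines of Python (illustrative scaling constants, not the published glyph design): perturb the radius of the white ring with a sine wave whose amplitude grows with the uncertainty value.

```python
import math

def glyph_ring(uncertainty, radius=1.0, waves=12, points=128):
    """Outline of an uncertainty ring: 0.0 = confident (smooth circle),
    1.0 = very uncertain (very wavy). The 0.15 scaling is illustrative."""
    amplitude = 0.15 * radius * uncertainty
    outline = []
    for i in range(points):
        t = 2.0 * math.pi * i / points
        r = radius + amplitude * math.sin(waves * t)
        outline.append((r * math.cos(t), r * math.sin(t)))
    return outline
```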
Yeah, I had a question: what stands out is that the glyphs all look like vector images; are they vector graphics? That is a very good question. I think the geometry is generated in R, but they are three-dimensional objects, sort of made of cylinders; I can't remember how the white shape is generated. My question, then, is: you're relying on this very large rendered pixel image; would it be easier to use vectors for the visualization? Is there a reason it's pixel data instead of vectors? Good question, I'm not sure what the answer is. I've sort of been brought onto this; it was an existing project, so I'm not sure what the origins were. I think we've got this 3D model of the city and we're trying to use it for interesting and cool things. No more questions? Then you have a question: where does the uncertainty come from, is it from the sensors themselves? So, we had some discussion with the scientists who run the Urban Observatory, and they said one of the ways they measure the reliability of the sensors is this: some of them are brand-new, very expensive, high-quality sensors, and some of them are cheaper, or just getting towards the end of their life, and apparently as some sensors approach the end of their life they produce a bigger variance in their results. So in this example we're using the variance in the data coming out of the sensor, which suggests that the one on the corner of the pointy building, for example, might be unreliable or at the end of its life. In this case it's variance, but we could use any measure of uncertainty, depending on the application area. And can we find this bar-graph drawing code somewhere online? Not yet; it's sort of on GitHub, but we're in the process of open sourcing it and trying to fix some bugs, so it will be available at some point, we hope. Great, thank you. So then I give the word to Adam.

Thank you. I need to switch to the presentation. Thanks. Okay, hello everybody, my name is Adam Kalisz, and in the next ten minutes I would like to introduce how I am using Blender for my research. The slides, by the way, are online, so you can follow along. I will start with a brief introduction, then I will give you some examples of visual SLAM, sensor data fusion and machine learning, and I will conclude my presentation with some, yeah, controversial statements. I'm currently a PhD student at the Friedrich-Alexander University in Erlangen, Germany, mainly focusing on monocular visual SLAM and sensor data fusion. This is what we do at the department at the university: we focus our research on the localization and navigation of autonomous robots. You can see in the video some projects from our students, where we also use machine learning to localize objects in the 2D image and then place them into a 3D map of the environment. I will not show the whole video, because you can see the slides online. In 2017 I gave a talk about how Blender is being used at two universities in Germany, and at the end of that talk I mentioned that I would start my master's thesis on 3D reconstruction based on machine learning.
I also said that I wanted to integrate it into Blender if somehow possible. It turned out that the topic changed to something quite similar, which is the fusion of visual SLAM and GPS, and that is what my talk is about today. So the question is: what is visual SLAM? Visual SLAM, as you can see in this video, is an algorithm by which you can take a camera, localize that camera as it moves, and simultaneously build a map. SLAM stands for simultaneous localization and mapping, and visual SLAM is the version where you use cameras to do it. Many of you might know that we already have this in Blender: it's called the motion tracker. Usually you start by tracking features: you pick specific pixels in your image and then try to find correspondences across the image sequence. This approach abstracts the image into feature space: you basically get rid of all the pixels and only keep those you have tracked. Then, with two cameras, the red dots visualized here are the feature correspondences, your tracked features, and there are algorithms that can estimate the camera transformation, how we get from one camera to the other, and also estimate the 3D positions of those feature points, visualized here in blue. What Blender does internally is project this estimate back into the camera, that's the green dot, and then try to minimize the geometric error between those feature points; this is also what you see as the solve error in Blender. Minimizing this error is one method, and what you usually get from it is a very sparse point cloud: only the features you picked are tracked. The sequence before was from Hollywood Camera Work; it's a standard example for integrating objects into your scene. The other method I have investigated, as opposed to this indirect method, is the so-called direct method. The direct method does not look at the geometric error of feature points; it looks at the so-called photometric error. Photometric error means you have two images with intensities, basically the brightness of each pixel, and you try to find an estimate of the camera and the 3D structure such that those intensities match, so that their difference goes to zero. Those are the two methods that can be used, but neither of them is perfect.
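In standard textbook notation the two error terms look roughly as follows (these are the common formulations, not necessarily the exact cost functions of Blender's solver or of any specific SLAM system), with camera intrinsics $K$, pose $(R, t)$, pinhole projection $\pi$, tracked features $x_i$ with estimated 3D points $X_i$, and images $I_1, I_2$ with per-pixel depth $d_p$:

```latex
% Indirect (geometric) method: reprojection residual of the 3D points.
E_{\text{geom}}(R,t,\{X_i\}) = \sum_i \left\| x_i - \pi\!\big(K\,(R X_i + t)\big) \right\|^2

% Direct (photometric) method: intensity residual under the estimated motion.
E_{\text{photo}}(R,t) = \sum_p \left\| I_2\!\Big(\pi\!\big(K\,(R\,\pi^{-1}(p, d_p) + t)\big)\Big) - I_1(p) \right\|^2
```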
In my research I investigate what influences the robustness of such systems, and for that I used Blender to generate the data sets you can see here. There's a trajectory describing how I move the camera through the scene, and because of this I know my ground truth, my reference trajectory, so I can compare against it later. What I found, for example, is that if I take the camera and only translate it, without rotating it in turns, then the reconstruction is quite good, as you can see; it's quite close to the reference. But if you take the camera and also rotate it, you get a result that suffers from drift. Investigating how those factors influence your estimate is what I find quite interesting. To solve those issues we can usually use an approach called sensor data fusion. We already heard that there are several sensors we can use, and the same is true of sensors that can measure our localization; such a sensor could be GPS, for example. Here in Blender I have an example implementation of how sensor data fusion could work: in black you have your reference trajectory, how the camera is animated in Blender; in red you have your GPS measurements, for which you know the uncertainty; in light blue you have the trajectory estimated by the visual SLAM algorithm; and in green you have the outcome of the sensor data fusion, also visualized with the uncertainty of your position, which in this case extends vertically, because we only measure x and y as the position and not the vertical component. It is quite interesting to investigate the results this way, and we can visualize them directly in Blender; Blender can also automatically create plots for us that we can use in our research later.

As the last example I want to quickly talk about machine learning. Machine learning is an interesting way to let robots learn how to understand the environment, and I recently faced an issue with it, because you usually need a lot of training data; what do you do for objects where not much training data is available? Let's take, for example, the Suzanne that we all know, and say we want to classify this Suzanne. The tool first visualizes what your bounding boxes are, that is, the classification and detection output a perfect detector would produce, and then Blender can automatically create the data set for you by varying the factors whose influence on your machine-learning outcome you want to analyze. This is an overview of what can be generated; of course, as all of you know, we can make a lot with Blender.
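A hedged sketch of that kind of data generation with the Blender Python API (hypothetical object and file names, not Adam's actual tool): project the object's bounding box into the camera to get a 2D ground-truth box, render the frame, and store both. Varying lighting, pose and materials between iterations of this loop gives an arbitrarily large labelled set.

```python
# Illustrative synthetic-dataset step: ground-truth 2D box + rendered image.
import bpy
from mathutils import Vector
from bpy_extras.object_utils import world_to_camera_view

scene = bpy.context.scene
cam = scene.camera
obj = bpy.data.objects['Suzanne']        # hypothetical training object

# Project the 8 corners of the bounding box into normalized image coordinates.
pts = [world_to_camera_view(scene, cam, obj.matrix_world @ Vector(c))
       for c in obj.bound_box]
xs = [p.x for p in pts]
ys = [p.y for p in pts]
bbox = (min(xs), min(ys), max(xs), max(ys))

scene.render.filepath = '//train_0001.png'
bpy.ops.render.render(write_still=True)
print('suzanne', bbox)                   # label to store next to the image
```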
About my conclusions: first, I think it is interesting to consider integrating the direct SLAM methods into Blender, because direct methods can reconstruct not only corners and blobs but also edges, for example, and with those we get a richer point cloud to use when integrating objects into a video. We should also combine ideas from the rendering world, image synthesis, and the image-analysis, computer-vision world: if we could integrate them very tightly, so that, for example, I render an image and then directly use that rendered image to analyze and reconstruct the 3D world, we could provide a pipeline that works, and I think that would be very interesting. Last but not least, synthetic data: I think it's a very good way to evaluate algorithms and to train them, although of course many people think we should not use synthetic data for those tasks. Before I end: everyone who's in Germany, in Nuremberg, is welcome to visit our Blender user group. Okay, thank you.

Any questions? Someone was wondering: why would you not want to use synthetic data, since it's the only thing you can really control and measure, right? That's true, that's also what I think, but synthetic data is very idealized. Of course you can create realistic renderings, but all of those are still an approximation to our real world. If we can manage to create very photorealistic results, then of course that's the way to go, I think. I suppose you use synthetic data to test everything? Yes, but also to train, as you saw in the machine learning example, and training with synthetic data is not the same as capturing the real world; no, that's the problem, yeah. Are there any studies on the effect of training a neural network with synthetic data and then applying it to the real world? We are currently about to publish results, but there are already results, also ones where Blender was used; I cannot cite the papers off the top of my head, but such works exist. I wanted to ask a similar question: is there some comparison between the quality of training on real data versus simulated data, how do the results compare? Yes, this is also in those papers. In our research we found that if we train only on synthetic data we get up to a mean average precision (mAP) of about 0.4, so roughly 40 percent, in our examples. The problem is that if you don't have a lot of training data, you usually also don't have a lot of data to evaluate on, because you want to evaluate your models on real data, of course, so it's difficult to say. I will repeat the question here: do we use supervised or unsupervised methods? I personally use supervised methods at the moment; unsupervised methods are also interesting, and also semi-supervised. The difference, for everyone: supervised means you know your reference and train on that; unsupervised means you usually use other measures for comparison, so not directly the bounding boxes, for example, but other data you can compare against. Thank you very much.

Thank you. Yes, the air conditioning has been turned on, so it should cool down a little bit. They do have a pumpkin for Halloween already, but we can keep this one open and live with the noise for a bit, right? Is this okay for you, can you still understand me over the noise? Good.

So, great, that means the last thirty minutes are for me. Not really, I won't. I won't introduce myself again; I just wanted to say two things. One is that the place I work at, SURFsara here in Amsterdam, is more or less a high-performance computing center, although that's a little too simple, we do a bit more than that. The people my group works for are mostly scientists, people doing data analysis, things like that. Recently we remodeled our office, lots of glass walls everywhere, and of course they asked the visualization guys to provide some visuals, so we fired up Blender, took some data sets, and what you see over there is all made in Blender, high-resolution renders. It looks really nice and I think everybody's happy with it.

One of the things my group does is a course where we basically allow researchers and scientists to learn Blender, see how they can load their data sets into Blender, make nice renders, et cetera. Within Europe there's a lot of interest in this, so we try to run the course at least twice a year. But one of the questions we always get is: how do we use volumetric data in Blender? Because the current support doesn't appear to be optimal, and lots of scientific data is volumetric. There are different types of volumetric grids: grids with varying spacing, like the top one, grids that adapt their resolution to the thing you're simulating, hierarchical grids, et cetera. But it appears that Cycles is not up to all of those types of data.
I think the regular grid fits, but then loading that data isn't clear either: how do you do that, is there a common format for it? We're not entirely sure; we've done some testing and it doesn't seem to be easy. And we understand that volumetric rendering might not have the highest priority in Blender development; there are all kinds of topics they want to work on, and that's fine. By the way, this happened yesterday evening on Twitter: apparently Stefan Werner is suddenly going to have a talk in the schedule about volume rendering in Cycles, which is going to be interesting; it wasn't there yesterday.

So in my quest to see whether we can fix this somehow and get better volume rendering within Blender, I stumbled upon OSPRay. If you're into scientific visualization and know tools like ParaView or VisIt, you've probably heard of it or even played around with it. It's a C++ library from Intel, aimed at interactive rendering, so high frame rates; it doesn't generate the best images out of the box, but you can still create very nice images, like the one at the bottom. There are actually two types of renderers inside OSPRay: one is what they call the path tracer, for the beautiful stuff, and then there's the SciVis renderer, at the top, for the more functional images. It's part of the oneAPI framework, and it contains two components that I think are well known to Blender: Open Image Denoise, which is the basis of the 2.81 Denoise node, and Embree, which is also being studied to see if it can improve Cycles' performance.

So, the nice things about it: it has good volume rendering support; it has something I call scientific primitives, shown on the next slide; and, very interestingly, it allows you to render in a distributed way, on multiple nodes with multiple cores, so you can really scale up your visualization. About the scientific primitives: the nice thing is that they allow you to generate a scene from the data directly. If, for example, you have a point set, then in Blender you could put it in a particle system, model a sphere, and instance that within the particle system, but then you have explicit geometry, you still have polygons. OSPRay allows you to render an image like this with perfect spheres that are not explicit geometry: just a position and a radius, while still being able to color them and change the radii. And it supports different types of data: volumetric data, slices on volumetric data, isosurfaces, and the usual stuff like triangle meshes. So it's a good basis, interesting technology.
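In data terms the difference is easy to see; a sketch with NumPy (OSPRay itself is driven through its C API, so these arrays only illustrate what the renderer consumes):

```python
# A point set as OSPRay-style sphere primitives: flat arrays, no polygons.
import numpy as np

n = 1_000_000
centers = np.random.rand(n, 3).astype(np.float32)  # (N, 3) sphere centres
radii = np.full(n, 0.005, dtype=np.float32)        # (N,) per-sphere radius
colors = np.ones((n, 4), dtype=np.float32)         # (N, 4) RGBA per sphere

# This is essentially the whole scene description for the renderer: the
# spheres are ray-traced analytically. The Blender equivalent (a particle
# system instancing an icosphere) needs explicit polygons for every point.
```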
These days you see tools like ParaView and VisIt starting to use OSPRay; it has been included for a couple of years already, and the ParaView people call this "high quality visualization". It's different from the normal scientific visualizations: nicer materials, like the gold here, nicer lighting, et cetera. So it's interesting to see those tools stepping up towards the Blender level of visualization, basically. But the question then becomes: why would you not use those scientific visualization tools for animation and rendering, why would you want Blender? So I have a horrible slide; you'll probably agree with me. I would actually be interested in whether the panel, but also the audience, agrees with me once I start explaining this, because I think there are a whole lot of differences between Blender and scientific visualization tools. First of all, Blender is focused very much on creative usage, although that might be too narrow, as you can do very functional things in Blender; scientific visualization tools are usually about data analysis, communicating with your colleagues, et cetera. In Blender most of the data consists of things like 3D models and characters, all created for a purpose, to render an animation or create a visual, while in scientific visualization the data is a given: it's the output of a simulation, and then the visualization starts. That's a difference, I think. The scenes in SciVis are usually a bit simpler, a data set and some stuff around it, like lights; but the scientific visualization tools are not really good at helping you do the creative stuff. Just checking: do you agree, disagree, have a different opinion? Yeah? Okay, good. And one more thing: you can teach a scientist to learn Blender, we know, we've done it, but usually you cannot teach a creative person one of the scientific visualization tools; they're domain-specific and hard to get into, so even the Blender interface becomes easy compared to those things.

So that's one part. The second thing is: once you have scientific data, how do you get it into Blender, what's a good way? Personally I see two challenges. One is that when you have a large data set, you don't always want to load it into Blender completely. It doesn't really matter, for example, if you're only animating a camera; why would you need your one billion points in there? You just need a proxy. You might have volumetric data that you can't load, and you don't want all the scientific data inside your blend file, because that doesn't make any sense when you can keep it separate. The other challenge, already touched on, is that everything goes through the Python API; would a plugin system be better? Yes, I would definitely say so, because going through the Python API to Blender scene elements and then to Cycles scene elements to render is a bit much.

So my little experiment was to render something in OSPRay while controlling the camera through Blender with some scripts, and that worked really well. Then I started hacking on it for a couple of weeks, even months by now, and it has turned into this BLOSPRAY add-on: it uses OSPRay for rendering, with a client-server architecture, which I'll go into a little later, and a plugin system to load the data on the server side. About the client-server setup: you've got normal Blender with a network connection to the BLOSPRAY server, which is based on OSPRay. The nice thing is you can run it remotely: on your HPC system, close to the data, on a machine with a lot of memory and a lot of CPU, while the system that runs Blender can be light, and you can even restart them separately, which makes testing easy. It has a bit of a downside too: there's overhead, of course, as the data has to go from Blender to the server; there's latency due to the network, and if the bandwidth is low, for example on WiFi, that becomes an issue; and a file path on one machine isn't the same as on the other, so paths are a bit of a weird thing.

Then something on the plugin system. A plugin is represented within the Blender scene by a mesh, for example the cube over there; you attach the plugin to the mesh and then set a number of parameters, which are the properties you see there. When you render, all of that gets exported to the server; the server creates the OSPRay scene and lets the plugin do its thing, in this case loading the data and creating an OSPRay volume out of it, which gets added to the scene; OSPRay does the rendering and sends back the frame buffer. In this way all the data processing and all the OSPRay work happen within the server, and not in Blender.
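As a sketch of what such a server-side plugin boils down to (hypothetical names and file format; see the actual repository for the real interface): Blender only ships the proxy mesh and its properties, and the plugin turns them into renderable data.

```python
# Hypothetical server-side plugin: Blender sends only the custom properties
# set on the proxy mesh, e.g. {'file': 'windfarm.h5', 'field': 'velocity'}.
import h5py  # assumed data format for this example

def load(parameters):
    """Called by the render server when the proxy object is exported."""
    with h5py.File(parameters['file'], 'r') as f:
        grid = f[parameters['field']][:]      # full-resolution volume data
    # ...hand `grid` to OSPRay as a structured volume and return a bounding
    # box, so Blender can show a stand-in without ever loading the data...
    return grid.shape
```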
This is basically a list of what I just said; the important thing is that Blender does not know anything about what the plugin does, it only has the mesh, a bounding box or something else, and the parameters. And you can do interesting things with your plugins. I have one, for example, that uses the Open Asset Import Library, which can load pretty much any polygonal mesh, and quickly, much faster than most of the importers in Blender, which is interesting; one that reads from an HDF5 file; one that generates some testing data; et cetera.

So, one example, actually two examples. This one is from the University of Twente here in the Netherlands. They run simulations of wind farms: when you have one windmill it doesn't really disturb anything, but once you get a hundred in a row, the turbulence of the first one starts to influence the next one, and so on; the efficiency loss is very hard to model, depending on the wind direction and speed, so they do this simulation, and there's the reference over there. This is a visualization that I did: it's a volumetric data set, the size is up there, and I have the color ramp, which I abuse to set the transfer function on this volume; this way you can still edit the opacity and the colors of the velocity field. The visualization may not be quantitatively correct at this point, because I haven't checked it for correctness. This also partly works already in the interactive render, although not everything will work as expected. The windmills are just Blender objects, meshes; we've got the bounding box for the volume, and once you start rendering, the OSPRay render server does its thing and sends you the image, basically like a normal Cycles interactive render. Okay, I'll stop it there.

Another example: this is a data set that I showed two years ago here at the Blender Conference. I was struggling then to get all these blood cells visualized in Cycles, and I ended up hacking Cycles; well, I don't have to do that anymore. One box, one plugin that loads all these data sets. It's basically two small meshes that get instanced a couple of million times, a total of 10 billion triangles, rendered in OSPRay in under two minutes. And the nice thing is that the blend file stays really small, because the only thing in there is this box object, a light and a camera; all the data stays outside, still editable, and is loaded on demand.

The final thing: obviously people are interested in the performance compared to Cycles. Take the next slides with a huge grain of salt, that's what that image means, because we're comparing apples and pears, more or less. I took the BMW testing scene from the benchmark and converted it, because of the linked libraries in there; it's all local objects now, so I can edit the materials. Obviously the shading networks in Cycles cannot be duplicated in OSPRay, because it doesn't have those, so I just picked a shader that I thought was more or less the same thing, but not exactly. At the same number of samples, 35, OSPRay is about 10 times faster; however, you can see that the image is much noisier with OSPRay, which I think is to be expected, since OSPRay is aimed at interactive rendering.
The Intel guys would probably just run the denoiser after this to get a better image out of a low number of samples. Using a few more samples to get to the same image quality, here a hundred, it's still four times faster, but still too noisy. At 400 samples you start to get into the same ballpark, and then the time is more or less the same between Cycles and OSPRay. There is still quite a bit of overhead, because every sample rendered on the render server gets sent back to Blender as an image. If you want to try it yourself, I've released it on GitHub; you need OSPRay 2.0, which is not released yet, but you can just clone the repositories, and there's a whole list of things that might or might not work, see the README, or ask me if you have any more questions. That's it, and finally some thank-yous, because I needed quite a lot of help with this, at least information: the OSPRay development team, the devtalk forum and the Blender developers, because developing a render engine add-on isn't easy, there's a lot of stuff underneath; and the two universities that gave me their data sets. Thank you very much.

It says you're using the Blender render API? Yes. Okay. So, would you expect that with more and more samples you get better shading quality compared to Cycles? Oh, I don't know; I just see what I see, basically. It's not only about the number of samples but about the sampler used, whether you use a Halton sampler or a certain stratified sampler; it might be better to pick a better sampler than just to increase the number of samples. Okay, I haven't looked in detail at what OSPRay does there. The thing is, OSPRay's support for path tracing is good, so we can get a bit more physics into the scene, but to me there's still a missing layer that would actually push the shading quality compared to Cycles. So what I wanted to know is whether using OSPRay was about creating visuals with stunning quality and getting more performance, or just about Blender's limited support for volumes. Volumes were, I'd say, the initial reason for looking into this, but then I started thinking about the data problems, and then the client-server setup made more sense to me, and OSPRay right now is what I have. Look, we're not trying to make very beautiful scientific visualizations, because in most cases that doesn't make much sense; it's about the data. If it's about outreach and getting funding, sometimes you might need to produce outstanding images. The question is whether you would do that for really large data sets; the images you showed in the beginning were fairly simple, one neuron, but I remember last year's talk from you guys with the very complex neural networks. For the shading, I don't think it matters that much at that point, that's my impression. Sure; it's just that the main motivation for you to write this kind of plugin was a question mark to me. OSPRay is a wonderful engine using the latest Intel CPUs, which gets you a semi-interactive path tracer, which is perfect, and integrating it in Blender, I won't say as an alternative to Cycles, but as one more rendering engine, would give people a boost to really use it for SciVis or something like that. Okay, fair enough.

I have another question. Let's say, in a hypothetical future, Blender supports implicit surfaces better and can load data on the fly without saving it to the blend file; would you still use OSPRay? Probably not; the main motivation is the current limitations in Blender: volume rendering, large data sets and large scenes are really an issue. You can't create a million objects in Blender, it doesn't work, so this was a way around that.

Hi, could you switch back to the BMW slide, the first one? This one, yeah. I know that the samples in this scene are squared, the Square Samples checkbox is enabled, so it's not 35 but around 1,000 samples per pixel. Okay, interesting. And what about OSPRay, did you copy the render configuration? I tried to match it more or less in terms of ray depth, Russian-roulette depth, et cetera; I tried to use the same values where possible, but it isn't exact, and that's what I meant with apples and pears: you can't completely duplicate it. Okay, thank you. Any other questions, remarks, love notes? Oh, thank you very much.

So, as a question to the panel: how does your work environment, your university, or wherever you are, look at both using and contributing to open source? Well, I would say it's very positive; I think most publicly funded places indeed look at open source as the way to go compared to commercial stuff. Contributing back is a bit harder, maybe, because it takes effort and money, and the added value isn't always immediately clear to them. As a software developer I'm very much in favour of open source, because it makes it easier for me to get, not recognition necessarily, but to make sure that the software is cited properly in any papers it's included in, which I think is important. The university generally seems to be in favour of open source too, although we sometimes have legal issues with sharing things that are believed to be university IP, and it's sort of complicated; there have been a few spin-out companies from the university that have gone on to manage open source projects, so it can work, though I'm not sure Newcastle has fully got a handle on how it's supposed to work. The main problem we have is usually when a project starts and they specify what they're producing; persuading a PI to open source their code, particularly code they've written in the past that they're not particularly proud of, can be a challenge, but I think it's getting better. Yeah, I have to agree, because universities usually are open, and ours are also very open, but if there are projects with industry, for example, then the industry is often the factor that makes it not so easy to share code and results. But usually the university is also building on open source projects, like we do with the robotics algorithms; they basically all build on the Robot Operating System, ROS, which is open source, and it's great that we have the possibility to use it. And of course it's better for science if stuff is open source because
It says "using the Blender render API"? Yes. Okay. So what do you expect: if you use more and more samples, would you get better shading quality compared to Cycles? Oh, I don't know; I just see what I see, basically. Well, it's not so much about the number of samples as about the sampler you use, whether it's a Halton sampler or a certain stratified sampler; that's why I can see it might be better to pick a better sampler rather than just increasing the number of samples. Okay, I haven't looked in detail at what OSPRay does there.

Yeah, the thing is, the support in OSPRay for path tracing is of course good, so we can get a bit more physics into the scene, but to me there's still a missing layer that would actually boost the shading quality compared to Cycles. So that's what I wanted to know: whether using OSPRay was basically about creating visuals with stunning quality and getting a bit more performance, or just because of Blender's limited support for volumes. Volumes were, I'd say, the initial reason for looking into this, but then I started thinking about the data problems, and then the client-server approach made more sense to me, and OSPRay right now is what I have. And look, we're not trying to make very beautiful scientific visualizations, because in most cases it doesn't make much sense; it's about the data, it's not about perfection. But if it's about outreach and getting funding, sometimes you might need to produce something outstanding. The question is, would you do that for really large data sets? The images that you showed in the beginning were fairly simple, one neuron. I remember last year's talk from you guys with the very complex neural networks; at that point I don't think the shading matters that much, that's my impression. I mean, yeah, sure. No, it's just that I don't know exactly what the main motivation was for you to write this sort of plugin. Of course, OSPRay is a wonderful engine using the latest Intel CPUs, which can give you a semi-interactive, even path-traced, renderer, which is perfect, and integrating it in Blender, I won't say as an alternative to Cycles, but as one more rendering engine that would give people a boost to really use it for sci-vis or something like that. But the main motivation, to me, that was a question mark. You know, I don't know how to start. Okay, fair, fair.

I have another question. Sure. Let's say in a hypothetical future Blender supports implicit surfaces better and can load data on the fly without saving it to the blend file; would you still use OSPRay? Probably not; the main motivation is the limitations right now in Blender: volume rendering and large data sets, large scenes, that's really an issue. Yeah, you can't create a million objects in Blender, it doesn't work. So this was a way around that.

Hi, could you switch back to the BMW slide? Which one, the first one, the third one? This one? Yeah, this one. I know that the samples in this scene are squared, meaning there's a checkbox enabled, so it's not 35 but effectively around 1,000 samples per pixel. Okay, interesting. And what about OSPRay, did you copy the configuration for the rendering? I tried to match it more or less in terms of ray depth, roulette depth and so on; I tried to use the same values where possible, but it isn't exact. That's what I meant with apples and pears: you can't completely duplicate it. Right, okay, thank you. Sure. Any other questions, remarks, love notes? Oh, thank you very much.

So, as a question to the panel: how does your work environment, your university or wherever you are, see both using and contributing to open source? Well, I would say it's very positive. I think most of the publicly funded places indeed look at open source as the way to go compared to commercial stuff. Contributing back is a bit harder, maybe, because it takes effort and costs money, and then the added value isn't entirely clear to them immediately.

As a software developer, I'm very much in favor of open source, because it makes it easier for me to get, not recognition necessarily, but to make sure that the software is getting cited properly in any papers it's included in, which I think is important, specifically for software development. The university generally seems to be in favor of open source, although we sometimes have legal issues with sharing stuff that is believed to be university IP, and it's sort of complicated. There have been a few spin-out companies from the university that have gone on to manage open source projects, so it can work, but I'm not quite sure Newcastle has got a handle on how it's supposed to work yet. The main problem we have is usually that when a project starts, they specify what they're producing, and persuading a PI to open source their code, particularly if it's code they've written in the past that they're not particularly proud of, can be a challenge. But I think it's getting better.

Yeah, I have to agree with you, because universities usually are open, and our universities are also very open, but if there are projects with industry, for example, then the industry is often the factor that makes it not so easy to share code and results. But usually the university is also building on open source projects; for example, we do that with the robotics algorithms, they all basically work on the Robot Operating System, on ROS, so this is open source and it's great that we have the possibility to use it. Also, of course, it's better for science if stuff is open source, because it means other people can download your code and reproduce your research, and it also means you can build on things rather than having to reinvent the wheel all the time.
Well, I can just agree; I have nothing more to add. Well, I might have a slightly contradicting view: open source is amazing, but it has to be done the right way or not done at all. Which means that if you write a very nice plugin and you'd like to share it, then before you share this plugin it has to be tested very well and documented very well, such that any other person can really build on top of it. That's something I found, and I also considered it when building my own plugin: to make sure that I still give support for at least two or three years, that's one thing, and also that it's designed in a way that lets the users understand where an issue is. Because with some of the Blender plugins, I just click a button, Blender crashes, and then it's a big question mark, like, okay, so what's next? So most people say, okay, we simply don't use this piece of software and we build our own from scratch, which would in turn take years. That's why I was quite happy when I received some messages saying, you know, your plugin might be good, so we are going to build on top of it; I told them okay, and I give support. Of course every piece of software is buggy and you have to go through some debugging cycles, but if universities would put a certain amount of budget towards maintaining the software before open sourcing it and making sure that the documentation is really well done, that would be the missing piece that would get open source software used by other universities and other people. But essentially it's a very, very nice thing: if you write some piece of code and then share it with people, it might serve different use cases, different domains, and in the end you get something in return, of course; you get cited, or you get the credit of having your software being used. So it's something.

Well, for my university, open source is, I think, the way to go, and I'm pushing this very strongly everywhere. Working with commercial software, which is closed, is no way to go for us; that's kind of the statement, and that's the way I see our university doing it, so I think that's it.

So should we teach our computer science students and other students how to contribute to open source? Do you think there should be a course on that, or should they figure it out for themselves? Well, you could make an argument about society in general: should you give back, and in what way? So maybe within that context you could try to tackle it. But in terms of software, I don't know; kids don't even learn programming properly right now. I'd say it's starting to become common, but it's not good enough, so I don't know. I think it's an essential thing to teach them, one way or another, rather than just using Windows and whatnot. Like, okay, we would like to use Ubuntu, and then why do we use it? Because we'd like to keep developing open source software. So if you just teach them, give them the seed with which they can build on top of it, that would be fantastic. But the thing is, how should you make the curriculum of such a course, or shape the approach itself to motivate them and push them in that direction? That would be the question that needs to be answered.
I think it doesn't even have to be an entire course about contributing to open source, just one class, one approach. Yes. When I was a co-teacher for the computer animation course in Utrecht, the students had to read certain papers and write an essay on them, and I also told them: these papers are written by people, there are actual authors, people who are proud of their work, and if you don't understand something, just send them an email; they're happy to hear that somebody is interested in their work, so they will respond and they will explain. And some students' minds were blown: this is not just homework, these are actual people we can talk with, we can collaborate with. I think a message like that we should also include about open source, about collaborating on something instead of grabbing something from the internet that's for free and using it. It is something that's alive, that has people behind it, that a lot of thought has gone into, and I think it's all about collaboration; that can help in scientific collaboration as well.

I have basically nothing to add to those famous words. I think it's not only about the communication; it's also that some people might have the exact same problem on another part of the world, where you can make a difference if you help that person, to change our society and also foster the development of new technologies, maybe.

Well, it was at the same university, and we were also teaching programming to students, and we had a special term for one thing, called commit angst: the feeling that you're afraid to commit something to a project. You have written the code, but you don't want to commit it, because everybody sees it, not only your team, and if I make an error then people will blame me, and so on. We have to teach them that it's okay to commit and to put things out, and that they won't die if they do it. So that's a very important thing. Very good, thank you.

Sorry, I was thinking about your last question before this. I was working at the university and I developed some programs, and I wanted to give them back to the community, but I was paid to do the research, and when my funding for the research ended, the program died. It's still used, though; I heard just the other day that they're still using it five years after I left, at my old place, but I couldn't give it back. I also have about 500 gigabytes of data that I recorded and wanted to give back to the community, but I can't get any funding for that, because, well, it's not research, it's giving back to the community, and there's no money in the system at the moment to give something back: to take what you have already done and developed, put in the final touches, document exactly what the data is and what all the programs are doing. That would probably have taken me about half a year, and then it would have been finished, but I couldn't get money for it. So the data is lost and the programs are not supported anymore. I think that is something we need to think about as a community.

Thank you very much. I think on those words we have to end this, because it's time. Thank you so much to our speakers.