Thank you very much for the introduction. Yes, I'm talking about pyRT, pronounced correctly: "pirate". But let me first talk about what I want to do. First, I want a way to render my scenes from indoor to outdoor, and I mean from planet scale, really planet scale, down to indoor. That's my first challenge, or at least I try that. The second challenge: I want good quality, and good quality means I want to use global illumination. I really don't want the result we see on the left; I want the result on the right. If you paint your wall green or red, then the floor also looks a little bit greenish or reddish. That's global illumination: we really follow the path of light.

Let me show you my previous work. What did I do the past 12 to 13 years? I created a virtual globe, starting in 2005. You probably know Google Earth or other virtual globe technologies; I created my own. It's called OpenWebGlobe. It was originally written in C++, and then in 2011, with the advent of WebGL, we moved to JavaScript. Wow, I said that at a Python conference, so throw the tomatoes. Okay, it's really a JavaScript library, and you can do some cool stuff, like adding lots of geodata. You can have photos added, you can even have point clouds added, large-scale point clouds; we had millions, billions of points on the virtual globe. The data processing part was also written in C++ and was moved to Python, so there is the Python part; you can put away the tomatoes again.

Let me show you how it looked. Here we had a dataset of Switzerland, about three terabytes of data, and you can fly around. You don't really see buildings here.
You see the elevation model and the orthophotos. In 2013 a master's thesis started to add buildings. But the big problem with buildings is: where do you get buildings for the whole planet, if you are not Google, or if you don't have a budget of a few billion dollars? One solution we came up with: why don't we use OpenStreetMap data? Maybe you know OpenStreetMap; it's a 2D map. Many people do mapping with crowdsourcing, so you can really add your own buildings, and there are some people who actually add tags for the roof shape, for example, or the building height, etc. So what if you take this data and create a 3D model of the whole world?

That's what we did. You see on the top left a scene in Switzerland where people didn't really add building heights, so we estimated a standard building height there. On the top right is New York; it's much better there. Some people really did add the real shapes of the buildings, and that's really inside the OpenStreetMap dataset. On the bottom right we wanted to add what we call buildings of interest: buildings where you have a textured object, and you can replace the OpenStreetMap geometry with it. On the bottom left you see the Forbidden City in Beijing, and you see the typical roofs are not there yet.

Let me show you very briefly how it worked. We also used the OpenStreetMap data as the ground layer and added some buildings. Here's a building of interest, a pyramid. Not very special. You can zoom around, then you can go to another city. I think the movie froze. No, okay, it's just slow. Then let's go to London, and you see the loading speed; okay, it loads some buildings up here. We simulated a slightly bad bandwidth there.
I have to admit that, but it's a common bandwidth you have with a normal mobile phone, so 3G. And then the buildings of interest are loaded too. You see the bridge is not complete, because this is missing in the OpenStreetMap dataset. Okay, that's what it looked like.

And how did we do it? It's quite easy. We use some projections; I don't want to go into details here. Also with Python: there is a great library called Proj4, where many, many cartographic projection systems are included. And everything is of course quadtree-based. It's the same principle as 2D maps, where you can zoom in and zoom out; the only difference is that you look at the virtual globe from the side, so you have different levels of detail at the same time. You also have to consider different tile types. You have the classic orthophoto, 2D imagery; you can have 2D vector data; and you can have a 2D elevation tile, 2D because it basically is just a height map, and this height map is converted to a 3D model, this yellow shape we see there. We also added some special geometry tiles with special triangulations to get better quality, and we mainly used the GDAL library with the Python bindings to access raster data, including the elevation. Okay, that was the previous work.

I was not really happy with it. The rendering speed is not really that great, and another problem is that the ease of navigation is not that great. Maybe all of you know Google Earth. Most people look at it one time, maybe two times, and then they end up using Google Maps. That's the truth, unfortunately. Another problem is that the support for mobile devices is not that great. WebGL runs on mobile devices, but you really need the latest and most expensive device to run such a scene. And of course the most important argument: I want more Python.

So we came up with some ideas: why don't we just take the system of a 2D mapping system and render it from the side?
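The quadtree tiling described a moment ago is the standard slippy-map scheme from 2D web maps: each zoom step splits every tile into four children. A minimal sketch of the tile addressing (standard Web Mercator math, not the project's actual code):

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Map WGS84 coordinates to slippy-map (Web Mercator) tile indices.

    At zoom level z the world is a 2**z x 2**z grid; each zoom step
    subdivides every tile into four children -- the quadtree.
    """
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tile_children(x, y):
    """The four child tiles one zoom level down."""
    return [(2 * x + dx, 2 * y + dy) for dy in (0, 1) for dx in (0, 1)]
```

The same addressing works whether the tile payload is an orthophoto, vector data, or a height map; only the content handed to the renderer differs.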
So you actually see the 3D building, but it's just a 2D map; it works exactly like a 2D map. You can have a city model, for example Rimini, in six levels of detail, and if you do six levels of detail you probably have two to three buildings per tile. That's not much, and with two to three buildings per tile you can render extremely fast. The good thing is you never need the whole city model in memory, even when you render it, because you only render one tile at a time. This means the rendering is really fast, and the best part is I can run all these processes in the cloud. So this can really be done in parallel, per request, or I can pre-render it.

Here is how it looks. I have a zoom level n; I didn't make too many levels, just for this slide, so you understand the principle. At zoom level n I render in the best quality with an offline ray tracer, and then with image processing I create the remaining levels of detail.

Let me show you an example. We used an open dataset of the city of Rotterdam. We used it because it consists of CityGML files of around 3 GB and has textures of about 500 GB. Here's the browser; we enter the URL, and you see, not even one second later, you can interact with the scene. And the good thing is we can do some more stuff, like interaction: we can select some parts, you can zoom in and out.

Now you may wonder: how did we do this selection in a 2D map? That's also quite easy. We use a very old principle from the 1990s called the G-buffer. A G-buffer is a group of image types. First we have the color information, just the color of the buildings. The second is a normal map, where you store the normal of each pixel. The third is a classification type of map.
It's a color ID map: for every building, or every building group, you have a color, and with this color you can get the ID back. The fourth, I didn't really put the picture here because it makes no sense visually, is the depth map. For every pixel you get the depth value, and with this depth value you can recalculate the exact 3D position in the scene, so you can even measure within this system.

So this is the principle. Now the hard part was: how do we render it? First we tried RenderMan; it's the best render package in the world. Then we used POV-Ray, a great package. Here on the left is a scene created with ray tracing; the projection was not that good yet, but you see the texture quality is much better, and we even have shadows there. On the right side is a classic WebGL-based viewer. So we used POV-Ray, but the problem with POV-Ray was that we did not have good support for the depth map and the normal map, because we want a high-precision depth map with at least 32, ideally 64, bits per pixel. The only solution, of course, was to create our own renderer, and that's where pyRT comes from.

pyRT is a ray tracer that uses Python 3.5; it must be 3.5, it can't be less than 3.5. I will talk about that in an instant. The primary goal is to render high-quality images. We want to support several different rendering techniques and different lighting models, and of course we want Jupyter integration, so we can render images in a Jupyter notebook. But the most important point, as I mentioned before: we want precise depth maps, normal maps, and object ID maps, and we want to create them in the same render step, so we don't have to render four images.
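The depth-map measurement described above works by turning a pixel plus its depth back into a viewing ray. Here is a sketch of that reconstruction for a simple pinhole camera; it illustrates the principle only, the function and parameter names are made up and this is not pyRT's actual camera code:

```python
import math

def pixel_to_world(px, py, depth, width, height, fov_deg,
                   cam_pos=(0.0, 0.0, 0.0)):
    """Recover the 3D point seen at pixel (px, py) from its depth value.

    Assumes a pinhole camera at `cam_pos` looking down -Z with square
    pixels; `depth` is the distance along the viewing ray.
    """
    aspect = width / height
    half = math.tan(math.radians(fov_deg) / 2.0)
    # Pixel centre -> normalised device coordinates in [-1, 1]
    ndc_x = (2.0 * (px + 0.5) / width - 1.0) * half * aspect
    ndc_y = (1.0 - 2.0 * (py + 0.5) / height) * half
    # Direction of the viewing ray through this pixel (unnormalised)
    dx, dy, dz = ndc_x, ndc_y, -1.0
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return tuple(c + depth * d / norm
                 for c, d in zip(cam_pos, (dx, dy, dz)))
```

With two such reconstructed points you can measure a distance directly in the scene, which is what the selection and measuring tools rely on.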
We just render one time and we get all this information. Commercial and open source ray tracers do not have that support. Also important: we don't want a fancy GUI like Blender. We want to run this in the cloud, and we want it really as a command-line renderer.

Let me show you a simple hello world. pyRT is split into different submodules, so you have a math, geometry, material, camera, and so on module. We have a renderer module which is abstracted, so you can add your own renderer. First you create your camera, you create your scene, you add triangles or geometry to your scene, you specify the material, and then you create a renderer and render it. Now you may think: why do we do it this complicated way, actually writing code to render a scene? But this makes it quite easy to support different formats, and at the end we have a Python program which describes the scene.

This is how the Jupyter notebook integration looks: you can really add your geometry there, render it, and see the result inline. One thing I didn't mention yet: it's pure Python, it does not depend on any library. However, for the Jupyter integration we check if PIL is available to create an image, and we check of course if Jupyter is available, so we can create a result in HTML.

Let me quickly show the principle of ray tracing. It's really a very simple principle. We have a light source, and from this light source we have rays. Maybe these rays hit an object, maybe this object reflects these rays, and if we are very, very lucky, these rays hit the camera. So maybe we shoot one trillion rays and 20 to 30 rays hit the camera. You see, it's quite expensive. Then some people came up with the idea: why not do it backwards? We start from the camera.
We shoot some rays, we hit an object, and then we see if there is light, or maybe it's occluded, so we have shadow. With this principle we need a minimum of only one ray per resulting pixel. If you have full HD, you only need two million rays. I'm coming back to that; you all know how fast Python is.

We have some more features; I've already told you a few things about them, and I'm not going into details because I see the time is running short. But now the speed. To really accelerate pyRT we had a master's thesis last year, actually finished this year, which used the Radeon Rays API. Radeon Rays is a ray tracing kernel where you can do one thing: you shoot rays and you check if they hit something. It's really very, very simple. It internally uses OpenCL to support different GPUs and CPUs. For our experiments we used an NVIDIA GeForce GTX 1080 Ti. It has 8 GB of memory, a memory bandwidth of 320 GB per second, really nice, and a theoretical compute power of 11 teraflops. I'm not going into details here; you can look that up. If you had bought an 11-teraflop machine 20 years ago, you would have needed billions and billions of dollars, or euros, it doesn't matter.

Okay, let me show the result. If you use what I call native Python, on my machine we came up with 12,000 rays per second. If we use the GPU, still from Python of course, we get almost 90 million rays per second.
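The backward-tracing idea just described, one camera ray per pixel plus a shadow ray toward the light, fits in a few lines. This is a toy textbook example in plain Python, not pyRT's API:

```python
import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))

def hit_sphere(origin, direction, centre, radius):
    """Smallest positive ray parameter t where the ray meets the sphere,
    or None if it misses (direction must be normalised)."""
    oc = sub(origin, centre)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def shade(origin, direction, sphere, light_pos):
    """One backward-traced sample: camera ray first, then a shadow ray."""
    t = hit_sphere(origin, direction, *sphere)
    if t is None:
        return 0.0                      # background
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    to_light = sub(light_pos, hit)
    dist = math.sqrt(dot(to_light, to_light))
    ldir = tuple(c / dist for c in to_light)
    # Shadow ray: is the light occluded?
    if hit_sphere(hit, ldir, *sphere):
        return 0.05                     # in shadow
    normal = tuple(c / sphere[1] for c in sub(hit, sphere[0]))
    return max(0.0, dot(normal, ldir))  # simple Lambertian term
```

Looping `shade` over every pixel of a full-HD image is exactly the two-million-ray budget mentioned above, and also shows why pure-Python inner loops are the bottleneck that a GPU kernel removes.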
So it's a really big increase in compute power. Let's look a little bit at lighting models. One very nice and easy-to-program lighting model is ambient occlusion. If you compare it with no lighting, or with classic plain Phong shading, it really looks much better. But it's a fake, actually; it's not real global illumination, it just looks good, and it's quite easy to implement. You just shoot rays from a point and see how many of them hit an object, so you get an occlusion number: if many rays hit another object, the point is darker, and if no rays, or almost no rays, hit anything, it's lighter. It's a really simple solution to implement.

In the master's thesis of Marcus Fehr this was implemented, and the result doesn't look bad at all. You can test different numbers of rays; with 200 rays it already looks quite good, and you can use a filter to increase quality further. If you compare this with plain Phong, a city model, for example, looks much nicer. If you zoom into a city model, you see the difference compared with the plain Phong shading mostly used in real-time graphics. You can also use ambient occlusion in real-time graphics, if you look at computer games, but here it looks really quite good. Another recently created scene also looks much better; it pays to invest some time in a good lighting model.

So we come back to the occlusion map; we want to have a map at the end. This movie shows how to create such a map where you can really interact. You can pre-calculate it; you could also pre-calculate some movement if you want, but then you need really lots of frames. And you get a really interactive view of this city.

This year we had another master's thesis; we come back to the OpenStreetMap dataset. We wanted a global way to render all
buildings of the world with OpenStreetMap data. The solution the student came up with: why don't we just take the OpenStreetMap 2D map and put the buildings on it in 3D, seen from the side? It's a fake, 2D and 3D combined, but it doesn't look bad at all. We did this with New York City. Here is a scene with a satellite image from Mapbox combined with this rendering technique.

Let me show you a live demo, if I have time. Okay, I have 10 minutes, great, so I was too fast already. We have to wait for the projector, which seems to be slow today. No problem. Oh, I'm in the wrong directory here, one second. Let me start demo one. That's actually New York City; you see this is just Manhattan, basically, a part of it, and you can zoom in. Now I hope I have an internet connection. It's great to have a conference with a slow internet connection, I always say at this point, because if it's slow you really see something. In the background you see the normal OpenStreetMap data, and in the front you see the rendered buildings. That's about how it looks. You see, here it was slow, great; so you see how quickly it loads on this slow connection.

You can also change the view direction. We only made four directions, one from each side; you could have more if you want, it's also a matter of cloud storage cost, basically. Here you can also see the normals, and you can have a color map, so here it's just the colors. And the depth map; however, on the depth map you can't really see anything, because it makes no sense visually. It's one float per pixel stored as an image, so in this case 32 bits per pixel, stored as an image. Why an image? So we can download it from the cloud, process it again, and convert it back to a normal depth map. That's basically the demo I wanted to show. See, isn't it great? It's really 2D, if you look here.
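Storing one 32-bit float per pixel inside an ordinary image, as the depth-map demo just described, amounts to packing the four bytes of each float into the RGBA channels. A sketch of the idea (the exact channel layout and byte order in the project may differ):

```python
import struct

def depth_to_rgba(depth):
    """Pack one 32-bit float depth value into the four 8-bit channels
    of an RGBA pixel, so a lossless format like PNG can carry it."""
    return tuple(struct.pack('<f', depth))

def rgba_to_depth(rgba):
    """Recover the float depth from an RGBA pixel. The image must be
    stored losslessly -- any compression that alters pixel values
    destroys the encoding."""
    return struct.unpack('<f', bytes(rgba))[0]
```

After downloading such an image from the cloud, decoding it pixel by pixel gives back the full-precision depth map.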
It's really a 2D map from the top; it's just a standard OpenStreetMap. Now it's slow; okay, now the internet connection is dead. But it's a 2D map from the top, and on top we add these 3D buildings. So it's really a fake, but it looks really like 3D, and the nice thing is we can do that for the whole planet.

Let me show a second demonstration, of a city which doesn't have such nice shapes: the city of Basel, where I live. Here too, you can just zoom in, and it's a 2D map overlaid. There are some buildings; for example this Roche Tower here, someone really did the work and added it to OpenStreetMap. Now we just have to wait: if every person in the city creates one building, then we have a nicer map. You see, this is how it works, and it's really fast. It's a combination of JavaScript and Python again, but I think it's a good way.

So let me come slowly to the end, so we can have Q&A. I know this projector will be slow, so: one, two, three, four, five... I don't know why, but okay, let me start with the conclusion already while we wait for the image. Oh, it's here, great. We saw pyRT, an open source ray tracer which has been written completely in Python, with some additions like the Radeon Rays API, which is OpenCL and C++. But because it's pure Python, I think the code, at least the Python part, is really readable and understandable. We already support several different lighting models, and I didn't really show you all of them yet. With
GPU acceleration we really have a fast solution, and we have already used pyRT in several projects. One was this recent master's thesis with the OpenStreetMap data, and we also have another project ongoing which I can't show at the moment because of some contracts, but pyRT is used there too. As an outlook: we are currently refactoring the master's thesis code a little bit to put it on GitHub very soon. In the future we also want to support more ray tracing kernels; for example, Intel has the Embree ray tracing kernel, which is CPU-based. It doesn't use the GPU, but it's still very fast because it uses all the modern features of a CPU. And soon there will be another master's thesis which will add more features to the ray tracer. So, thank you very much for your attention.

Host: Thank you very much, Martin, for this absolutely fantastic talk. Since there's no session after this, I think we have some time for questions, so feel free.

Q: Thanks. You said you use the GPU to accelerate pyRT. How can pyRT access the GPU? Do you use some kind of wrapper, or something else?

A: It's a wrapper. The GPU part is completely written in C/C++, and then we have bindings and call these functions. However, this is just an optional path: the basic ray tracer doesn't need a GPU, you just have to wait longer to get the same result.

Q: Hello, very nice talk. What is the precision, or do you care? I mean the precision of the intersections and all the mathematics. That is, if you place your camera far, far away, like 700 kilometers away, the way a satellite would look at the same spot, do you still have precision at a very, very tiny place? If you want to simulate something like a satellite image, do you have enough precision, or do you know that something is missing and you don't care?

A: Yes, we have enough precision.
There are some tricks, several tricks actually, to increase the precision. One very simple trick is to have, for example, several view frustums and to decide which frustum you use depending on how far away the object is; however, this solution has problems. There is a very elegant solution using virtual cameras: you have an offset in the camera, you set this offset to zero, and you offset all your geometry in real time instead. Then you can really go from far to near, from the whole planet down to a house, even inside a house, without losing precision, because you always have full floating-point precision near the camera. In the virtual globe, for example, you are limited to single-precision floats, because that's the way the GPU handles the data at the moment, to be compatible with all common GPUs. In the ray tracer we use double precision for everything, so we can of course get somewhat better precision. This is especially important for the depth map, because we want to measure in the sub-millimeter range. We have some use cases where we want to measure, for example, cracks in retaining walls or dams, and there we really need sub-millimeter accuracy; there we can create normal maps and depth maps with this precision.

Q: Hello, thank you for your talk. I am wondering, did you consider trying to use real-time path tracing, with a bunch of tricks of course, in order to simplify the image storage and everything?

A: That's an option, and I'm open to it, of course. But in the first version we don't want to use it; it's just a decision to advance the project more quickly. In the future we can add it. The renderer is so highly abstracted that you can write whatever you want and it should work.

Q: Okay, then thanks again.
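As a closing illustration of the precision answer above: at planet-scale coordinates, 32-bit floats cannot hold sub-metre detail, which is exactly why the camera-offset trick (and double precision in the ray tracer) matters. A small demonstration with made-up numbers:

```python
import struct

def to_float32(x):
    """Round-trip a Python double through 32-bit storage, as a GPU
    pipeline limited to single precision effectively does."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# A point ~6371 km from the planet centre, carrying a 12.3 cm detail:
world = 6371000.123
err_world = abs(to_float32(world) - world)   # float32 spacing here is 0.5 m

# Camera-relative rendering: subtract the camera position in double
# precision *before* the coordinates ever become 32-bit.
camera = 6371000.0
local = world - camera
err_local = abs(to_float32(local) - local)   # tiny: full precision near camera
```

The world-space version loses the centimetre detail entirely, while the camera-relative version keeps it, which matches the talk's point that the ray tracer itself works in double precision and only the GPU/WebGL side is limited to single.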