Okay, hi. My name is Felix. I'm half Czech, half British, currently living in Prague, working freelance and on some of my own projects. TerraJS is the big project I've been working on for the last couple of months. It's very much in an unfinished state, so what I'm going to show you today is a look into the future, which is basically a convenient excuse for there being quite a lot of bugs. I'm going to talk about what TerraJS is and why I decided to build it, which is the number one question I get from tech people. They often say, can you build something like Google Maps, but when I enter my directions into it, make it actually route me to the right place? And I say, so you want me to reinvent all of Google Maps and their navigation system. That's not what I'm trying to do. This is more of a terrain rendering engine. This being a dev conference, I'm going to talk a little bit about the architecture behind TerraJS, some of the WebGL and JavaScript involved, and then I'm going to look at plans for the future and some use cases. So first of all, what is a semantic terrain engine? It's a term I came up with, for want of a better name, to describe the process of taking metadata which describes a location and then using that data to build up a 3D scene from the ground up. As an example, say you're trying to render a location where there's a lake and a forest. Rather than sending down the exact data for every tree and for the water, you just tell the engine: this region here is a lake, this region here is trees, this is sand, and have the engine make the rendering decisions on your behalf. So the client itself is a lot smarter than if it were just displaying generic 3D content.
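That dispatch — regions tagged with a land-cover class, and the client deciding how to draw each one — can be pictured as a simple lookup table. This is a minimal sketch in plain JavaScript; the class names and renderer functions are illustrative assumptions, not TerraJS's actual API:

```javascript
// Hypothetical land-cover classes mapped to renderer functions.
// The server only sends a class per region, never per-tree geometry;
// the client decides how each class gets drawn.
const renderers = {
  water: (region) => `animated water over ${region}`,
  trees: (region) => `procedural forest over ${region}`,
  sand:  (region) => `sand shading over ${region}`,
};

function renderRegion(region) {
  const draw = renderers[region.landCover];
  // Unknown classes fall back to plain satellite imagery.
  if (!draw) return `satellite fallback over ${region.id}`;
  return draw(region.id);
}
```

The point of the indirection is that adding smarter rendering for a class (animated water, seasonal trees) is purely a client-side change; the data format stays the same.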
So rather than focusing on getting the best-resolution photography you can for a certain location, you try to understand the landscape better, generate a model from that data, and use the model rather than the raw imagery directly. The key here is the metadata, which is called land cover data. I hope this will be a little clearer if I show you a demo of how it works. Let me just switch over and try to make this full screen. Okay, I'm not sure what the projector is doing, but you can see it anyway. So here, you've probably seen something similar in different 3D globes before, except that the sun isn't usually parked over Geneva. That's just for testing purposes; when you get close to the earth, you can't tell. The reason the sun is actually there is that if it were behind the atmosphere, you couldn't see it at all. So there are some slight physical issues with this rendering engine. Right now it looks more or less like any other globe. But as I zoom in, specifically focus on the water itself. I'll go to Barcelona, because that would be nice. The water switches to something which is more of a true 3D model, with reflections and animations, all that kind of stuff. As I said, this is unfinished, so the water is the only part that at the moment does something more than render static data. Then there's also the terrain data, which gives you the land, the elevation data and so on. The shading is based on where the sun is, and the sun is in a static position, as you've probably noticed by now. Just to give you a flavor of a different location, I'm going to jump to the west coast of the US. Or maybe I'm not going to jump to the west coast of the US. That's a feature, not a bug. All right, so check it out. The slight downside now is that I'm going to have to pan across the Atlantic, so you're going to have to be a little bit patient.
Yeah, it doesn't matter where we are. So, say, here by Vegas. It's quite nice here because there's a lot of interesting terrain data. Cool. Hopefully this gives you an idea of what I've built; I'll talk more about where it's going in a moment. First, the motivation behind building this, which was a little roundabout. I was in Prague sometime last November, waiting for a friend to arrive on the train. I was checking where their train was, looking at the map that the Czech train company has online, and I didn't like it. I thought, this is a bit crap, I can build something better. Their train was about half an hour delayed, so I thought, that's a good amount of time to build something. About two weeks later, long after the friend had left, having seen nothing of Prague because I was just coding this, I had built this thing. So this is the Czech Republic, rendered in 3D, and this was all before TerraJS existed. Each of these black blobs is a railway station, and it lets you do a real-time search of the various trains and their locations, that kind of thing. You get the idea. Having built this, I thought, well, that didn't take me half an hour. Perhaps there's a niche here for a more general-purpose framework, so that if you wanted to visualize 3D data with some kind of terrain component, you could just drop this in. Imagine if I actually could have done this in half an hour. Imagine if I could just say: give me the Czech Republic, this is where my stations are, plop them on the map, and it would just work. So that was my motivation. And also, by this point, it was fun, so I thought I'd keep going for fun. Now, I haven't seen all the talks, so I hope nobody has used this quote before me: live in the future, then build what's missing, which I think is very appropriate for this conference.
To me, what this quote means is that if you're trying to think of something new to create, something that hasn't been done before, you can't just constrain yourself to the technologies and ideas you have around today. You should try to somehow transport yourself, say, five or ten years forward and see what will be available then. What kind of data will we have? What will our computers be able to do? And then try to be creative in that space. So I was thinking: imagine everything is way more advanced than today, perfect elevation data, unbelievably fast computers, anything you like. What would a 3D globe look like in that world? The concept I had in mind was a photorealistic, real-time Earth, the best thing you could imagine: basically our planet. You put on some kind of augmented reality headset and you get transported to New York. Now, I'm not saying I'm going to build that; it's a little bit challenging. But it acts as quite a good thought experiment to have this as the end goal that you're working towards, and to think about how we can get from where we are today to that final destination. Specifically, is there just one route from now to then, or are there different checkpoints we can stop at along the way? So I thought about it a bit, and I came up with two approaches. One is the digitize-everything approach, which companies like Google are pursuing now. They're flying drones and planes, taking photographs of cities and forests, sending cars everywhere, collecting data, collecting data. I can't quite afford to do that right now. So I tried to think of an approach other than digitizing everything in the world.
And that was to try to understand everything: to get more understanding of the world around us and then build the model up from that. As I was saying before, rather than trying to capture an entire forest in 3D, you just say: my forest is in this region here, and I'm going to generate trees there. Okay, so a quick recap of what TerraJS is: it's a framework for creating models that represent places. The engine consumes data that semantically describes a place and builds a 3D model up from that data. This being a dev conference, I thought I'd talk a bit about how this all works. There won't be any code, don't worry; I know you're all tired. But there will be WebGL, and from my workshop I know that makes people a bit headachey, so I apologize. I want to talk about four things: first, the data sources; second, tile loading; third, terrain generation on the GPU; and fourth, land cover and satellite imagery. So first, the data sources. I've basically mentioned all of these already: we have aerial photography, elevation data, and then the important one, land cover data. To give you a quick example of what these look like, this is again Barcelona. In the top left you should recognize the satellite image of the region; that shouldn't be too surprising. To the right of that, we have an image which describes the elevation of the region, where every pixel is a number telling you how high that point is: the higher the value, the higher the terrain. The really interesting one is the land cover data. Here you can see the mass of water in the bottom right, and then on land you have the different regions: in dark, you have trees; in light, more open land. You can probably even spot a river down there, just below all the trees.
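To make the "every pixel is a number" idea concrete: elevation images usually pack a height in meters into each pixel's color channels. One widely used encoding is Mapbox's Terrain-RGB scheme; the talk doesn't say which encoding TerraJS uses, so treat this as an illustrative assumption:

```javascript
// Decode one elevation pixel into meters, using the Terrain-RGB
// convention: height = -10000 + 0.1 * (R*256*256 + G*256 + B).
// This gives 0.1 m vertical resolution from -10000 m upward.
// Whether TerraJS uses this exact scheme is an assumption here.
function decodeElevation(r, g, b) {
  return -10000 + (r * 256 * 256 + g * 256 + b) * 0.1;
}
```

Per-channel 8-bit grayscale would only give 256 height steps, which is why real heightmaps spread the value across channels like this.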
We take these three things, and we want to create the final image. But before we can do that, we have a bit of a problem: we can't just download all of this data for the whole Earth at once. So we need to cut the data up into tiles and stream those tiles from the server as and when they're needed. Depending on the current viewport, TerraJS works out which tiles that region requires and downloads just those. The tiles come at different scale levels, so if you're very far away, you're not downloading a thousand tiles; you just get one big one for the whole area. Okay, that was all in JavaScript, and all of you probably know how to build that. Now comes the part that is, at least for me, the interesting one, where the GPU gets involved. I'm going to talk a little about the GPU pipeline, how you render an image like this. It all starts in JavaScript land with a plane: a 3D geometry that you send to the graphics card. A way to imagine this plane is as a giant chessboard with lots and lots of squares on it, where each square is a vertex. That's the primitive building block of your terrain. You send this entire plane to your graphics card without deforming it at all; you keep it completely flat. Then, on the graphics card, a program called a vertex shader loads in the elevation data, and for every vertex in the mesh, it displaces that particular square on your chessboard up or down, so that you end up with your displaced terrain. Pretty simple. Or not quite: there's a complication to do with level of detail, and I've got another quick demo for that. This is just procedural terrain, and it looks terrible on a projector; it's not quite as sparkly as on a retina screen.
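The viewport-to-tiles step can be sketched with the standard Web Mercator "slippy map" tile addressing, where zoom level z covers the world with 2^z × 2^z tiles (one tile at zoom 0, which is the "one big one for the whole area"). TerraJS's exact tiling scheme isn't shown in the talk; this is just the common convention:

```javascript
// Map a longitude/latitude plus zoom level to Web Mercator tile
// coordinates. At zoom z there are 2^z tiles along each axis, so
// far-away views fetch a handful of coarse tiles while close-up
// views fetch many fine ones.
function lonLatToTile(lon, lat, zoom) {
  const n = 2 ** zoom;
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y };
}
```

The client then requests only the `{zoom, x, y}` triples covering the current viewport, for each of the three data layers.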
The idea is that if you're rendering a large terrain, you don't want as much detail very far away, say on the mountain in pink, because from that distance you can't see the detail anyway. So it's no good having a chessboard which is uniform over your entire scene. Ideally, you want the density of your chessboard, your vertices, to be very high nearby and to drop off as you go further out. That's exactly what this is doing here: each of these colored regions has half the vertex count of the previous one. It doesn't take as much processing power, but it still looks just as good in the final image, because due to the projection it doesn't take up as much screen space. Okay, so finally comes the rendering. We have our deformed mesh, and we're trying to figure out what color to paint it. Having deformed the mesh, we pass it into what's called a fragment shader: a tiny program which runs over every single pixel, or fragment, of the deformed mesh and makes decisions based on the current position. What color should this particular point on the mesh be? The first thing it does is look up the land cover data for that location. It asks: is this a lake, is this a tree, et cetera, and invokes a different shader routine depending on where you are. That routine draws some water, or draws some sand, and combines the result with the data you have from the satellite imagery. So you get an image that's a combination of both: it's computer generated, and it's using the actual colors from satellite imagery. Okay. I said I was living in the future with this, because it's not finished. So the second half is: what do I still need to build? What's missing? This, to me, is an open list.
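The per-fragment decision — look up the land cover at this point, generate a base color for that class, then blend it with the satellite pixel — can be sketched on the CPU like this. In the real engine this logic lives in a GLSL fragment shader; the classes, colors, and the 50/50 blend weight below are purely illustrative:

```javascript
// CPU sketch of the per-fragment decision described above: choose a
// generated base color by land-cover class, then mix it with the
// real satellite pixel so the result is part computer-generated,
// part actual imagery. All constants here are hypothetical.
const baseColors = {
  water: [30, 90, 180],
  trees: [20, 110, 40],
  sand:  [210, 190, 140],
};

function shadeFragment(landCover, satelliteRGB) {
  const base = baseColors[landCover];
  if (!base) return satelliteRGB; // unknown class: satellite only
  return base.map((c, i) => Math.round(0.5 * c + 0.5 * satelliteRGB[i]));
}
```

On the GPU the same branch would typically be a texture lookup into the land cover tile followed by a `mix()` call, one fragment at a time.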
It's just a couple of things that I thought of, but I'm mainly interested in what you think is missing. What kind of features do you think would be cool to add? The first one is obvious: better quality data. You can always have better data. The others are more features that I think would really bring this to life, because at the moment, even though the engine has all this semantic data being fed into it, it's not really using it. It's not quite there yet. Those things are weather display, rendering based on the time of day, and rendering based on the time of year. The last two in particular I think are quite interesting. The first is procedural cities. Imagine that, just as with the trees, you know where you have a city, but rather than streaming the data for every single building, you generate the city using some algorithm which produces a realish-looking city, not the actual city itself. It just pops up instantly, all done on the client. For certain situations this might actually be better than, or as good as, actually downloading the whole city, which is just unfeasible, especially in the near term. The last one, algorithmic botany, is the same kind of thing, but in the nature domain. Here you can design algorithms which generate tree geometry out of thin air. You don't have to stream these down to your client as 3D models; you can just say, I want some fir trees, and the engine says, okay, I know what characteristics that tree has, and builds them up. In this way you can build an entire forest without streaming down all the data for it. And it doesn't matter so much that the trees aren't exactly right. At the end of the day, a tree is a tree.
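The classic tool in algorithmic botany is the L-system (Lindenmayer system): you repeatedly rewrite a string of drawing commands, then interpret the result as turtle graphics to get a plant shape. Here is the rewriting core with a textbook bushy-plant rule set; nothing here is TerraJS-specific:

```javascript
// String-rewriting core of an L-system. Each pass replaces every
// symbol by its production; the result is later interpreted as
// turtle-graphics commands (F = draw forward, +/- = turn,
// [ ] = push/pop position). These rules are a standard textbook
// plant system, used here only as an illustration.
const rules = {
  X: 'F[+X]F[-X]+X',
  F: 'FF',
};

function expand(axiom, iterations) {
  let s = axiom;
  for (let i = 0; i < iterations; i++) {
    s = [...s].map((ch) => rules[ch] ?? ch).join('');
  }
  return s;
}
```

A few iterations plus a turtle interpreter yield a branching tree from a handful of bytes of rules, which is exactly why nothing needs to be streamed per tree.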
In a lot of scenarios you don't really care whether it's the exact tree that stands in the middle of the Amazon forest, four miles from the road. Okay. Finally, I'd like to talk a little bit about use cases. Again, this is quite an open list; I'd like to hear from you what you think the use cases are, especially if you think there is no use case and I should stop straight away. The first one is the train locations; I need to finish that at some point. Another is national parks: rendering the footpaths and the nature there, just to give people an idea of what a location looks like. Then there's data visualization. I'm not entirely sure whether a 3D landscape helps you there, but it's an idea; you'd have to see if it worked. And finally, games, like flight simulators or real-time strategy games. Cool, well, that's about it. This is TerraJS. I hope you enjoyed it. Do you have any questions? [Audience] I just want to ask: where do you get the data from? NASA, basically. That's the short answer; short question, short answer. Well, not quite, so here's the longer answer. You get some of the data from NASA. There have been a couple of missions, one doing satellite imagery, one doing the land, sorry, the elevation data. That comes from a data set called SRTM, which actually flew about ten years ago; there's been better data collection since then. And then people have taken this data and processed it to work out the land cover, so the land cover is a derived data set from the other ones. Also, when I was building this, I was trying quite hard not to take data which was restricted, so all the data I used to generate this is out there in the public domain and can be used for commercial work. Thanks. Is there a question at the front right here? [Audience] Actually, my question was exactly the same, about the data. So just thank you for a great talk. Thank you.
Yeah, what I was trying to focus on with the data was having something to display. You could plug any data into this; it's just JPEGs and that sort of thing. To me, building the actual rendering engine is the more interesting part, and the data is just a necessary evil. Quite a big necessary evil, I can tell you. So in the future I'd hope that some people who prefer doing that kind of back-end work, if they wanted to collaborate on this, would work on that side of things. Have we got a final question? Anywhere? I can't see any. No? Brilliant. Thank you very much for your time. Thank you very much.