So, okay, a few seconds. Before I start, I would like to ask: how many of you are graphics programmers? Cool. How many of you are programmers? Nice. I prepared my talk expecting there wouldn't be many graphics programmers, so I think I've met my goal: I tried to explain how our graphics pipeline works without too many graphics programming details. So if you want to hear about 3D linear math or related algorithms, there won't be slides like that.

I have a few seconds before my team member helps me with the cable. By the way, do you know 0 A.D.? Who knows 0 A.D.? Nice. It's the game; you can play it if you want. There's no signal, yeah, that's normal, I'm waiting for Stan. Actually, I have a 45-minute talk, but it won't be 45 minutes, because I planned for something like a 15-minute delay due to the strange accidents that usually happen with presentations, like cable issues. So I hope you won't be too bored.

Graphics, you mean? Images, pieces of art that we have? Actually, a lot: 0 A.D. is a 3D game, so we need 3D models, we have to cover them with textures and normal maps, and we have to render the UI, so we need art for the UI as well: buttons, checkboxes, scroll boxes, and so on. I think our compressed art data is about 2 gigabytes. It includes meshes, textures, materials; I think I can call shaders art as well, because they are something like math art. So... yeah. Ah, it's working now. That's nice.

I'm going to talk about 0 A.D. and how its graphics pipeline works. My name is Vladislav, I'm a graphics programmer at Wildfire Games. We are an open source community that has been developing the game for a long time, about 20 years, though it wasn't open source the whole time: it became open source only 10 years ago, in 2009.

So, what am I going to talk about? What 0 A.D. is at all; what we have behind the visual stuff that players see; and most of my talk is about the pipeline, so how it works. I will show you how it looks visually and how it works on the GPU side. I will talk about debugging: what we use, what I use, what the members and contributors who help us develop the game use. And a very important topic, artists, because it's really interesting, and I think artists are really important for games, because they define how the game looks for players.

So, basically, 0 A.D. is a real-time strategy. This is a typical screenshot from the game; this one is the Civic Center you start with. It covers the period from 500 B.C. up to 1 A.D. The player can control about 17 civilizations. Each civilization has different types of buildings and different types of units, and all these units are visually different.

Also, each unit and each building has different variations. What does that mean? It means that an artist can add an option in the object definition; we store those in XML files. You can switch between different textures, between different meshes, between different sounds. How does it look for a player? When he places a building, for example just a simple house, a random generator produces a seed for the new object, and by this seed the game chooses a mesh, a texture, a sound. You can even have variations of variations.

It helps artists a lot, but it's a problem for graphics programmers, because you need to render all this stuff. If you try to render a lot of objects with different textures and different meshes, for example if all objects in the visible scene are different, it's pretty hard: you need to switch textures on the GPU, you need to switch meshes on the GPU, and all of this slows down the process. Whereas if you only need to draw the same object multiple times, that's the best thing a GPU can do, because it's mostly about parallel processing.
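To make the variation mechanism concrete, here is a minimal sketch, not the engine's actual code: `Variant` and `PickVariant` are invented names, and the fields stand in for whatever an artist lists in the XML definition.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical variant entry, roughly what an artist defines in the XML.
struct Variant {
    std::string mesh;
    std::string texture;
    std::string sound;
};

// Deterministically pick a variant from the object's stored seed (assumes a
// non-empty list). Because the seed is saved per entity in the map XML, the
// same house gets the same look every time the map is reloaded.
const Variant& PickVariant(const std::vector<Variant>& variants, uint32_t seed) {
    return variants[seed % variants.size()];
}
```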
So, a player can learn about each civilization; it's absolutely historically based. We don't have any imagined civilizations, at least at the moment; they are all real, all based on real historical facts, and the player can read all of this in our game, so it's a bit educational.

Besides the main objects, 0 A.D. has flora and fauna: different animals, different trees, different grass that you need to render. The same variation behavior applies to trees and grass, for example. When you work in our editor, it's called Atlas, and you place a tree, each placement generates a seed: with each click on the terrain you place a new tree, and it looks different depending on the random generator. That also helps artists, because they don't have to switch between different models, between different meshes. You just select one biological kind of tree from a list and click on the terrain, and each click will generate, for example, a different mesh; the trees will differ in height, small or big. That helps artists a lot, but not programmers.

So, what about the technical side important for graphics? We have an entity component system. Who knows what that is? The main idea of an entity component system is that you don't use the classical approach where you have a base object, like IObject, and derive all your objects from it. The idea is that you have an entity, and you can attach any components to that entity. For example, if you want to place an entity on the terrain, you need to know a position, so you have an ICmpPosition component. If you want to render it, you add a visual actor component. If you don't need something, you just don't add it. For example, if you want to add a sound, you create a new component, like ICmpSound, if it wasn't present before, and attach it to the entity. So if you open our maps, the XML maps, you will see a list of entities. Besides that, for each entity we store the seed that was generated when we placed it, to choose the right... sorry, the slide scrolled... okay, variations.

That's what I'm talking about: you don't need to take care of different types of the same thing, the engine takes care of it instead of you. But that's the problem for programmers. Let's look at an example. You have a box, and you have different textures: a wood box, a metal box, or, for example, a water box. To render it properly on old GL, an old API, you bind the wood texture, then draw the box; you bind the metal texture, then draw (the mesh was bound before, so we don't need to rebind it); and then you bind the water texture. It's not as slow as it could be if we rebound the whole mesh, but it still slows down the rendering process. For GPUs, especially modern GPUs, it's much better to use things like texture arrays, where you bind many textures for the same object at once and choose between them in shader code. But that's more about modern APIs; I will talk about it a bit later.
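To make the box example concrete, here is a sketch of the two styles, not Pyrogenesis code: the `Box` struct is invented, mesh and shader setup are omitted, and a GL function loader is assumed to be initialized already.

```cpp
#include <vector>
#include <GL/glew.h> // any GL function loader, assumed already initialized

struct Box {
    GLuint  texture;      // per-variant 2D texture (legacy path)
    GLint   textureLayer; // layer index in the texture array (modern path)
    GLsizei indexCount;
};

// Naive legacy-GL loop: one texture bind per variant, so when every visible
// object differs we pay a state switch on each draw.
void DrawBoxesLegacy(const std::vector<Box>& boxes) {
    for (const Box& box : boxes) {
        glBindTexture(GL_TEXTURE_2D, box.texture); // wood, metal, water, ...
        glDrawElements(GL_TRIANGLES, box.indexCount, GL_UNSIGNED_SHORT, nullptr);
    }
}

// Texture-array style (GL 3+ or the EXT_texture_array extension): bind one
// GL_TEXTURE_2D_ARRAY once and only pass a layer index per draw; the shader
// samples the right layer from a sampler2DArray. A uniform update is far
// cheaper than a texture rebind.
void DrawBoxesArray(const std::vector<Box>& boxes, GLuint arrayTex, GLint layerUniform) {
    glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTex);
    for (const Box& box : boxes) {
        glUniform1i(layerUniform, box.textureLayer);
        glDrawElements(GL_TRIANGLES, box.indexCount, GL_UNSIGNED_SHORT, nullptr);
    }
}
```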
Another important thing in our game is that we have a lot of materials, and you can modify different options per material. How does it work internally? We have shaders, native GLSL or ARB shaders, and they are combined into materials. The shaders provide an API; well, it's not a real API, it's just a list of possible options that you can select in the shader XML. Artists can combine different shaders in the same material and then tweak options in the material, which are then passed into the shader code. So as an artist, you don't need to care about shaders, you just tweak values in materials.

That's our platform distribution. I think it's pretty much expected: we have a lot of Windows, a lot of Linux, and some people on Mac, for example me, just because I don't have any other laptop. The game works on all these platforms. We can also run our game on a Raspberry Pi, and we have tried to run it on ARM processors. There was a time when it ran on Android, but we haven't tried that for a long time, a few years, mostly because we don't have many Android developers who can fix things specific to Android. But we support OpenGL ES, so technically we can run on it.

That's our vendor distribution. All these statistics come from our game: when you click the Enable Feedback button, it sends information about your system. There's a description in the privacy notes, so you can see what is sent; one of the values is your GPU vendor. Usually AMD and NVIDIA don't cause many problems; AMD may be more strict about shaders, but on Windows, for example, they're mostly fine. But Intel drivers... if you find a graphics programmer and ask about Intel drivers on Windows, you will probably see an angry person. I can show an example: an Intel driver crashed our application just because we tried to multiply a matrix by a vector after some special conditions that had happened before. It just crashed the application, okay.

We still have users with really old hardware, like GL 1. Not as many as GL 3 or GL 4, but we still have them. It doesn't mean they choose to run GL 1; it means it's the maximum version their hardware supports. GL 1 usually means the fixed-function pipeline, if you remember that. But that's more about statistics.

So, basic stuff: we use our own game engine, which is also responsible for our graphics. It's called Pyrogenesis, and it's written in C++. We use different libraries, but for graphics the most important is SDL: it's responsible for creating the GL context, swapping buffers, and other OS-dependent things. We use OpenGL, basically OpenGL 1 and OpenGL 2 functions, but in some cases, when we detect that the user has a feature as an extension rather than a core function, we can try to use it.

At the moment we have three ways to render our game. The first is the fixed-function pipeline; it's the oldest way to render, where you just tell the GPU what you want to do with your data: you have such a light, such a texture, such options, please render it for me. With shaders, you write code that runs entirely on the GPU side, so you have real control and can add some smart things. So for really old hardware like GL 1 we use the fixed pipeline, and also for some really strange video cards, because not all vendors fix their old cards. For example, we have a list of specific video cards; there was some GeForce from 2006 that had broken shader behavior, so we have to switch to the fixed pipeline for that card.
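As a rough sketch of that kind of backend selection, and explicitly not the engine's actual logic: `ChooseRenderPath`, the enum, and the blacklist flag are invented for illustration, and the version check is deliberately crude.

```cpp
#include <GL/gl.h>

enum class RenderPath { FixedFunction, ShaderARB, ShaderGLSL };

// Hypothetical selection logic: prefer GLSL shaders and fall back to the
// fixed-function pipeline when the hardware only offers GL 1.x or the card
// is on a known-broken list. ARB programs would cover the in-between case
// where only the old shader extensions are available.
RenderPath ChooseRenderPath(bool cardIsBlacklisted) {
    const char* version = reinterpret_cast<const char*>(glGetString(GL_VERSION));
    // Crude major-version check for illustration: "1.x" means no GLSL.
    if (cardIsBlacklisted || !version || version[0] < '2')
        return RenderPath::FixedFunction;
    return RenderPath::ShaderGLSL;
}
```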
But actually, our plan is to remove this path entirely, because it's pretty old and only about 1% of users need it; I think we can ask them to update their computers instead of supporting our old code.

So let's talk more about the visual side, how it looks. This is a typical frame a player sees in the game, pretty nice: you have water, you have reflections. How do we usually render it? First of all, we need to enumerate all objects in the scene and cull them. Why do we need to do that? Because we don't want to send the GPU more data than we actually need. For example, if an object is invisible from every camera (I will talk a bit later about cameras), then we don't need to render it, and that saves time. Usually the player has a top-down camera and sees only about 5% of the map, so we can avoid rendering 95% of all objects in the scene.

Then we render the scene for each camera. We have four cameras at maximum graphics settings. Why? Because we need to render shadows. Who knows how shadow mapping works? To render shadows, you can't just call some magic function. What you do is this: you have a light source, usually the sun, which is a directional light, so you have a vector. You render all objects present in the visible scene into a 2D texture called a depth texture. Then, in the final pass, you use this texture to calculate whether the pixel you are currently rendering is occluded by any object. Whether it's occluded or not is computed from the depth texture, because it stores the distance between the light source and the nearest occluding object.

Then there are the reflection and refraction cameras, because a reflection has a different direction, so we need different camera angles. The shadow pass is calculated without any color; we only need to know whether an object is transparent or not. Reflection and refraction do need colors, but they differ from the main camera, because when you look at the water you usually see a different set of objects. And in the final pass, you combine all the textures rendered in the previous passes.

I also want to mention post-processing, though it's not that interesting: effects like anti-aliasing, color grading, some brightness or contrast tweaking. But it's post-processing, so it's not that important here.

How does it work in code? We use one interface, a shader program, and we have three implementations at the moment. The shader program is responsible for all the basic GL stuff, like binding textures, binding the mesh and its attributes; then each implementation, each derived class, like FFP, ARB and GLSL, has its own code depending on the GL version and how it should work.

And now, finally, more pictures; I hope this will be more interesting. That's how the frame from before looks from the side. You have the camera view, and these are all the objects in the scene; not all, but most of them. You can see which of them we can cull so we don't have to render them. After culling, we are left with these objects. You might notice that we still have this strange tile. Why? Because there is actually a small intersection here between the tile's bounding box and the camera frustum.
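The test behind this culling is usually a plane check like the sketch below; this is the general technique, assuming the frustum is stored as six inward-facing planes, not necessarily our exact code.

```cpp
#include <array>

struct Plane { float nx, ny, nz, d; }; // nx*x + ny*y + nz*z + d >= 0 means "inside"
struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

// Conservative frustum culling: if the box is fully behind any plane it is
// invisible; otherwise we keep it. Boxes that merely clip a plane are kept
// too -- which is exactly why that terrain tile at the screen edge still
// gets rendered.
bool IntersectsFrustum(const std::array<Plane, 6>& frustum, const AABB& box) {
    for (const Plane& p : frustum) {
        // Pick the box corner farthest along the plane normal (the "p-vertex").
        const float x = p.nx >= 0 ? box.maxX : box.minX;
        const float y = p.ny >= 0 ? box.maxY : box.minY;
        const float z = p.nz >= 0 ? box.maxZ : box.minZ;
        if (p.nx * x + p.ny * y + p.nz * z + p.d < 0)
            return false; // completely behind this plane
    }
    return true;
}
```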
That's okay, because it's not a performance hit; we can just render it, since it's pretty cheap compared with other objects like trees. For example, this terrain tile is about 32 by 32, around 1,000 triangles; that's roughly equal to a single tree. So rendering such a tile is cheap.

That's how it looks from the shadow side. You take the culled region, and now you have to render all objects that can cast a shadow. In the top left corner you can see the depth texture: white means the farthest point, black means the nearest point, and depending on the value (actually they're only gray values), you can check whether your object is occluded or not. That view is for debugging; but now I want to show you how a frame is built step by step, because I think it's less boring.

So, step by step. First of all, we render the shadow map. It's slowed down a lot here; in reality this takes only a few milliseconds, and you wouldn't notice anything if I played it at real speed. Then the reflection map; it's actually mirrored about the water level, so it has the same X and Z positions, but the Y position is inverted around the water level. Then we render the refraction. We fill it with red to detect leaks: if a shader picks a color from the wrong coordinates, some of you who play may notice it occasionally. Then we render the terrain, and then just iterate through all objects.

An important note: we don't render in a straightforward manner. We sort all objects by type, by material, by texture, because, as I said before, due to variations we have to minimize the number of context switches, of state switches. It's much more efficient to render things when you don't switch states frequently. That's why we group them by material, by shader, by textures. It's not so visible on screen, but they are grouped.
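A common way to implement that grouping, a general sketch rather than our exact code (the `DrawCall` struct and its ID fields are invented for illustration), is to pack the state IDs into a single sort key, with the most expensive switch in the highest bits:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawCall {
    uint16_t shaderId;   // most expensive to switch
    uint16_t materialId;
    uint16_t textureId;  // cheapest to switch
    // ... mesh, transform, etc.
};

// Pack the IDs into one integer so a single sort groups draws by shader
// first, then material, then texture.
static uint64_t SortKey(const DrawCall& d) {
    return (uint64_t(d.shaderId) << 32) | (uint64_t(d.materialId) << 16) | d.textureId;
}

void SortDraws(std::vector<DrawCall>& draws) {
    std::sort(draws.begin(), draws.end(),
              [](const DrawCall& a, const DrawCall& b) { return SortKey(a) < SortKey(b); });
}
```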
So, let's talk about debugging. We use different software. Mostly we use apitrace; it's awesome software and it supports most platforms: it works perfectly on Windows, on Mac OS, on Linux, and so on. The really good thing about apitrace is that it even supports old GL. For example, if you have an old application, an old game, you can try to trace it through apitrace. There are others, like RenderDoc, or NVIDIA Nsight, which is vendor-specific, but they have problems here: for example, RenderDoc can't run GL 2 or GL 1, it requires at least GL 3.3. That's okay for modern software, but impossible for pretty old software, though you might still debug some parts with it. So basically we use apitrace.

This is how apitrace looks, with an already prepared trace. Usually apitrace involves two steps: you record a trace, running apitrace from the command line with your application, and you get a trace file that you open with the apitrace UI, which is Qt-based. Then you can look at all the calls your application has made. It's pretty helpful for understanding what's going on, what the application actually does. Why is that helpful? Because you don't want to log all the low-level API calls inside your application yourself: it's a lot of work, it's not fast, and it might be multi-threaded, so you could spend a lot of time trying to implement that. So just use apitrace; it's a good application.

RenderDoc, I think, is better in terms of one-time usability, but it doesn't support old GL. Vendor-specific tools are good when you debug on that vendor's platform, because they have all the information and all the connection to the drivers, so they can show you additional information about your system that you wouldn't otherwise know. That can be useful when writing graphics applications.

Now the important part: artists. Why is it important to communicate with artists? Because artists create the visual style of your game. Programmers have to understand what the artist wants to present in the game, and artists have to understand the restrictions. For example, you might have a lot of units in a frame, and they are pretty small, just a few pixels. An artist could create a 2000 by 2000 texture for them, and no player would ever notice, because the units are too small. You'd have a really crazy texel density, but why would you need that? So you need to talk about these things. I have three problems I want to mention here.

First of all, transparent objects. Rendering transparent objects is a genuinely hard graphics problem. Modern hardware allows a few tricks, for example order-independent transparency, but that's another story. For a typical application that supports all systems, transparent objects are a problem because you can't just render them in arbitrary order: you need to preserve the order, because it matters. For example, if you have red, green and blue transparent objects, it's really important in which order they are rendered. Try reordering semi-transparent layers in Photoshop or GIMP and you'll see what happens.

Another problem is parallax mapping. It's a simulation of displacement. Here is a flat surface with just a normal map, and here is how it usually looks in the game. If you want to add more of a 3D effect to your surface, you use parallax mapping. What it actually does with your material is like a ray trace: you take several steps to find the nearest intersection with your height map, and then you sample the texture and normal map not at the original position but at the intersected one, and you get a correct view of your material. Do you see the difference between them? That one looks much flatter, I think. Because of the corrected position, you get a better result.

And what's the problem? We have this. That's how it looks in our game, in a different mode: you might notice that the roof isn't flat. And here's what happens when something goes wrong: the roof should look like this, but it looks like this. Why does it happen this way? And here, not really noticeable, is another artifact of the same kind. The problem is normals. When you do this tracing, you assume that the triangle the material is applied to is flat. But when the vertices of your mesh have different normals, you get an interpolated normal. So for some pixel, the normal may point up while your view direction comes in like this, and it ends up looking as if you were viewing that point from under the object; that can't happen under the original algorithm's assumptions. To fix it, you need to talk with your artist: either don't use parallax mapping at all for that kind of mesh, or, for example, subdivide the mesh into more triangles to reduce the effect.
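For reference, here is the core of the parallax ray-march as a simplified CPU-side sketch; in practice this lives in the fragment shader and works in tangent space, so names and details here are illustrative only.

```cpp
#include <functional>

struct Vec2 { float x, y; };

// Simplified steep-parallax march: step the texture coordinate along the
// view direction, lowering the "ray height" each step, until the ray dips
// below the height map; that coordinate is where texture and normal map are
// then sampled. The broken-normals artifact from the talk appears when the
// interpolated normal makes the view direction point "up" through the
// surface, violating this loop's flat-triangle assumption.
Vec2 ParallaxOffset(Vec2 uv, Vec2 viewDir, float scale,
                    const std::function<float(Vec2)>& height, int steps = 16) {
    const float stepSize = 1.0f / steps;
    const Vec2 delta{viewDir.x * scale * stepSize, viewDir.y * scale * stepSize};
    float rayHeight = 1.0f;
    for (int i = 0; i < steps && rayHeight > height(uv); ++i) {
        uv.x -= delta.x;
        uv.y -= delta.y;
        rayHeight -= stepSize;
    }
    return uv;
}
```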
And the last one: mipmaps. Who knows what a mipmap level is? Okay, let me explain. When you have an object in a scene, it can be rendered at different distances. If you render all objects with the same full-size texture, you get flickering, because the texture coordinate for a given pixel is recalculated every frame depending on the position, so the sampled texel keeps changing. To avoid that, we have a technique called mipmapping. You have a texture, the original one here, and all the smaller squares are mipmap levels: this is level zero, then one, two, three, and so on. If the object is small on screen, the GPU will choose not the original texture but a smaller level, and you won't get flickering. Why? Because each mipmap level is computed by averaging, so when you pick a color, you get a smooth transition between coordinates and a stable color.

Let's look at this example from our game. This is near distance, average, and very far. What's going on? You might notice that a normal tree, well, almost normal, starts looking strange, and here there isn't a single leaf left in the picture. Why does that happen? Because of how mipmap levels are calculated: it's just averaging, and at low mipmap levels you end up with a mostly transparent texture. To fix it, you need to talk with your artists, who generate the mipmap levels, and tweak the parameters responsible for mipmap generation. We don't have a universal solution, because each texture can behave differently in different situations; it's important to know about this case, account for it, and tweak the numbers per texture. That's all for mipmap levels.

So, that's all for now, and we are ready for questions. Yes?

[Q:] What part of the pipeline are you currently working on? Are you just fixing problems?

[A:] No, actually... The question was about what I'm working on at the moment. At the moment I'm working on the sorting I mentioned, the material sorting to minimize switches. I'm profiling different scenes and different objects and trying to minimize the time we spend on this, because at the moment we are CPU-bound, not GPU-bound. In a big scene the GPU is only about 10% loaded, while the one CPU core used for sorting is at about 100%. So that's our graphics goal: to reduce the time we spend on sorting and related work.

[Q:] I have the same question... Do you have plans for Metal?

[A:] We have thoughts about that. I think, and we think, that it might be more useful for us to use Vulkan with something like MoltenVK for Metal, because we are open source: we usually don't have a lot of hardware or a lot of graphics programmers who can spend time on exactly one API, so it's much better for us to support a small set of APIs. Okay? Thank you.