Welcome everyone. We'll have a talk by Roberto De Ioris. He will talk about Python using modern OpenGL, and we'll have about 15 minutes for questions at the end. So please welcome Roberto.

So, hi everyone, I'm Roberto. Some of you may know me for the uWSGI project; I'm the lead developer of it and of some other open source projects. While I'm heavily involved in network programming, my true passion is game development. How many of you are into game development? How many of you want to become game developers? Oh, yeah. During my programming life I wrote about 12 game engines in various languages and two full games, and since 2014 I'm a teacher at AIV, the Italian videogame academy, where I teach programming and computer graphics. Today, I would like to show you how to use OpenGL for making your games in Python.

First of all, why do you need OpenGL, or DirectX, Metal, Vulkan, for writing your games? Those are APIs for your graphics card. Hardware vendors invest millions or billions of dollars in their technologies, especially now that GPUs are the buzzword of the hardware world. Every vendor has its own internal way of programming their hardware, so we need an abstraction from the driver, from the way we speak with the graphics hardware. OpenGL, DirectX, Metal and Vulkan are all libraries allowing us to interface with the driver of the graphics card, and with the graphics card itself. DirectX, Metal and Vulkan are all equivalent to OpenGL; they do almost the same thing. DirectX is the Microsoft Windows counterpart of OpenGL. Metal is the Apple one; it's pretty new, it's from 2014 or 2015, I don't remember. And Vulkan is the newcomer in the API world. It should be an evolution of OpenGL. We will talk about Vulkan in the last part of the talk.

Why choose OpenGL? It is an industry standard, drafted in 1991-1992 by Silicon Graphics. I hope you know them; they are now called SGI. OpenGL is now controlled by a non-profit organization, the Khronos Group, and they are at version 4.5 of the standard. There is a lot of documentation, because being an industry standard, this is something you can learn at university. And most important, portability: basically, OpenGL is available on all of the operating systems out there, from Android to Linux, macOS, Windows, and so on.

I would like to start with a disclaimer. Those libraries, those APIs, are heavily C-oriented, in the sense of the C language, and performance and compatibility are the main objectives, so do not expect a very Pythonic way of programming.

What do we need to do to draw on the screen of our computer? This is the naive way. First of all, we have to ask our operating system for something drawable. It could be a window or the whole screen. Then we have to agree with the graphics card on how to represent a pixel. We could use 3 bytes per pixel, holding 8 bits for each color component, RGB; or RGBA, if you want some kind of alpha for transparency or for some custom data. We could go black and white, so we could use a single bit to represent a pixel: black for 0, white for 1. Once we agree with the graphics card on how to represent a pixel, we have to allocate a memory chunk for our canvas, something we can draw on. It could be an array of bytes with the size of our window or our screen, 640 × 480 for a standard old-style resolution, multiplied by three, because we need 3 bytes per pixel if we choose the RGB pixel format.
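As a rough sketch of that allocation in Python (NumPy is used here purely for convenience; the numbers are just the example above):

```python
import numpy as np

WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 3  # RGB, 8 bits per color component

# Our "canvas": one byte per color component for every pixel on screen.
framebuffer = np.zeros((HEIGHT, WIDTH, BYTES_PER_PIXEL), dtype=np.uint8)

# Writing a pixel is just writing three bytes: make the pixel at (10, 20) red.
framebuffer[20, 10] = (255, 0, 0)

print(framebuffer.nbytes)  # 640 * 480 * 3 = 921600 bytes for a single frame
```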
Once we have this chunk of memory, we can start to write pixel data, which is basically the colors we want to show on the screen, into the memory we just allocated. Finally, we have to transfer the whole chunk of memory to the graphics card. Unfortunately, the naive way sucks. It is really slow, and we need a lot of memory. Think about what that means nowadays: we have full HD games, so we need 1920 × 1080 pixels × 4 RGBA bytes, and a modern player expects a game running at 60 frames per second, so you have to multiply that big value by 60, every second. Imagine the load on the hardware bus.

But we have had games for 40 years. How did the old programmers solve these issues? First of all, we were in the 2D era; there was no third dimension. The first trick was using tile maps: we divided the whole game world into blocks and built a big grid of our game, something like a puzzle. We had dedicated processors, hardware dedicated to moving things on the screen. We had very few colors: if I remember correctly, the NES, the Nintendo, has a maximum of 16 colors for the whole screen. Very low resolution, in the order of 320 × 240, or even lower. There were hacks all over the place, and forget about real-time 3D.

This is Super Mario for the NES. You should check how you can see a repeating pattern in the blocks. I don't know if some of you have played this next game; it's pretty modern, it's for the Nintendo Wii U. Basically it is a game allowing you to build your own Super Mario, dragging blocks on the screen like we did in the old days. So you have your pieces and you place them on the grid, on the map.

This approach was good for more than 10 years. Then, at the start of the 90s, thanks to Carmack, Romero and those funny and powerful developers, people started investigating 3D support. The first approach to 3D was the raycasting technique. It is really simple and really fun. Basically they divided the whole game world into vertical slices, one slice for each column of pixels. So if you have a horizontal resolution of 200 pixels, you get 200 slices. Then from the point of view of the player, from the virtual camera, they shoot a ray, and the longer the ray is, the shorter the drawn line will be. Check the figure on the right; this is what you can get with this technique. Unfortunately, raycasting is not true 3D. We have basically no way to put walls on a higher plane or to have thinner walls. What you can see on the right is actually what you get in the final image: all the game is divided into vertical slices, and the wall on the left is nearer to the viewer, so it's drawn taller.

20 years ago, the consumer world started getting hardware acceleration for gaming, thanks to the 3dfx Voodoo. We had the first integration libraries, Glide and MiniGL. They were sort of embryonic OpenGL implementations, allowing developers to use hardware acceleration for real-time 3D. In 1998, we got Unreal from Epic Games, and we started getting real 3D: we can move on the y-axis, z-axis, x-axis freely in our game. Nowadays, we have really powerful hardware. Most of the research in the hardware area is done on GPUs, on our graphics cards, so we have a lot of hardware power. We are basically in a duopoly between NVIDIA and AMD. The most important thing is that nowadays our graphics cards are programmable: we can choose which kind of code they will run. This is Babbo, a game from 2015. If you haven't played it and you don't know it is a game, you can hardly tell this is not a photo but a real-time game. This is for PlayStation 4.
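As an aside, the core of that raycasting idea fits in a few lines. Here is a purely illustrative Python sketch (the map, the angles and all the names here are assumptions, not the code of any real engine):

```python
import math

def wall_slice_heights(player_x, player_y, player_angle,
                       screen_w=200, screen_h=100, fov=math.radians(60),
                       is_wall=lambda x, y: x >= 10.0):
    """For each screen column, cast a ray and return the height of its wall slice."""
    heights = []
    for column in range(screen_w):
        # One ray per vertical slice of the screen.
        angle = player_angle - fov / 2 + fov * column / screen_w
        distance = 0.0
        while distance < 64.0 and not is_wall(player_x + math.cos(angle) * distance,
                                              player_y + math.sin(angle) * distance):
            distance += 0.1  # march the ray forward until it hits a wall
        # The longer the ray, the shorter the drawn line.
        heights.append(min(screen_h, int(screen_h / max(distance, 0.1))))
    return heights

print(wall_slice_heights(0.0, 0.0, 0.0)[:5])
```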
Trust me, PlayStation 4 is not the most powerful hardware you can find around. Maybe it is near to an entry-level PC you can buy for a few hundred dollars. Even if it's pretty cheap hardware, it can run something like that.

What is 3D graphics? How can we accomplish this? From a mathematical point of view, 3D graphics is pretty near to industrial design. Basically, we define each shape of the world in the form of polygons, generally triangles. Each triangle is defined by three vertices. Once you have defined those shapes, you have to fake perspective by applying a projection. It's very simple math; after all, you are fooling your brain, not your eyes. Once you have built the geometry of your game, you have to rasterize the polygons, taking lighting and textures into account. Rasterization could be unknown to you, but it's a really simple procedure. You have a triangle made of three vertices; rasterization is the process of dividing the triangle into horizontal slices and filling them pixel by pixel.

This is Leon from Resident Evil 4 by Capcom. On the left you see what the 3D artist will give you: a bunch of triangles composed of vertices. Then you rasterize them, the image in the center. Then you apply a texture on it. Don't be worried: rasterization does not end by itself in that kind of image. The central image gives you this sense of depth because we have taken lighting into account. The final step is texturing. What is a texture? It's basically a 2D image that we wrap around our model; imagine wrapping paper around a box. This is another example, this one from Ubisoft. On the left we have the geometry, on the right the final result.

Time to see some code. Let's do this with Python. First of all, for forty years the logic of a game engine has been basically the same. We have a central loop; imagine it as a while True, or a while the window is open. In each cycle you clear the screen, manage the input, update your game logic (this object moves there), redraw everything, and back to step one. The first thing we have to do is ask our operating system for something drawable. We have a bunch of solutions available, like GLUT, GLFW, Pyglet. At AIV we have developed a bunch of abstractions to allow students to start using OpenGL as soon as possible. These libraries are open source; I will give you the GitHub account at the end of the talk.

With the Window class we ask the operating system for a drawable context. Then we enter our game loop; it will run until the window is open. And we ask the graphics card to redraw everything it has in its color buffer. The color buffer is basically the representation of each pixel on your screen. Prepare your eyes for some noise. Basically the graphics card is drawing what it finds in its memory, but we haven't put anything in its memory, so that memory is full of garbage. This could even be a security problem, if you think about it. Now that we have a graphics context, we can start talking with our graphics card. The first thing we want to do is clear our color buffer. As I said earlier, the naive approach of keeping a local representation of the graphics card memory is really slow. One of the first things you have to deal with when programming a graphics card is the possibility to fast-clear what you see on the screen, and this is one of the first functions exposed by OpenGL. To use OpenGL in Python, you have various solutions: you have Pyglet, PyGame, PyOpenGL.
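Here is a minimal sketch of that loop, using the glfw package as a stand-in for the AIV Window class (the window title and size are made up; any windowing library that gives you an OpenGL context works the same way):

```python
import glfw
from OpenGL.GL import glClear, glClearColor, GL_COLOR_BUFFER_BIT

glfw.init()
window = glfw.create_window(640, 480, "EuroPython demo", None, None)
glfw.make_context_current(window)  # the drawable OpenGL context

while not glfw.window_should_close(window):   # the central game loop
    glClearColor(1.0, 0.0, 0.0, 1.0)          # normalized colors: this is red
    glClear(GL_COLOR_BUFFER_BIT)              # fast-clear the color buffer
    # ... manage input and update the game logic here ...
    glfw.swap_buffers(window)                 # show what we have drawn
    glfw.poll_events()

glfw.terminate()
```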
Personally, I use PyOpenGL, because it does not offer a Python abstraction over OpenGL, so it is a good way to learn how OpenGL works. You don't need to compile anything: PyOpenGL uses ctypes, and NumPy for all the array and byte-array representations. Let's change our code. First of all, we import all of the OpenGL functions. OpenGL is composed of C functions prefixed with the gl string, and there are no structures; they only work on primitive data, float, integer, nothing more. So it is very easy to use ctypes for implementing OpenGL bindings. Now, at every refresh of our screen, at every cycle of our game engine, we tell the graphics card: please clear the color buffer, using the glClear command. Now we have a black screen, because at every redraw of the screen the glClear command is filling our color buffer with zeros.

We can tell OpenGL to clear the color buffer with a specific color. Here we fill the red channel with one and the green and blue channels with zero, so this is red. The last item is the alpha channel; we leave it at one, so it's fully opaque. Take into account that while in HTML you may be used to colors going from 0 to 255, in OpenGL, and generally in 3D graphics, we use normalized values, so colors go from zero to one. So 128 becomes 0.5. And we have our red clearing.

Now, we want to start drawing something. Before drawing something with OpenGL, you need an OpenGL context, and we already got it using the Window class. We need a bit of computer graphics theory, nothing extreme, and we need to know GLSL. Unfortunately, the last part is the hardest one. GLSL is the programming language we use to program our graphics card. It is a pseudo-C, a dialect of C; it is not very hard by itself, but it requires you to know what to do with your graphics card. Once we have our tools, we have to agree on conventions. There are a lot of conventions in the 3D world, in the computer graphics world, in industrial design, and so on. The first convention in OpenGL is that everything is right-handed. Right-handed means the y-axis goes up, the x-axis goes right, and the z-axis goes toward the viewer; so z = 1 means toward me. 0, 0, 0 is the center of the world. And in OpenGL we use column-major matrices. Do not worry if you do not know or do not remember matrices: they are pretty easy, and we will cover them soon.

Why is this talk called modern OpenGL? Because until 7, 8 years ago, we were still talking about the fixed pipeline. You continuously told the graphics card: please draw this vertex at this position, draw another vertex at this position, draw a triangle, and so on. There was a continuous transfer of data between your application and the graphics card. A lot of computer graphics algorithms were hard-coded into the graphics card, so you could not implement new algorithms. Think about the first PlayStation, the PSX: you were able to use a single way of implementing lights. But it was really easy to start with the old OpenGL: at the first tutorial, you were already able to draw a cube. Now, after three tutorials, you can barely draw a triangle. So what is the difference? What is modern OpenGL? First of all, graphics cards became really powerful, with a lot of dedicated memory. You do not continuously send them vertices; you send a big bunch of vertices at the beginning of your game and then tell the graphics card: do this with this buffer of vertices. The only algorithm you will find hard-coded into the card is rasterization via interpolation, nothing more. All the rest you have to implement with GLSL. As the hardware became programmable, we could start talking about GPGPU.
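A quick aside on the normalized colors mentioned a moment ago: the conversion from 0-255 components is just a division (a trivial sketch; the function name is made up):

```python
def to_normalized(r, g, b, a=255):
    """Convert HTML-style 0-255 components to OpenGL's 0.0-1.0 range."""
    return (r / 255.0, g / 255.0, b / 255.0, a / 255.0)

print(to_normalized(128, 0, 0))   # (0.50..., 0.0, 0.0, 1.0): half-bright red
```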
GPUs today really are general-purpose processors. Unfortunately, as I already said, modern OpenGL has a really high learning curve. The showstopper for newcomers is GLSL. It stands for OpenGL Shading Language. It is not hard by itself, but you have to know how to use it.

So, to draw something with modern OpenGL, you need to do these steps. First of all, define a vertex array object; think about it as a way for shaders to access streams of values. Remember, shaders are code that is run by your graphics card. Once we define this vertex array object, we need to start uploading data to our graphics card. Data could be vertices, could be textures, could be normals: whatever is required by your object to be drawn. Then we need to write our shaders and upload them into the graphics card; shaders are compiled by your graphics driver. Then we issue a draw call: we tell the graphics card to start executing its shaders. The draw call is issued at every game cycle.

Once we issue the draw call, the vertex shader, the code written for the vertices, is executed for each vertex. If our mesh, our model, is composed of 1000 vertices, the graphics card will call the vertex shader 1000 times. Every time we get three vertices, we have a triangle, and so the graphics card can start the rasterization. The rasterization will generate a lot of pixels, and for each pixel it will call the fragment shader, another shader, so we will run a bunch of code for every single pixel. This is a huge amount of load for the graphics card. Why does it work well? Because GPUs are engineered to be extremely multi-core. There are GPUs that have 500 cores and even more, and all of these steps are heavily parallelized.

First step: we want to create the bare vertex array object and bind it. Binding is the way for OpenGL to tell the graphics card: from now on, this is the value you have to use. You can generate a lot of vertex array objects in your graphics card memory, but we have to continuously tell the graphics card which one to use, and you do this with the binding. Then we create a vertex buffer object for our vertices, we upload the vertices to the GPU, and we tell the vertex array that its first item will be the just-uploaded VBO. Defining a triangle, I suppose, is a simple task. A triangle is composed of three vertices, which can be represented as three vectors: Vector3 (X, Y and Z) or Vector2, depending on whether you are in 3D or 2D. That is all the data you need to represent a triangle. Once we have uploaded our data to the graphics card, we can start implementing shaders. We write them in this pseudo-C, we compile them, and we upload them to the graphics card.

Let's start by drawing a simple triangle. Nothing new here: we set the clearing color to red, as before. Then we ask the graphics card to allocate one vertex array. The 1 here tells the API to allocate one of them. This is clearly a hardware engineer's choice; no novice would design an API this way. Basically, it is an optimization allowing you to generate a bunch of vertex arrays in a single call to the graphics card. Once we create the vertex array, we bind it, so we tell the graphics card: from now on, everything we map to a vertex array refers to this vertex array. Then, in the same way, we generate a buffer for holding our vertices. We bind the buffer, so everything we write into a buffer will refer to this buffer. We build, using NumPy, an array of Vector2: we are drawing a two-dimensional triangle. These are three vertices.
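That array of three two-dimensional vertices might look like this (a sketch; the coordinates below are assumptions matching a full-screen triangle):

```python
import numpy as np

# Three vec2 vertices in normalized device coordinates. The dtype must be
# float32, because the shader expects 32-bit floats, not Python doubles.
vertices = np.array([-1.0, -1.0,
                      1.0, -1.0,
                      0.0,  1.0], dtype=np.float32)
```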
With this function, glBufferData, we upload those data to the graphics card. With this code, we map the buffer to index 0 of the vertex array, so the shader will get access to this buffer. Then we start writing our shader. This is the vertex shader, the code called for every single vertex. This code is expected to set gl_Position, a vector 4. In 3D graphics programming we use vector 4 instead of vector 3; this is a mathematical trick allowing easy combination with four-by-four matrices. We will see that later during the talk. In this code, we inform the shader that at location 0, the first index of the vertex array, it will get vector 2 structures, so basically it will get two floats for each vertex. Then in our main function, we set gl_Position, transforming the vector 2 that is our vertex into a vector 4, because gl_Position expects a vector 4: we add the z component, and we set w, the fourth element of the vector 4, to 1 by convention. If you're asking yourself what w is: by convention, when w is 1, the vector refers to a position; when it is 0, it refers to a direction. In this case, our vertex is a position in space, so we set it to 1. This is the vertex shader.

And this is the fragment shader. Here we decide which color our pixel will have. In this case, we set the color to magenta, I suppose, some kind of pink. With this code, we compile the shaders and link them into a program. We bind the current program, so the graphics card will know which code to run at every redraw of the screen. And finally, we issue our draw call. We are asking our graphics card to draw triangles, so every three vertices, it will start the rasterization. Okay: a pinkish triangle over a red background.

Our graphics card has basically a single algorithm encoded in its GPU, and it is the interpolation used for rasterization. Basically, everything that goes out of the vertex shader is interpolated. So if we tell the shader that, in addition to gl_Position, we spit out another value, that value will be interpolated too. In this case, I'm using the triangle vertices as colors. So if a vertex of the triangle is at position 1, 0, maybe it will be mapped to red. I generate this value, and I tell the fragment shader to use that value as the color. The result is that from one vertex to another, we get a blending between colors, automatically done by the graphics card.

Why is this triangle drawn that way? Because when you start a context in OpenGL, you get a screen two units large. Do not worry about how big a unit is in OpenGL: from a mathematical point of view, it's not relevant. You choose what a unit is. It could be a meter, a centimeter, whatever you want; a kilometer, if you're developing a solar system. But we start with a width from minus 1 to 1, a height from minus 1 to 1, and a depth from 1 to minus 1 (it is inverted). So we are basically inside a cube. In fact, our triangle fills the whole screen, because its vertices go from 1 to minus 1 and so on.

Now we can start toying, playing with our shader. We can make the triangle littler: I divide its dimensions by two. It's the graphics card that is dividing the vertices; the original data I sent in is still the same. We could move it horizontally: we add a constant to the x component, and the triangle moves to the right. So with these two examples, you have learned two things: you can scale your geometry using a multiplication or a division, and you can move your geometry using a simple addition or subtraction.
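Putting all of those steps together, here is a compact, hedged sketch of the whole triangle program just described (GLFW again stands in for the AIV Window class; the shader sources follow the talk, but details like the GLSL version are assumptions):

```python
import glfw
import numpy as np
from OpenGL.GL import *
from OpenGL.GL.shaders import compileProgram, compileShader

VERTEX_SHADER = """
#version 330 core
layout(location = 0) in vec2 vertex;   // index 0 of the vertex array
void main() {
    // vec2 -> vec4: add a z component, and w = 1 because this is a position
    gl_Position = vec4(vertex, 0.0, 1.0);
}
"""

FRAGMENT_SHADER = """
#version 330 core
out vec4 color;
void main() {
    color = vec4(1.0, 0.0, 1.0, 1.0);  // magenta, some kind of pink
}
"""

glfw.init()
glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3)
glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3)
glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)
window = glfw.create_window(640, 480, "triangle", None, None)
glfw.make_context_current(window)

vao = glGenVertexArrays(1)      # the 1 means: allocate one of them
glBindVertexArray(vao)          # from now on, use this vertex array

vbo = glGenBuffers(1)
glBindBuffer(GL_ARRAY_BUFFER, vbo)
vertices = np.array([-1.0, -1.0, 1.0, -1.0, 0.0, 1.0], dtype=np.float32)
glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_STATIC_DRAW)

# Map the buffer to index 0 of the vertex array: two floats per vertex.
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, None)
glEnableVertexAttribArray(0)

program = compileProgram(compileShader(VERTEX_SHADER, GL_VERTEX_SHADER),
                         compileShader(FRAGMENT_SHADER, GL_FRAGMENT_SHADER))
glUseProgram(program)

while not glfw.window_should_close(window):
    glClearColor(1.0, 0.0, 0.0, 1.0)
    glClear(GL_COLOR_BUFFER_BIT)
    glDrawArrays(GL_TRIANGLES, 0, 3)   # the draw call: three vertices
    glfw.swap_buffers(window)
    glfw.poll_events()

glfw.terminate()
```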
Now we want to use the third dimension; we want to give depth to our models. Before going on, we have to introduce the camera paradox. While you are playing a game, there is always the concept of a camera: you are basically an operator with a camera in your hands, following the player continuously. But there is no camera in your game; it is a paradox. You do not move the camera, you move the world. Imagine you are on top of a mountain with your camera, and you want to take a shot of another mountain in front of you. If you move the camera, the photo will show the mountain moved. We do not have a camera in our game; we are the camera. So we need to move the world: if the camera moves to the left, the result is moving the world to the right, and so on. The camera does not exist; the camera is simply a math formula.

When OpenGL starts, we are in a cube of that fixed dimension. But to fake our eyes, our brain, we need to modify the vertex positions to give the idea of perspective to the viewer. Basically, we need to transform each vertex from that box to a frustum. A frustum is a pyramid without the top. It is just high school math, a comparison of similar triangles: basically, we need to project the point x, y, z onto the black line, obtaining the projected xd, yd. So to have something really 3D, we need to transform our vertices, and we want our graphics card to do the transformation. The first transformation is moving from local to world. When we first drew the triangle, it was in its local coordinates. Then we moved it to the right: those are its world coordinates. We placed an object and moved it around the world. Then we need to transform this object's position with respect to the camera: if the camera moved to the right, we need to move this object again, to the left. Finally, we need to take each of these vertices and project it using this formula. Then go again with the next vertex.

Matrices. All of those transformations in OpenGL are done with matrices. You can do transformations manually, as we did before, scaling and moving the vertices, but the best approach is using matrices. Don't worry, they are pretty easy. Basically, you build a 4 × 4 array with specific data in a specific layout; multiplying that matrix by your vector will generate another vector, transformed by the matrix. On the left, we have a translation matrix, a matrix used for moving objects around the world. A translation matrix is pretty easy: it has all ones on its diagonal, and then the X, Y, Z in the right column specify how much you have to move the object. The result will be a new vector, moved in that direction. The scale matrix: we know scaling is a simple multiplication of the vertices, and this matrix scales your models, so they will be bigger or smaller. Then we have rotation: we can rotate our model in the world. Rotation is a bit more complicated, but the rule is always the same: we have a vector 4 and we multiply it by a 4 × 4 matrix. Matrices can be combined together, and this is why they are the preferred way of making transformations in OpenGL: we can build a single 4 × 4 matrix with all the transformations we need. A single matrix can represent a rotation, a translation, and a scale at the same time.

And finally, we are able to draw something. Back to code. This time I start directly with the shader, because this is the interesting part. Uniforms are variables we can set from our game loop: we can constantly change their value without needing to re-upload or recompile the shader. We have two matrices in this example, described right after the sketch below.
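To make the matrix layout concrete, here is a NumPy sketch of a translation matrix applied to a vector 4 (illustrative only; note that when uploading to OpenGL you must respect its column-major convention, for example via the transpose flag of glUniformMatrix4fv):

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 translation matrix: ones on the diagonal, offsets in the last column."""
    return np.array([[1, 0, 0, tx],
                     [0, 1, 0, ty],
                     [0, 0, 1, tz],
                     [0, 0, 0, 1]], dtype=np.float32)

# w = 1 marks a position, so the translation applies...
print(translation(2, 0, 0) @ np.array([1, 1, 0, 1]))  # -> [3. 1. 0. 1.]
# ...while w = 0 marks a direction, which translation leaves untouched.
print(translation(2, 0, 0) @ np.array([1, 1, 0, 0]))  # -> [1. 1. 0. 0.]
```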
The first is for the world transformation, because we want to rotate our geometry; the second is a matrix representing the camera. So a matrix could represent a translation on the x-axis, and so on. We generate a color, and we generate a new vertex, multiplying the original vertex by the world matrix and by the camera matrix. The result will be something in perspective. The fragment shader is basically the same as before: we set the color to the vertex value, something pretty random.

Vertex array generation: this time we are drawing a cube, so we have six faces, and each face is composed of two triangles. So we need to upload a good amount of vertices: two triangles for the front, two triangles for the back, right, left, top, bottom, and so on. This is a cube. In our game loop, we start managing input. It is not the topic of this talk, but input is pretty easy: is this key pressed? Yes? Do something. And this is what we do: we generate a matrix. This is a simple Python class. We translate it five units away from the viewer, we rotate it by the amount the user chooses with the keyboard, and we leave a scale of 1, 1, 1, so we do not change the scale. Then we generate the camera matrix, which is a composition of our perspective matrix, a matrix representing a perspective projection, and a view matrix that moves our camera five units away from the object. In this way, we upload the two matrices to the graphics card, and we issue the draw call.

Be prepared for more noise, visual noise. This is our cube. We start rotating it, but you should not be fooled: something is wrong. What's happening? Basically, you are experiencing Z-fighting. Pixels are fighting over which is nearest to the camera. This is not good; we want the pixel nearest to the camera to be the frontmost. How can we solve this? This is something we can delegate to the graphics hardware, because in addition to the color buffer, the representation of your pixels in the graphics card memory, you have the depth buffer. It's another buffer, with the same size as your graphics context, in which you store the Z value of each pixel. The value with the lower Z will win over the value with the higher Z. To implement this, we have to enable the GL depth test, and for every draw cycle, we have to clear both the color buffer and the depth buffer. Now our cube should be perfect; I'm rotating it with my keyboard.

So, we have seen how to implement perspective and how to fix Z-fighting, but cubes are boring; we want something better. There are really a lot of file formats for 3D models. One of the oldest and easiest to use is the OBJ format. The most used in the game industry is FBX, from Autodesk. The advantage of OBJ is that it is a text file format; it is basically a list of all the vertices you want to draw. Our code is the same as the cube one, exactly the same, but we have imported our OBJ loader. This is basically a text parser that will generate a NumPy array of floats from those vertices. Here, we open the file and we upload its vertices to the graphics card. Our stormtrooper. This is a rasterized stormtrooper. There is still no lighting in place, but you can check its silhouette. You can even check the interpolation between colors; these are fake colors, so I think it is better to set them to a fixed color. A white one should do. Oops, we do not need the out color anymore. This is our white stormtrooper.
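The OBJ loader mentioned above is essentially a text parser. A very naive sketch of the vertex-collecting part (ignoring normals, UVs and face indices; real OBJ files need more care, and the function name is made up):

```python
import numpy as np

def load_obj_vertices(path):
    """Collect 'v x y z' lines of an OBJ file into a flat float32 array."""
    vertices = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):           # a vertex position line
                vertices.extend(float(c) for c in line.split()[1:4])
    return np.array(vertices, dtype=np.float32)
```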
What is lighting? Lighting means adding light management to your model. First of all, forget about accuracy: lighting in the physical world is completely different, and far too complex to implement on current computer hardware. There are algorithms that get pretty near to how lighting works in the physical world, namely ray tracing and path tracing, but they are absolutely not usable in real time. Then, we absolutely need to sacrifice indirect lighting. Indirect lighting is the bouncing of light that generates phenomena like color bleeding: this desk gets additional colors from the light reflection, and this part is a bit darker because my shirt bounces light over the desk, and so on. These effects give a lot of realism to the scene, but unfortunately they are really slow to compute.

Before checking how light is implemented in a real-time game, we need to introduce normals. In the OBJ file we showed before, in addition to the vertices, there is another bunch of vectors defining normals. For each vertex, the normal defines the direction the vertex is facing. Imagine a vertex on top of my head: its normal will point up from my head, along the y-axis.

How do we implement shading? The word shading comes from the artistic way of speaking, because we generate shades for our meshes. There are various approaches. First is flat shading, which is pretty old; you can see it in Super Mario for the Nintendo 64, where you are clearly able to see each single polygon, because every polygon has a uniform color. Then we have Gouraud shading, which is a bit more realistic; this is what is hardware-encoded in the PSX graphics chip, for example. Then we have Phong shading, which is basically the most used algorithm nowadays. The current buzzword is PBR, which stands for Physically-Based Rendering, where we take some of the physical properties of a material and reproduce them in the way we light materials. All of those algorithms (there are more, but these are the foremost used in the gaming world) are based on one simple rule, the Lambert diffuse lighting formula. Basically, Mr. Lambert says that the higher the cosine of the angle between the light direction and the vertex normal, the more lit the pixel will be.

Let's see how we can use it. In addition to the vertices, now we need to upload to the graphics card the normals of each vertex too. Our vertex shader will spit out the Lambert value. The Lambert value is a value between 0 and 1. The cosine goes from minus 1 to 1, but we are not interested in the values between minus 1 and 0, because from the light's point of view those faces are not lit. First, we place a light in our scene, at 10 on the y-axis. Then we compute the light direction. This is a simple formula: it is the normalized subtraction between the destination, which is our vertex, and the start, which is the position of the light. Then we take the normal. Remember, the normal you find in the OBJ file is not in world coordinates but in local coordinates: if we move the object, we need to move its normals too, so we multiply the normal by the world matrix. And we compute the Lambert factor. Fortunately, in linear algebra we have the dot product operation, which between two normalized vectors returns the cosine of the angle between them, so in a single shot we can compute the Lambert factor. But we do not want values lower than 0, so we use the max function to discard them. In the fragment shader, we still use the white color, but we multiply it by the Lambert factor. It is, I suppose, really simple math, but the result is pretty astonishing. This is the result.
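The same Lambert math, sketched in NumPy rather than GLSL (the light position and normal are made-up values; in the shader this would be the dot() and max() built-ins, and the direction convention here points from the vertex toward the light):

```python
import numpy as np

def lambert(vertex, normal, light_pos):
    """Diffuse factor in [0, 1]: cosine between the light direction and the normal."""
    light_dir = light_pos - vertex
    light_dir = light_dir / np.linalg.norm(light_dir)   # normalize
    # dot product of unit vectors = cosine; max() discards back-facing values
    return max(np.dot(normal, light_dir), 0.0)

normal = np.array([0.0, 1.0, 0.0])       # a vertex facing straight up
light = np.array([0.0, 10.0, 0.0])       # the light at 10 on the y-axis
print(lambert(np.array([0.0, 0.0, 0.0]), normal, light))  # 1.0: fully lit
```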
It is the same code as before, but adding the Lambert value, we get that result for our model. Obviously, we can move the light; we could place it under the feet of the stormtrooper, and you get the light going from bottom to top.

Texturing. Texturing is managed, again, by the graphics card: you upload the textures, which are images, and you tell the graphics card to place them around your model. This is the texture for our stormtrooper; it is a standard two-dimensional image. The code is again the same, but we need another bunch of data: the UV coordinates. Why are they called UV? The official answer is: because X and Y were already taken. U and V are the anchor points of each vertex: basically, they tell which pixel of the texture to attach to a specific vertex. Our vertex shader remains the same; we only pass the UV coordinates down to the fragment shader, and we stop using plain white. Instead, we use the pixel contained at the UV position of the texture. The result is the stormtrooper with its texture. You can see the black gloves, the image over the belt. We are not far from what you can find in a modern game.

Is it enough for a game? First of all, if you want to go AAA, if you want to build a game for the masses, think about it twice, because nowadays a AAA game is like a Hollywood production: you need a lot of money, a big team, and so on. If you want to go that way, you will need to start adding features to your game engine. First of all, you have to start baking lights. Computing lights at runtime, especially for non-moving objects, is a waste of resources, so you pre-compute lights and apply them as textures on your models. In the same way, you can pre-compute shadows. We haven't seen how to build shadows in OpenGL, but it's not so complex. And you can pre-compute indirect lighting: in this case, we use algorithms like ray tracing and path tracing, which require a lot of time, to build another texture containing the reflections of the objects around the scene. You can start implementing skeletal meshes, where you apply a virtual skeleton to each of your models and animate the model by moving its bones. You could use the stencil buffer, which is implemented by your graphics card and allows you to store a custom value in a buffer for each pixel; you can use it for volumetric shadows, volumetric clouds, there are a lot of uses. Instancing: instancing allows you to upload a single model to the graphics card and draw it thousands of times at virtually zero extra cost. There are a lot of things you can add to your game engine. But video games are games. This is my suggestion: they must be fun; being beautiful is optional. We have been playing games for over 40 years, and after 40 years we still enjoy playing them.

What about Vulkan? Vulkan is another standard, a new standard managed by the Khronos Group, the same group that is managing OpenGL. Vulkan is a really low-level library, a low-level API. It is so low-level that the Khronos team told developers to start building layers over Vulkan, because Vulkan is too complex. So very probably you will never start developing with Vulkan directly, but you may well continue developing with OpenGL used as a bridge to Vulkan.

This is a bunch of useful links. The last one is the repository of the AIV didactical libraries; you will find the Python code there. I strongly suggest you check the link in the center: it describes how games like Wolfenstein 3D or Doom are implemented, and it shows you the code to implement raycasting.
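Before the Q&A, one last sketch: the texture-sampling fragment shader just described might look like this (GLSL inside a Python string, as in the earlier examples; the variable names and GLSL version are assumptions):

```python
TEXTURED_FRAGMENT_SHADER = """
#version 330 core
in vec2 uv;                // UV coordinates interpolated from the vertices
in float lambert;          // diffuse factor computed in the vertex shader
uniform sampler2D tex;     // the uploaded 2D texture
out vec4 color;
void main() {
    // instead of plain white, use the texture pixel at the UV position
    color = texture(tex, uv) * lambert;
}
"""
```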
Questions? Thank you, Roberto. So if you have questions, don't hesitate. Wait for the microphone so that we hear you on the recording.

Hi, thanks for the talk. Do you know if there are implementations of the Vulkan API, like the OpenGL implementations, oriented around ctypes?

Frankly, I'm not aware of any, but I don't think there are implementations already, because Intel released their Vulkan implementation for Intel hardware only a couple of months ago, and it's still unstable and pretty much a research topic. So we still need to wait a bit before having something usable.

We've got more of a question about what it's like in the industry. How often is Python actually used for this sort of gaming and graphics work? One of the reasons why I'm asking is that I think I saw something from Unity on your desktop as well.

Well, this is a good question with a sad answer. It is very unlikely you will be able to build a whole game only with Python. But from what I can tell you, from what I've seen as a teacher at the Italian Videogame Academy, Python has great value. We teach OpenGL with Python because Python teaches programmers, students, to code well, in a way your co-workers or other people can understand. And this is a skill very few hardcore game developers have, and this is truly a problem in the industry. Approaches that are well established in other IT industries, like testing and agile methodologies, are completely misused in the gaming world. So it is very important for us to teach this one thing with Python. In addition to this, building a game is not only building the graphics engine: you will probably end up building a lot of tools for artists, and for that you can absolutely use Python. Disney is one of those companies using Python for building tools for artists. You can use Python for the networking part, not only for the multiplayer networking but also for services: you can build a web API for your game, and every game now needs a web API, and so on. Unfortunately, we cannot expect to be able to write a whole game with Python, especially because now we have things like Unity 3D and Unreal Engine, which are really, really powerful. I would not start building a game from scratch, if not for learning how to write games, when I have those powerful tools. Python is a good companion for those tools.

Any other questions? Yes. Nowadays you hear quite a lot of fuss about DirectX and OpenGL. What would you say are the pros and cons today, besides Microsoft advertisement?

Well, I'm a little bit biased, because I'm not a big fan of any Microsoft technology. Currently DirectX 12 is very near to Vulkan as an approach to programming; in fact, the vast majority of developers stay on DirectX 11, because 12 is too difficult for them. Microsoft is investing a lot in DirectX, but frankly, a comparison between the two made by me would be a bit biased. I prefer OpenGL because you write once and deploy everywhere.

Thank you. So if there are no more questions, I have one question. What does Vulkan bring in comparison to OpenGL?

Sorry?

How does Vulkan compare to OpenGL?

It's really, really low level. You have to allocate memory manually on your graphics card, and remember to free it. You upload to the graphics card a binary representation of the opcodes of the GLSL language. It is really, really low level, something more interesting for driver developers than for game developers.

For driver developers?

Driver. It's more interesting for driver developers.

Right. Does that mean that it's a bit similar to technologies like OpenCL and CUDA?

Not at that level.
Oh, well, maybe OpenCL is too high level with respect to Vulkan.

And using OpenGL to make presentations, showing things that are not games: have you seen that being done with Python?

Yes, absolutely. We work with the Monza racing team, making simulators for their drivers. We use OpenGL right there, and a lot of those parts are made with Python.

Any more questions? It seems like you have strong opinions on shading models. I was wondering if you could maybe explain which ones are used and, if they aren't used, why not?

Phong, for me. I prefer Phong shading over PBR, because PBR requires your whole graphics team to use that approach to building textures, choosing materials and so on. Personally, every time I need to interact too much with artists, something always goes wrong. With Phong shading I'm a bit more in control of the final result of the game. In addition to this, implementing Phong shading is a matter of five lines of GLSL; implementing physically-based rendering is about 200 lines of pretty unreadable code. With Phong shading, you need to implement diffuse lighting with Lambert. You need to implement ambient lighting, which is basically a cheat: ambient lighting emulates the fact that in real life nothing is completely dark, so to simulate this we basically add a bit of fake lighting to the color. You need to implement specular lighting, which is the phenomenon where, at grazing angles between you and the geometry, the light reflects perfectly toward your eyes, so you see the object shining. And then you have finished implementing the Phong shading model. With PBR, you need to introduce the Fresnel effect into your shading algorithm. Fresnel means you get more light at the edges, and that is not really easy to implement. So basically I lean towards the old-style Phong shading, because it's simple.

We'll end the session here. If you have more questions, I think you can contact Roberto directly outside. Thank you very much, and thank you Roberto.