translation on the internet. It's about 3D programming on Raspberry Pi, mainly. And Falkert is talking about that because he likes to do this stuff, especially in his spare time, since he thinks his job is a bit boring. You are welcome, Falkert. Thank you. Thanks for the introduction. I'm a system architect, so it's not always boring, but it's very different from programming 3D graphics. So when I get bored with all the really abstract cloud stuff, then doing some coding very close to the hardware, to the metal, is actually a nice distraction. That's what I've been doing in the past few months: basically playing around with the Raspberry Pi and seeing what its GPU can do with OpenGL. So I wanted to give a short summary of how that works, so that you, ideally, after watching this talk, know where to start if you want to write your own OpenGL 3D software. This talk is about OpenGL ES programming. We have three parts. We first talk a bit about the OpenGL ES concepts, like how the rendering pipeline works and, in general, the shaders. Then there will be a tiny bit of mathematics, a bit of algebra, just enough to get you started so that you know where to look. And then finally, in the third part, we're going to have a look at example code; the plan is to have a little logo of the Chaos Communication Camp spinning around. A slight disclaimer: unfortunately, for various reasons, that didn't actually quite work yet. So you're not going to see any kind of smooth animations in this talk. But if you check back later, like in the next few days, I'm confident that we can make it work. And that should give you a good introduction to the whole thing. I'm going to put the slides on my website. They're probably going to be on media.ccc.de as well. OpenGL is a very complex topic and technology; there are lots of different things, so there's quite a lot that we just don't have time for in 45 minutes.
So if you're missing anything, feel free to ask me about it later. Right, let's get started. OpenGL: a brief introduction. We start with OpenGL, the Open Graphics Library. That's an API for rendering 2D and 3D computer graphics. Like all good things, it was invented in the 1990s, developed by Silicon Graphics, Inc. (SGI). Nowadays it's an open standard, maintained by the Khronos Group consortium. And it's used all over the place: computer-aided design, visualizations, desktop compositing (so all the nifty effects on your macOS or iOS machine), art, obviously, games, obviously, virtual reality, augmented reality. That's all basically using OpenGL under the hood. Of course, there are lots of abstraction layers available, stuff like Unity 3D and other engines, but it all boils down to OpenGL eventually. OpenGL ES is OpenGL for Embedded Systems. It's the more current standard. There are implementations by all major manufacturers; NVIDIA and AMD are the most important ones building these GPUs. OpenGL ES is available on all major operating systems and platforms. You get it on Linux, Mac, Windows and so on. And there are bindings for pretty much all the major programming languages: C, C++, Java, even Python and Ruby. In general, you do want to use a compiled language, though, for speed. I personally like to use Go for that, because it's in general a bit tighter and has roughly the speed of raw C, but the syntax is much more readable and maintainable. The latest version is OpenGL ES 3.2; the Raspberry Pi on Raspbian Stretch supports OpenGL ES 2.0, which is good enough. All the important concepts are already in place, so that's already very cool. And then there's one more thing, called EGL, a native platform interface, which you need in order to tell your environment, your operating system, to give you an OpenGL context. So before you can start writing or running your OpenGL code, you need to first get an OpenGL context.
This works very differently depending on whether you're on Linux or Windows, whether you want a full-screen or a windowed application, whether it's on a phone or somewhere else. In general, that's called EGL, but for different platforms there are different other libraries: WGL, CGL, GLX. Mesa is an open source implementation. So basically, if you want to program on your platform, you need to find some kind of interface that will set up OpenGL for you, and that code will be platform-specific. Once you have the context, the code is pretty much portable between different platforms, which is a nice thing about OpenGL: it abstracts away from that. Right, so just very briefly: OpenGL is a rasterizer. That means the goal is to describe some kind of virtual scene. The virtual scene will have geometry, kind of like pyramids and cubes and stuff; or if you want to have a bunny, then you need to describe that bunny as triangles. And each piece of the geometry will also have some kind of color or material associated with it. The geometry we represent as triangles, and the triangles we represent as vertices. So basically, in this example, that cube we can describe with these eight corners; each corner is a vertex, and these corners describe the edges between them, and the edges describe the surfaces of the cube. And then, on the right-hand side, if we want to have a nice logo on there, this would be a texture. Textures come from image data, so bitmap data, like a PNG, for example. The way the rasterizer works is that we describe our geometry in world coordinates; so this is all still in your code, in three-dimensional space. The rasterizer then takes all that geometry and projects it onto a projection plane, a two-dimensional surface, which is basically your screen. After that, we have two-dimensional fragments. The vertices are called fragments after they have been projected.
And these fragments then get mapped with textures or colors, depending on what you tell it to do. So basically, those are the three stages: you describe your geometry in three-dimensional space, that gets projected down onto a surface, and then for each of the triangles, which are now flat on that surface, we figure out how we want to color them. And if we have a triangle on that surface with three colors, then the rasterizer will interpolate the values between those colors, so that in the end every pixel on your screen has a desired color. That's the very high-level description of the rasterizer. Let's look at the architecture of OpenGL ES specifically. On the slides, on the left-hand side, you can see your OpenGL program. That's the code that you write, in C++ or Go, and that's running on your central processing unit, the CPU. On the right-hand side, in contrast, is the GPU, your graphics card, which is much faster. And on there, we have two things that run, code that you write yourself: a vertex shader and a fragment shader. Those are written in the GLSL shading language, and those basically do the projection and the texture mapping that we saw before. So your OpenGL program will upload your geometry data to the GPU, into vertex memory. It will then also upload your texture data, so your bitmaps and whatnot, into texture memory. You want to do that as little as possible, because the bridge between CPU and GPU is slow and expensive. But once the data is on the GPU, your vertex shader and fragment shader can access it very quickly. So your OpenGL program uploads the vertex data, uploads the texture data, and specifies the vertex shader and fragment shader. And then, when you render, the vertex shader is responsible for taking all your geometry and transforming it, moving it around, scaling it, and ultimately projecting it.
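The interpolation step just described can be sketched in plain Go. This is only an illustration of what the rasterizer does conceptually; the `Color` type, the `interpolate` function and the barycentric weights are made up for this sketch and are not part of any OpenGL API.

```go
package main

import "fmt"

// Color is an RGB triple, matching the per-vertex colors described above.
type Color struct{ R, G, B float64 }

// interpolate blends three vertex colors using barycentric weights
// (w0+w1+w2 == 1), which is essentially what the rasterizer does for
// every pixel inside a projected triangle.
func interpolate(c0, c1, c2 Color, w0, w1, w2 float64) Color {
	return Color{
		R: w0*c0.R + w1*c1.R + w2*c2.R,
		G: w0*c0.G + w1*c1.G + w2*c2.G,
		B: w0*c0.B + w1*c1.B + w2*c2.B,
	}
}

func main() {
	red := Color{1, 0, 0}
	green := Color{0, 1, 0}
	blue := Color{0, 0, 1}
	// A fragment exactly in the middle of the triangle gets equal weights,
	// so each color channel contributes one third.
	center := interpolate(red, green, blue, 1.0/3, 1.0/3, 1.0/3)
	fmt.Printf("%.2f %.2f %.2f\n", center.R, center.G, center.B)
}
```

On real hardware this happens per pixel, in parallel, between the vertex and fragment shader stages.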
It then gets passed into the rasterizer stage, which does the actual projection, figuring out which of those vertices end up where on the 2D surface, throwing away a few vertices as well; there are lots of rules. But ultimately, you end up with a bunch of fragments that then pass into your fragment shader. And the fragment shader takes those fragments, also accesses the textures in texture memory, and then writes color values into the fragment pipeline. Afterwards, the stuff that you draw pops out in your frame buffer, shows up on your screen, and you see what you've done. So much for the architecture. Let's talk a bit about the shader language in which you write the vertex and fragment shader. It's a domain-specific language with a C-like syntax, but it's manageable: only the very basic features of C. It comes with built-in vector types and matrix types. We're going to talk about that in part two in a bit more detail, but basically you have types for two-dimensional, three-dimensional, and four-dimensional vectors, and for matrices, built in. And you also have a bunch of mathematical functions, like trigonometric functions (sine and cosine and such), square root, exponent, log, and vector multiplication, which we're also going to mention in a bit more detail. You don't have any recursion. You can write functions, but a function is not allowed to call itself. And there's only limited support for loops: you can have loops, but you need to know at compile time how often they're going to run. This is because the rendering pipeline is very tight for speed, and the GPU needs to know how long things will take in order to set them up correctly. So we'll have a quick look at how the vertex shader and fragment shader look in detail, because that's really the core of the whole thing. All the exciting stuff happens in the shaders. So we're going to look at what inputs and outputs each of those shaders have.
Starting with the vertex shader: as mentioned, the vertex shader runs once for each vertex that you use to describe your geometry. So if you have a triangle, the vertex shader will run for each of the three points. If that triangle is part of a cube, it will be six faces times two triangles times three vertices, so 36 vertices altogether. And the more complex the scene, the more vertices you have. Each of those vertices is processed by the vertex shader. The main inputs for the vertex shader are called attributes, vertex attributes. Those are different for each vertex. You can define them yourself, so you can specify exactly which ones you need. Typically, this is a position, but commonly also a texture coordinate associated with the vertex, or we can directly give a color that we want to color that vertex in later, or whatever you really need. So those are under your control; you input them as vertex attributes. There's also the option to give global parameters that are the same for each vertex the vertex shader processes. Those are called uniforms, and that's typically, for example, your projection matrix, or the way that you define exactly from which point you're looking at your scene and how it gets projected onto the surface. The vertex shader is responsible for passing on any data that might be needed by the fragment shader. This you do with varying variables, which typically you would take from the attributes as the input and then basically pass through, which you have to do explicitly; then they will be available to the fragment shader. The most important role of the vertex shader is of course to specify the final position of each vertex that it processes, and you do that by setting the gl_Position magic variable, a four-dimensional vector.
Typically, you multiply it by some kind of matrix to get the final position in the scene. Moving on to the fragment shader: again, this runs for every fragment, which is a rasterized, projected-down vertex on the surface. So each vertex gets projected down by the rasterizer and then becomes a fragment, and the fragment shader runs for each of those fragments. The fragment shader's task is to set the final color of each of those fragments. We again have per-fragment input as described before; this is what the vertex shader is supposed to supply to the fragment shader in the varying variables. Here we see we have the texture coordinate and color as varying variables. And then, most importantly, to get to the texture data: usually, if you want to color one of your fragments, you want to use some kind of bitmap in order to draw a logo or something, and this you do by using a sampler2D uniform variable. And then finally, as I said, the task of the fragment shader is to specify the final color of each fragment, and this you do by setting the gl_FragColor special variable. So those are the basics; quite confusing at the moment, I'm sure, but it will all become a bit clearer once we look at the actual code. But before we can do that, it's getting slightly more confusing, I'm afraid: we have to look a little bit at what kind of mathematics is required to model what we're doing. Linear algebra is the mathematical branch that we're dealing with here, and in my opinion it's easier to learn algebra by doing programming than to learn programming by doing algebra, so in general it's good if you just play around with it a bit.
We want to describe our scene as three-dimensional objects. Those objects we describe with three coordinates, x, y, z, which means we have three axes. We use a right-handed coordinate system. Basically, you do this: your thumb is the x-axis, it points to your right; the index finger points up, that's the y-axis; and then your middle finger points toward yourself, that's the z-axis. That's how you remember how those relate. There are also left-handed coordinate systems, which work differently, so that's a good way to remember it. You need two basic mathematical constructs: vectors and matrices. Vectors describe coordinates in space or on a surface, and matrices describe transformations. I'm not going to go too much into detail; I'm basically just giving you an overview of what you need to know and study, maybe in uni, or maybe look online. Vectors can be used to describe positions, three-dimensional with x, y, z, or two-dimensional with x and y if it's, say, a texture coordinate on a surface. You can also use vectors to describe colors; in this case you have red, green, blue and alpha components.
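The two vector operations discussed next, the dot product and the cross product, are easy to play with in plain Go. The `Vec3` type and function names here are a dependency-free sketch of what GLSL and libraries like mgl32 give you built in:

```go
package main

import (
	"fmt"
	"math"
)

// Vec3 is a three-dimensional vector, as used for positions and axes.
type Vec3 struct{ X, Y, Z float64 }

// Dot is the scalar (dot) product: for unit vectors it equals the
// cosine of the angle between them.
func Dot(a, b Vec3) float64 { return a.X*b.X + a.Y*b.Y + a.Z*b.Z }

// Length is the square root of the vector's dot product with itself.
func Length(a Vec3) float64 { return math.Sqrt(Dot(a, a)) }

// Cross yields a vector orthogonal to both inputs.
func Cross(a, b Vec3) Vec3 {
	return Vec3{
		X: a.Y*b.Z - a.Z*b.Y,
		Y: a.Z*b.X - a.X*b.Z,
		Z: a.X*b.Y - a.Y*b.X,
	}
}

func main() {
	x := Vec3{1, 0, 0}
	y := Vec3{0, 1, 0}
	fmt.Println(Dot(x, y))   // cos(90°) = 0
	fmt.Println(Cross(x, y)) // the z-axis: {0 0 1}
}
```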
There are two operations that are important for vectors. There's the scalar multiplication, also called the dot product: you multiply a vector with a vector and get a real number. If you multiply a vector with itself, you get the square of the length of the vector, which can come in very handy for doing calculations. If you multiply two unit vectors with each other, you get the cosine of the angle between them; so if you multiply the x and the y axis vectors, you get the cosine of 90 degrees, which is zero. Also very useful. And then there's the cross product, a vector multiplication: multiplying two vectors with each other yields another vector, and that vector will be orthogonal to both. So if you multiply the x-axis vector with the y-axis vector, you get the z-axis, which is very useful if you do lighting and shading and things like that. So much for the vectors. Matrices describe transformations in three-dimensional space. They have four-by-four dimensions, so 16 cells, because we work in homogeneous coordinates. Fortunately, you don't really need to understand the matrices in detail, because OpenGL has a lot of helper functions to produce them. What are the transformations that we can do? We can have a translation matrix, basically moving a vector or a set of vectors in space, so left, up, to the back, or whatever. We can rotate a vector around another vector by a given angle, with a rotation matrix. And we can scale vectors or a set of vectors, so make them taller or wider, or all of that together. And there's also the projection matrix, which takes a vector from R3 and puts it into two-dimensional space. There are multiple projection matrices; the most popular is the perspective projection matrix, which gives a vanishing-point perspective. It basically means that things that are closer up appear bigger than things that are further away. But ultimately those matrices will make a vector become two-dimensional. There are two matrix operations that are important to us. If you multiply a vector by a matrix, you get a new vector that has the operation represented by the matrix applied to it. And you can multiply matrices with each other, which basically chains the operations each matrix represents, so that you can keep a set of transformations in a single matrix and only have to do the multiplication once. So that's about it, a very brief overview of the required algebra. Let's look at the code. We're going to look at a single-file program, hellocamp.go. The goal is to just render the logo of the Chaos Communication Camp on a square in the middle of the screen and then make it bounce around, or rotate along the y-axis a little bit. We want to specify the colors directly, but take the actual shape of the logo from a texture. The example code (again, sadly, I cannot demo it to you today, but I'm sure it's going to be sorted soon) you can download on my website. It should work on a Raspberry Pi 3 Model B with Raspbian Stretch pretty much out of the box. So your OpenGL program has in general the following program flow. First you need to configure the OpenGL context, which, again using EGL, just tells the machine where to do your OpenGL, the environment in which to run the OpenGL code. That's all pretty boring. After you have your OpenGL context configured, you need to initialize your scene, and this has multiple steps: you need to initialize your geometry as vertex data, you need to initialize your color and material stuff as texture data, you need to initialize your vertex shader and fragment shader, and those two then get linked together into a shader program. Then you usually also want to initialize some kind of camera, which you can use to control from which point you're looking at your scene and to manipulate that later. We're going to look at that in more detail. After you have initialized your scene, you enter your render loop, and the render loop is really just: update the scene, with user input or just because
some time has elapsed, for animation; and then we draw the scene and start all over again, around 60 times per second. Configuring the context looks very different depending on your platform. On the Raspberry Pi, it's basically this: there's a call to create the context, you check for errors, you can then ask your library about the width and the height of your screen, which will become handy later, and then you say that this should be the current context and you initialize the GL subsystem. Scene initialization: basically, you need to initialize all the stuff as described, and that looks pretty straightforward. You also should keep track of the starting time, so you want to get the current timestamp and store it in a variable. Let's look in more detail at what each of those initializations looks like. Initializing your geometry, your vertex data: the easiest way we can do that is by just creating a static array of floats, which can then be passed as vectors. In this example we have two triangles, one triangle with the vertices A, B, C and the next triangle with the vertices A, C, D. Each of the vertices is given on a single line in the variable, and each of those vertices has multiple components: the position x, y, z in space; then the texture coordinate that belongs to it, so where the texture should be mapped onto that specific vertex; and then finally the color, red, green, blue and alpha, that we want to use on that corner of the triangle. So that's just a constant, hard-coded variable. And then you tell OpenGL to load that vertex data into the GPU's vertex memory, which you do basically like this: you generate a buffer with glGenBuffers, you bind the buffer, then you call glBufferData, pointing it at the vertex data and telling it how much data that is, and then you check for errors to see whether something went wrong. Next you want to initialize your texture data, in this case a logo: a PNG showing the logo of the Chaos
Communication Camp. First we load the logo from a file; we just load the PNG. We then draw that into a Go image.RGBA object; we draw the data we just loaded into there, and then we have it in the required RGBA format. So after those first steps we have the texture data in a variable, and then we need to tell OpenGL to load that data into texture memory. Again, typically you just use glGenTextures, you set the active texture with glActiveTexture, then you bind the texture, and then the glTexImage2D call tells OpenGL to take the data and put it into the GPU's texture memory. Next we initialize the vertex shader. The vertex shader source we can just keep in a static, hard-coded string in the Go source. This vertex shader is very simple: it takes a uniform camera matrix that we will use to specify the projection and to do the rotation, and it takes the position, texture coordinate and color attributes that we specified in the vertex data before. We then pass on the texture coordinate and the color to the fragment shader in varying variables, and finally we just say that gl_Position should be the camera matrix multiplied by the position vector. So, a simple three-line shader. And since we now have that string, we need to tell OpenGL to load and compile that shader, which you do with glCreateShader, then glShaderSource, glCompileShader, and then you check for errors. After that, the vertex shader is in compiled form on the GPU. Next we do the same with the fragment shader, also very simple: it takes a texture uniform variable as input, then the two varyings that are passed from the vertex shader, the texture coordinate and the color. Inside the shader we first find out the color of the texture at the specific texture coordinate, so we use the texture coordinate varying to look up the color value inside the texture uniform, and then we specify gl_FragColor, the final color of that fragment, and we
want to use the red, green and blue as specified in the vertex attributes, but we take the alpha value from the texture that we loaded into texture memory. And then we again need to do the same thing as before, exactly the same code: glCreateShader, glShaderSource, glCompileShader, and check for errors. After that we have both our shaders in memory, and now we need to link them together into the shader program, which again looks very similar: we call glCreateProgram first, then attach both shaders to that program, tell it to link the program, and check for errors again. After that you have the shaders linked in memory, so the varying variables will now know of each other. The last thing you need to do is to specify where to find your vertex data, which you do by calls to glVertexAttribPointer. You give it three blocks of code that do basically the same thing, first for the position, then for the texture coordinate, then for the color; we just say: at this point in memory, we have that many floats following each other that represent the given vertex attribute. After that, the program is in memory and ready to go. Finally, we want to set up our camera. For this we first set up our projection matrix, which we can compute by looking at the aspect ratio of the screen, so we divide the width by the height of the screen; we decide on a field of view, in this case 45 degrees; and then there's the mgl32.Perspective call, which returns a projection matrix with that specific ratio and field of view, and a near and far projection plane. Next we want to specify where in space our camera is. We want to have the camera a bit back along the z-axis, so that we can look onto the origin of the coordinate system. As you might have noticed, we described our triangles directly situated at the origin, so if they're here, then we want to look from a bit further away. And there's a call for this as well, mgl32.LookAtV, which
takes the position of the camera, the position it should look at, and the vector that says which way is up. Now that we have the projection and view matrix variables, we can multiply them together into a single camera matrix variable, which will then be used in the shader: we just start with the identity matrix, multiply in the projection matrix, multiply in the view matrix, and now we have initialized our camera. So that's the complex stuff. The initialization of the scene is typically the most verbose code that you need to write, which is fine, because we only need to run it once at the beginning. Now that the scene is initialized, we can start with the render loop. As mentioned before, we need to update our scene, and then we need to redraw our scene. Updating, in this simple example, is pretty straightforward. We just want to know how much time has elapsed since the start of the program. We do that by getting a current timestamp and subtracting the start time from it; then we have a number of seconds, a floating-point number of seconds. From this number we can generate the angle by which we want the triangles to rotate: we just take the sine of the elapsed time and multiply it with 45 degrees. The sine will return values between minus 1 and 1, so the angle will have values between minus 45 and 45 degrees. With that angle we can now generate the rotation matrix. This is done by the mgl32.HomogRotate3DY function call: it takes an angle and returns a rotation matrix that rotates by that angle around the y-axis. Once we have the rotation matrix, we can multiply it with the camera matrix that we set up before, and then we tell GL to use a pointer to the first element of that resulting matrix for the uniform that we defined in our vertex shader. So after this update step, the uniform variable in the vertex shader will have the value that is
now computed dynamically from the elapsed time, as well as the projection and view matrix that we had set up before. Now that we updated the scene, we need to draw the scene so that we can see the results. And again, this is pretty straightforward, because we did all the hard work already. Basically, we need to clear the screen, so remove all the color buffer and depth buffer information; we need to tell it to use the program that we set up before, to use the buffer that we set up before, and to use the texture that we set up before. With that in place, it's a simple call to glDrawArrays. In this case we tell it to paint six vertices as triangles, which means two triangles with three vertices each, and then we again check for errors. So the scene should now be drawn, which means you have to start again, going back into the render loop, rendering 60 times per second. Usually you would have a sleep call there, of about one sixtieth of a second; there are more sophisticated ways to get smoother frame rates, which I cannot go into at this point, but basically this is the tight loop that you want to run as often as possible, at minimum 60 times per second. So that's about it. I'm very sorry that I'm not able to give you any kind of actual animations or visuals that show you what I've been talking about for the last half an hour. I will make it work, I'm confident, in the next few days, so please, if you're interested, go to the URL that I showed you before and download the program; then you can study it in detail. It will contain all the code that we looked at in the talk just now. Finally, there are some resources if you're interested in more details. khronos.org publishes reference pages for all of their standards, which is very useful; they're open standards, so you can get them without registering or anything, at the top URL. There are also handy reference cards that
you can use for programming. There's a book called the OpenGL ES 2.0 Programming Guide, which I find very useful to give you an overall view of the whole thing. And there used to be the "red book", the OpenGL Programming Guide, which used to be the Bible of OpenGL programming; it's now a bit outdated, unfortunately, but might still be worth a look. That's it, thank you very much. Thank you, Falkert. Do we have any questions? There are microphone angels, one here, one over there, and there's a signal angel, which I am unable to see. No questions from the internet, no questions in the... oh, there's one question, go ahead please. Hi, thanks for the talk. As far as I understood, if you want to update a scene, you calculate the transformation matrix you need: you change the angle of view, then you calculate the transformation matrix in your language of choice, like Go or Python or whatever, and then you upload the transformation matrix to the GPU, and then you redraw the scene by telling the GPU to redraw all the vertices with this new transformation matrix. Is that correct? That is what I said, and that is correct; however, that's not the only way to do it. You can basically also just pass the time into the vertex shader and then do the same calculations on the GPU, or you could generate the angle on the CPU, then pass the angle as a uniform into the vertex shader and do the matrix generation there. It depends on what's the most expensive part. In this example, we chose the most straightforward way that demonstrates how to do it best, but in the end you can do it on either side, and that's a bit of the art of it: deciding what's easier to do on the CPU in Go and what's easier to do in the vertex shader. It really depends on the nature of the thing. So I could just pass the new angle to the GPU and then do all the calculations?
Yeah, you can pass the start time and the current time, then you don't even need to pass the angle; you just do all of that in the shader. Or you calculate the angle from it and pass only a single float, or you do the matrix computation and pass the whole matrix. It's really up to you. But keep in mind that you need to pass a matrix anyway for the projection and view matrix, so it makes some sense to multiply in the final rotation on the CPU side as well. But again, it depends on how many of those you have and how often you need to do it, and all that. Thanks. Any more questions? Wow, that's rare, but it's late: a well-held presentation and no more questions left. So a good warm hand for Falkert, and thank you for your talk. Thank you. Thank you for your comments.