In this series, we cover the essentials of OpenGL and 3D rendering. The bulk of the series follows examples from LearnOpenGL.com, created by Joey de Vries, and I highly recommend that as you go through the videos you follow along with the site as well. On the site you'll find a link to all the code examples, and I strongly recommend that you take a look at those for yourself. The code is all in C++, but it sticks to the basics of the language, so as long as you have some familiarity with, say, C or Java, you can probably follow along just fine, and I'll walk you through the key sections of the code line by line. Math-wise, you don't need to come in knowing more than basic algebra and trigonometry. We make heavy use of matrices in 3D rendering, but most of what you need to understand will be covered as it's introduced.

So to start off the series, we first want to draw a triangle to the screen and understand exactly what we're doing when we do so. To that end, there's quite a bit to cover. We first have to talk about how transformation and projection matrices take our coordinates from local space to world space, then to view space, then to clip space, and finally to what's called normalized device coordinates. Then there's the rasterization process, wherein triangles are broken down into pixels. This is mostly handled by the hardware, so you don't have to understand it in depth, but we do cover the gist of how it works.

In modern OpenGL, meaning OpenGL version 3 and onwards, which is actually almost 10 years old at this point, we have to write shader code to put anything on screen. We call them shader programs, but they're really just pieces of code, generally quite short ones, that run on the GPU. In OpenGL, shader code is written in a language called GLSL, the OpenGL Shading Language.
It's quite a simple language that's easy to pick up if you already know other languages, so it's not a big deal. The primary kinds of shaders we're going to write are vertex and fragment shaders. The vertex shader is responsible for transforming vertices into clip space, and then the fragment shader receives the rasterized fragments, essentially the pixels, and its job is to decide what color each fragment should be. Vertex attributes and uniforms are the data that our shader programs operate upon, as we'll get into.

Having covered all these topics, we can write a program that does real 3D rendering, like this one here that displays a bunch of cubes in 3D and lets us move around. The next big topic is lighting using the classic Phong model. Here, for example, we're rendering more boxes, but the little white cubes represent light sources that are hitting the boxes, and we also have a spotlight coming out of the camera.

To render things more interesting than triangles and cubes, we're going to want to load models from external files, and for that purpose we use a library called Assimp, the Open Asset Import Library. We'll then cover a few rendering effects, such as using cube maps for skyboxes and reflections, using depth and stencil buffers, and transparency. For example, here we're rendering another simple cube, but we're using a cube map to render reflections on the cube, and using that same cube map as a skybox. We'll then cover the use of framebuffers, multisample anti-aliasing, and geometry shaders, which are optional shaders that run in between your vertex and fragment shaders. Instancing allows us to render many objects more efficiently. We'll render shadows with shadow maps and look at normal mapping. Here, for example, we're using instancing to render a crap ton of asteroids, far more than we could otherwise sensibly render.
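To give a feel for what such a shader pair looks like, here's a minimal sketch in GLSL; it's not the series' exact code, and names like aPos and uColor are illustrative. The vertex shader transforms each vertex into clip space, and the fragment shader colors each rasterized fragment:

```glsl
#version 330 core
// Vertex shader: transforms each vertex from local space to clip space.
layout (location = 0) in vec3 aPos;   // a vertex attribute
uniform mat4 model, view, projection; // uniforms, set from the CPU side

void main() {
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
```

```glsl
#version 330 core
// Fragment shader: decides the color of each rasterized fragment (pixel).
uniform vec3 uColor; // another uniform
out vec4 FragColor;

void main() {
    FragColor = vec4(uColor, 1.0);
}
```

The attributes supply per-vertex data from vertex buffers, while the uniforms hold values (like the matrices here) that stay constant across a draw call.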
And here's an example of normal mapping, a technique that allows us to, in a sense, fake geometric detail on a surface that isn't actually there in the geometry. Lastly, we'll cover gamma correction, HDR tone mapping, bloom, deferred rendering, screen space ambient occlusion, and at least the basics of physically-based rendering and image-based lighting, which are both quite complicated topics that get into some more advanced physics and math. This example here uses physically-based rendering and image-based lighting. The spheres in this grid get progressively less shiny from left to right and progressively more metallic from bottom to top. And as you can see, our environment here, rendered from the skybox, is actually being reflected on the shinier spheres if you zoom in. That's thanks to the specular image-based lighting. So that's everything covered in this series, at least as of this date. I have some plans to expand it with further topics, so look out for those later.