Hi guys, in this video I'm going to talk about rendering. So, what is rendering? According to Alan Tucker, rendering is "the name given to the process in three-dimensional graphics whereby a geometric description of an object is converted into a two-dimensional image-plane representation that looks real". In simple terms, that means taking the information stored on your hard drive and turning it into the graphics you see on screen. There are quite a few different rendering methods used to achieve this, but in this video I'm going to talk about three of them: rasterization, ray tracing and radiosity.

First up, rasterization. Rasterization is widely used to render real-time 3D graphics such as games, because it balances the performance needed for real time with the ability to create the pretty pictures we've come to expect from modern games. Basically, the rasterizer looks at the thousands of triangles that make up the 3D scene and determines which of them are visible from the current viewpoint. With that information, the engine then analyses the light sources, along with some other environmental details, to add light and colour to the pixels covered by each triangle. Here's an example comparing rasterization to a higher-end rendering method such as ray tracing. You can see that rasterization does a good job, but it's not able to match the level of detail of ray tracing in real time.

Okay, so let's talk about ray tracing. Ray tracing is a rendering technique capable of creating photorealistic images from three-dimensional scenes. It works by calculating the path of every ray of light and following it through the scene until it reaches the camera, which means ray tracing can create very accurate reflections and refractions. In general terms, ray tracing works by creating a ray for each pixel that will be displayed on screen.
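The per-pixel ray setup just described can be sketched in a few lines of Python. This is a minimal pinhole-camera example: the function name, the field of view, and the camera looking down the negative z-axis are my own illustrative assumptions, not from any particular renderer.

```python
import math

def generate_camera_rays(width, height, fov_degrees=90.0):
    """Create one ray per screen pixel for a pinhole camera at the origin,
    looking down the -z axis. Returns a list of (origin, direction) pairs.
    A toy sketch, not production renderer code."""
    aspect = width / height
    scale = math.tan(math.radians(fov_degrees) / 2)
    rays = []
    for py in range(height):
        for px in range(width):
            # Map the pixel centre into camera space, [-1, 1] on each axis.
            x = (2 * (px + 0.5) / width - 1) * aspect * scale
            y = (1 - 2 * (py + 0.5) / height) * scale
            # Normalise so the direction is a unit vector.
            length = math.sqrt(x * x + y * y + 1)
            rays.append(((0.0, 0.0, 0.0),
                         (x / length, y / length, -1 / length)))
    return rays
```

Each of these rays is then followed back through the scene, which is where the real work of ray tracing happens.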
The path of each ray is then traced from the camera back through the scene to the original light source. Here's an example featuring a pig in a top hat which demonstrates how this works. In this example, the screen is represented by the eye of the bloke sitting on the stool. We trace a ray of light back to the pig, but on the way the ray travels through the glass, which causes the light to refract. Once the ray reaches the pig, that's the point at which its colour would be calculated. We then follow the ray further back to the light source, which in this case is the lamp on the mole's helmet. Again, the light travels through the glass on the way, which adds more refraction. This happens for every pixel and for every light in the scene, so an enormous amount of information needs to be calculated. For that reason, ray tracing is not yet suitable for real-time rendering. Here's an example of the type of image you can create when rendering with ray tracing.

Radiosity is a rendering technique that focuses on global lighting: it tracks the way light spreads and diffuses around a scene, in an attempt to simulate the effect of light bouncing around a room. It's a really good method for recreating natural shading. Have a look in the corner of any room, or where the walls meet the ceiling, and you'll notice that shadows tend to gather there. That's something that can be recreated really well using radiosity. This example shows a scene rendered with and without radiosity; you can see in the example on the right how much softer the shadows are as a result of the light bouncing around the scene.

Some of the rendering methods I've covered here can be used together to create very detailed scenes that still run in real time. In games, this can be achieved through the use of lightmaps.
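To make the lightmap-baking idea concrete, here's a toy Python sketch that precomputes diffuse lighting from a single point light into a small lightmap for a flat floor. The function name, the falloff model, and the scene itself are all illustrative assumptions; a real baker samples actual scene geometry, many lights, and bounced light.

```python
import math

def bake_lightmap(resolution, light_pos, plane_size=10.0):
    """Bake diffuse lighting from one point light onto a flat floor
    (the plane y = 0, with its normal pointing straight up). Each texel
    stores a precomputed intensity; at run time the game just samples
    this texture instead of computing lighting per frame."""
    lightmap = []
    for ty in range(resolution):
        row = []
        for tx in range(resolution):
            # Map the texel centre to a world-space point on the floor.
            wx = (tx + 0.5) / resolution * plane_size - plane_size / 2
            wz = (ty + 0.5) / resolution * plane_size - plane_size / 2
            # Vector from the surface point up to the light.
            dx = light_pos[0] - wx
            dy = light_pos[1]
            dz = light_pos[2] - wz
            dist2 = dx * dx + dy * dy + dz * dz
            # Lambert term (normal is straight up) with distance falloff.
            n_dot_l = max(0.0, dy / math.sqrt(dist2))
            row.append(n_dot_l / dist2)
        lightmap.append(row)
    return lightmap
```

Baking like this is cheap to sample at run time, which is exactly why it suits static geometry: the lighting never changes, so it only needs to be computed once.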
Lightmaps are often used on the static elements of games, such as terrain or architecture, and they work by baking the lighting data straight onto the texture. Here's an example of the level of detail that can be achieved using lightmaps.

So, if you're not rendering in real time, you could be waiting a very long time for your graphics to render. How do studios speed this up? That's where distributed rendering comes in. For those using high-end rendering methods, the time taken to render can be a real issue. For example, a single frame of Toy Story 3 took 16 hours to render. With a little bit of quick maths, we can see that rendering the whole movie on a single computer would have taken 1,648 days, which is just over four and a half years. The solution is distributed rendering, also known as using a render farm. This involves sending the project to a farm of networked computers, each of which renders a small part of each frame. Those parts are then sent back to the host system, which combines them to create the whole frame, dramatically reducing the render time. Commercial render farms often have hundreds of these networked computers, or nodes, and they look like the example on screen. Click the annotation on screen, or check the link in the video description, for a promotional video which explains how render farms work. It's also possible to build your own small render farm at home using Autodesk Backburner.

OK, so that's it. If you have any further questions on rendering, just drop me a comment below the video or contact me using one of the methods you can see on screen. Thanks for watching, and I'll see you next time.
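As a footnote, the render-farm speedup described above comes down to simple arithmetic: total frame-hours divided by the number of nodes. Here's a hypothetical sketch in Python; the frame count and node count are assumptions chosen for illustration, not figures from the video.

```python
def farm_render_days(frames, hours_per_frame, nodes):
    """Best-case wall-clock days to render a sequence when the work is
    split evenly across `nodes` machines. Ignores network transfer and
    the host recombining the pieces, so real farms run a bit slower."""
    return frames * hours_per_frame / nodes / 24

# Hypothetical sequence: 10,000 frames at 16 hours per frame.
single = farm_render_days(10_000, 16, 1)    # one computer
farm = farm_render_days(10_000, 16, 200)    # a 200-node farm
```

With those made-up numbers, one computer would need roughly 6,667 days while the 200-node farm would need about 33, which is the whole point of distributing the work.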