Hello, welcome back. In the last lecture we went through the basics of visual rendering: generating the pixels, the RGB values, that need to be presented on the display, from the general perspective that is common in computer graphics. I also started to talk about the particular difficulties that arise in virtual reality, where these pixels are rendered to a display that is mounted on your head. Continuing onward: we were talking about object-order rendering, in which we iterate over the triangles and, for each triangle, figure out how to shade the pixels. I gave you many of the steps of that already, and now I am continuing onward. The part I have not talked about is how to propagate color, or more particularly RGB values, or any other attributes, which may be surface normals or textures (I will give some examples), efficiently across the pixels. One very convenient way to do this, which is commonly done, is to use barycentric coordinates. Have you seen barycentric coordinates before? They may not be completely common in what you normally see in engineering, but they have come up several times in my 20 or more years of research, and they have been one of my favorite coordinate systems. They end up being useful in many, many settings, particularly when you need to do some kind of interpolation over 3-dimensional or higher-dimensional spaces. They give you a way to interpolate or combine information in a very natural way across a triangle, if it is a 2-dimensional surface, or more generally across a d-dimensional simplex using d + 1 points. I am only going to give the case of triangles because that is what is relevant today. So, suppose we have a triangle that is formed by 3 points that correspond to its vertices.
So, we have p_1, p_2, p_3, and now I want to consider some interior point p, not necessarily at the center, and I want to give it a coordinate system that expresses the location of p in terms of the coordinates of p_1, p_2, and p_3. Each of p_1, p_2, and p_3, written as p_i, has standard Cartesian coordinates (x_i, y_i). Then I can represent the coordinates of p in the following way: p = alpha_1 p_1 + alpha_2 p_2 + alpha_3 p_3, where each p_i is a 2-dimensional vector and each alpha_i is a scalar coefficient, so each term is a scalar times a vector. The alpha parameters all lie between 0 and 1, and they must sum to 1: alpha_1 + alpha_2 + alpha_3 = 1. These constraints on the alpha values should look familiar from probability theory. We are not talking about probabilities here today, but they are the same constraints, so you can imagine the alphas as probabilistic weights if you like. Some interesting special cases: if we write p in this way, p is just a simple interpolation of the coordinates of the 3 points p_1, p_2, and p_3. If we make alpha_1 = 1, for example, then the other 2 coordinates have to be 0. The coordinates here are the alphas; to make that very clear, the barycentric coordinates are these alpha_1, alpha_2, alpha_3. So the coordinate (1, 0, 0) corresponds to p_1, (0, 1, 0) corresponds to p_2, and (0, 0, 1) corresponds to p_3. What happens if I make the p_1 component 0 and let the other 2 components be whatever we like, as long as they add to 1 and are non-negative? That corresponds to the edge between p_2 and p_3. These are beautiful properties. If we pick a point in the interior, that corresponds to all 3 of the coordinates being nonzero.
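The setup above can be sketched in code. This is a minimal 2-D version using Cramer's rule to solve for the alphas; the function name `barycentric` and the particular formula are my choice here, not something fixed by the lecture.

```python
def barycentric(p, p1, p2, p3):
    """Barycentric coordinates (alpha1, alpha2, alpha3) of point p with
    respect to the triangle (p1, p2, p3), all given as 2-D (x, y) pairs.
    Solves p = a1*p1 + a2*p2 + a3*p3 with a1 + a2 + a3 = 1."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, p1, p2, p3
    # Signed-area denominator; zero would mean a degenerate triangle.
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    a2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    # The sum-to-one constraint gives the third coordinate for free.
    return a1, a2, 1.0 - a1 - a2
```

For the triangle (0, 0), (1, 0), (0, 1), the vertex p_1 itself comes back as (1, 0, 0), and the centroid comes back as (1/3, 1/3, 1/3), matching the special cases discussed above.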
If we allow one of the coordinates to go negative, it turns out that will send us outside of the triangle. So if you do not satisfy the constraints and allow a coordinate to go negative, you can actually parameterize the entire plane; but when all 3 are non-negative, you know that we are picking a point that is inside the triangle. And so now suppose we have RGB values that we calculated at these particular vertices using the shading methods we have talked about, and we would like to figure out how to color the pixels that have their centers inside this triangle. If I want to figure out the r, g, and b values at some particular point p, where p is a chosen point that happens to be the center of a pixel I would like to render, I can just do simple interpolation: I apply the barycentric coordinates to the r, g, and b values at the 3 vertex points, so r(p) = alpha_1 r_1 + alpha_2 r_2 + alpha_3 r_3, and the same for g and b. All we are doing is taking the RGB values that correspond to the vertices and using the barycentric coordinates as coefficients. These have to be calculated, but they are not very hard to find for a given point, and then we use those coefficients to linearly interpolate between the RGB values of the vertices to get the value at any point in the interior.
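The interpolation step is just a weighted sum per channel. A small sketch, with `interpolate_rgb` as an assumed helper name and colors as plain (r, g, b) tuples:

```python
def interpolate_rgb(alphas, c1, c2, c3):
    """Blend the per-vertex RGB values c1, c2, c3 with the barycentric
    weights (alpha1, alpha2, alpha3), channel by channel."""
    a1, a2, a3 = alphas
    return tuple(a1 * v1 + a2 * v2 + a3 * v3
                 for v1, v2, v3 in zip(c1, c2, c3))
```

For example, at the midpoint of the edge between a red vertex (255, 0, 0) and a blue vertex (0, 0, 255), the weights (0.5, 0.5, 0.0) give (127.5, 0.0, 127.5): an even mix of the two edge colors, with no contribution from the third vertex.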
So, that is a very quick way to propagate the information across a rather large triangle without having to do shader calculations for each and every point that we sample inside. Make sense? Questions about that? You should also just remember that barycentric coordinates are a very convenient way to do interpolation over triangles, and as I said, one dimension higher you can interpolate over a 3-dimensional volume using 4 points, which form a tetrahedron, and it works in higher dimensions as well. So it is a very powerful idea, very practical for a lot of engineering applications: finite element methods and all sorts of things. There are other kinds of mappings you might want to do in addition to RGB, so let me give some other examples, and then I will go into some of the specific problems that virtual reality causes for these types of techniques. So, other mappings over a triangle. One common kind of mapping is texture mapping. It may be the case that over my triangle some kind of textured pattern needs to appear, and we can again use the barycentric coordinates to index into an image of the texture and then fill in the pixel values according to the texture. If you want to implement a texture that looks like, let us say, a carpeted floor, then you would use an image of that and propagate it across the triangles. Then you have some delicate issues to worry about: if you have a bunch of neighboring triangles, they are all connected together in some way, and you need the texture to propagate correctly across them. You have to be very careful with that, because the texture patterns might not match up very well at the edges; there are some special techniques for handling that as well. Another issue that comes up in texture mapping that is challenging is what happens when I render this at different scales.
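The "index into an image of the texture" step can be sketched the same way as the color interpolation: interpolate per-vertex texture coordinates with the barycentric weights, then look up a texel. This is a simplified nearest-neighbor version with no filtering or wrapping; the name `sample_texture` and the (u, v) convention are assumptions for illustration.

```python
def sample_texture(alphas, uv1, uv2, uv3, texture):
    """Interpolate the per-vertex texture coordinates uv1, uv2, uv3
    (each in [0, 1] x [0, 1]) with the barycentric weights, then return
    the nearest texel from `texture`, a 2-D list of texel values."""
    a1, a2, a3 = alphas
    u = a1 * uv1[0] + a2 * uv2[0] + a3 * uv3[0]
    v = a1 * uv1[1] + a2 * uv2[1] + a3 * uv3[1]
    h, w = len(texture), len(texture[0])
    # Map (u, v) to texel indices, clamped to the image bounds.
    i = min(int(v * h), h - 1)
    j = min(int(u * w), w - 1)
    return texture[i][j]
```

A real renderer would filter between neighboring texels (and between mip levels, discussed next) rather than snapping to the nearest one.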
So, it may be that there is some kind of texture or pattern appearing here that has some kind of wavelength to it, you may imagine, and if this triangle is very far away from the viewing location, then the triangle will be very small, and the pixelation may interfere with the textured pattern. There is a very well-known and widely used technique called mipmapping, which involves storing these texture images at different resolutions and optimizing them for each resolution. When they are rendered in practice, the appropriate level of resolution is selected based on essentially the number of pixels that you have at your disposal for rendering the image, and some kind of sophisticated interpolation between the different stored resolutions of the texture is used. Generally, there is a logarithmic number of these resolutions. In other words, there will be a full-scale resolution of the texture stored, then a half scale, then a quarter scale, and so forth down to smaller ones. I am not going to go into the details of mipmapping, but it is a very powerful technique for overcoming some of the difficult aliasing problems that happen when you have a very small number of pixels to represent a texture, so that the pixel wavelength, if you like, gets close to the texture wavelength and there ends up being some interference between them. You can map anything you want: you can put text onto a surface as well. In some game engines you can even map an entire video onto a surface, so that the texture keeps changing from frame to frame; that way you can make a virtual TV screen in virtual reality. A couple more examples, and then I will start to talk about these virtual reality challenges.
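The logarithmic chain of half-scale textures can be sketched as follows. This is a toy grayscale version using simple 2x2 averaging; real mipmapping works on RGB textures, may use better downsampling filters, and the renderer selects and blends between the stored levels at draw time. The function name `build_mip_chain` is my own.

```python
def build_mip_chain(texture):
    """Build the full-, half-, quarter-scale (and so on) versions of a
    square 2-D list of grayscale values whose side is a power of two.
    Returns the list of levels, finest first."""
    levels = [texture]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        # Each texel of the new level averages a 2x2 block of the finer one.
        levels.append([[(prev[2*i][2*j] + prev[2*i][2*j+1] +
                         prev[2*i+1][2*j] + prev[2*i+1][2*j+1]) / 4.0
                        for j in range(n)] for i in range(n)])
    return levels
```

Note how a 4x4 checkerboard, the worst case for the "wavelengths interfering" problem above, averages out to a uniform gray at the coarser levels, which is exactly the alias-free answer you want when the texture covers only a pixel or two.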
Another technique is called bump mapping. Here is my triangle. You can implement certain kinds of tricks by varying the surface normals across the surface in some way; it is another kind of hack that will generate a desired effect. In this case, bump mapping generates an irregular pattern of normals across the surface. If it is a single triangle, the normal should be the same across the entire triangle, but you can instead make an artificial normal by perturbing it, either randomly or in some systematic pattern, as you move across the surface. In terms of the barycentric coordinates, you could define some kind of pattern for the normals. If you do this, it will make the surface look rough. Remember that the shading models I talked about in the last lecture depend on the surface normal. So if I make a kind of fictitious surface normal, that will affect the shading, and I can make what would be a flat surface look rough even though I did not change the geometry. I have not changed the depth of the pixels in any way; it is still a flat surface. But just by understanding the way shading works, because it uses the dot product with the normal, I can affect that and make a surface look artificially rough. So, map an irregular pattern of normals to the triangle surface and it will look rougher. For example, I may be able to take a smooth sphere, triangulate it, and make a rough kind of pattern across the triangles, so that the sphere looks bumpy; it may look like an orange, for example, or some kind of citrus fruit with a very rough texture around it. So you may be able to play a trick like this by varying the normals.
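The trick can be sketched in two pieces: the Lambertian dot-product term from the last lecture, and a normal perturbation. This is a toy version; real bump mapping reads the perturbation from a stored bump or height map rather than a random generator, and the function names here are my own.

```python
import math
import random

def lambertian(normal, light_dir):
    """Lambertian shading term: dot product of the unit surface normal
    and the unit direction toward the light, clamped at zero."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(d, 0.0)

def bumped_normal(base_normal, strength=0.2, rng=random):
    """Fake roughness on a flat surface: randomly perturb the true
    normal and renormalize to unit length. The geometry (depth) of the
    surface is untouched; only the normal fed to shading changes."""
    n = [c + rng.uniform(-strength, strength) for c in base_normal]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```

With the true flat normal (0, 0, 1) and the light straight overhead, every pixel shades identically; replacing it with `bumped_normal((0, 0, 1))` per pixel makes the Lambertian term vary across the triangle, which is the rough appearance described above.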
So, that is bump mapping. Related to this, more generally, is normal mapping. I suppose I can consider bump mapping as one special case of normal mapping, but I am going to give a different particular case; they are all playing tricks with the normals. In this example of normal mapping, I use surface normals from a smooth, higher-resolution surface. For example, take a triangle again, and suppose that in terms of depth it needs to be a linear patch, but I could imagine that this triangle is approximating a curved surface coming out of the board, maybe part of a sphere. In that case I can just keep track of the fact that the normals should be mapped along the sphere; they should correspond to the normals of a spherical patch. I am not sure exactly how to draw that here because I do not have enough dimensions, but the normals are assigned in some nice way so that the patch appears to be curved. If I do that, I can make a flat surface look like it is smoothly curving. Does that make sense? So I could generate what looks like a spherical patch, but it may be done with a very small number of triangles. This is a way to hide some of the limitations of the geometry: I may want to use a small number of triangles for efficiency, but I make it look better by keeping the normals correct. I have the depth incorrect, but I have the normals correct. Then I can use the Lambertian shading model, for example, with either of these, bump mapping or normal mapping, to give this effect. Shading across a surface is one of the depth cues you have, so this gives you a sense of depth that is artificial here; it relies on the fact that you will be fooled by it. Questions on this?
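For the spherical-patch example, "keeping the normals correct" reduces to one line: the normal of a sphere at a surface point is just the unit vector from the center to that point. A minimal sketch, assuming the underlying smooth surface is a sphere and using an illustrative function name:

```python
import math

def sphere_normal(point, center):
    """Normal of the smooth sphere that a flat triangle approximates:
    the unit vector from the sphere's center to the surface point.
    Shading with these normals (instead of the triangle's single flat
    normal) makes a coarse mesh appear smoothly curved, even though
    the depth of every pixel is still that of the flat triangle."""
    v = [p - c for p, c in zip(point, center)]
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]
```

At shading time, each pixel inside the triangle would look up (or barycentrically interpolate) its sphere normal and feed it to the Lambertian model, exactly the substitution described above.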