Hello friends, I am Darshan Pandit, Assistant Professor, Department of Computer Science and Engineering, Walchand Institute of Technology, Solapur. Today we are going to discuss visible lines and visible surfaces, and how to identify them using the Z-buffer algorithm. The learning outcome is that at the end of the session the student will be able to identify visible lines and surfaces through the Z-buffer algorithm. First we will see what a hidden surface is, and after that what the Z-buffer algorithm is.

In the diagram you can see three images: a wireframe image, the same scene after hidden-line removal, and after hidden-surface removal. In the wireframe image the 3D objects are drawn without removing any lines, so it is very difficult to identify which part of an object is visible and which part is hidden; in simple terms, it is difficult to tell which object is in front and which is behind. When you remove the hidden lines, as in the second image, you can identify which object is in front and which is behind. In the third image, after hidden-surface removal, you can easily see which surfaces are visible, which are hidden, and which object lies on top of another. This is the hidden-surface problem, and it is one of the more difficult problems in computer graphics.

A visible-surface algorithm attempts to determine the lines, edges, surfaces, or volumes that are visible to an observer located at a specific point in space. When we view a picture containing non-transparent objects and surfaces, we must remove the hidden surfaces to get a realistic screen image. The identification and removal of these surfaces is called the hidden-surface problem.

Basically there are two methods for the hidden-surface problem: the object-space method and the image-space method. The object-space method determines which parts of each object are visible. In this method the various parts of the objects are compared with one another, and after comparison each surface is classified as visible, invisible, or partially visible. This approach decides visibility at the level of whole surfaces, and in a wireframe model it is used to determine which lines are visible, so these algorithms are line based rather than surface based. The image-space method, on the other hand, determines visibility per pixel, that is, which point of an object is visible at each pixel position. Here the positions of the various pixels are examined and each point is tested for its visibility: if a point is visible the pixel is turned on, otherwise it is off. These algorithms are used to locate visible surfaces rather than visible lines.

The Z-buffer algorithm was developed by Catmull. It is an image-space approach, where we consider the surfaces of the image. The basic idea is to test the Z depth of each surface to determine the closest visible surface to the observer. In this method each surface is processed separately, one pixel position at a time across the surface. The method also stores the intensity of the object that is to be displayed on the screen. The depth values at a pixel are compared, and the closest surface determines the colour to be placed in the frame buffer. Surfaces can be processed in any order.
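To make the idea concrete, here is a minimal sketch of the depth-buffer procedure in Python. Everything in it is an assumption made for illustration: the image size, the background colour, and the hypothetical poly.pixels() and poly.color interface that stands in for a real scan-conversion routine. It follows the convention used in the rest of this session, where depth is normalized to the range 0 to 1 and larger values are closer to the viewer.

```python
# Minimal sketch of the depth-buffer (Z-buffer) procedure.
# WIDTH, HEIGHT, BACKGROUND and the poly.pixels()/poly.color interface
# are assumptions made for illustration only.

WIDTH, HEIGHT = 640, 480
BACKGROUND = (0, 0, 0)          # background colour

# Step 1: initialise both buffers.
# Depth is normalized to [0, 1]; 0 is the back clipping plane, so every
# pixel starts at the most distant possible depth.
z_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def render(polygons):
    """Steps 2-4: scan-convert each polygon in arbitrary order."""
    for poly in polygons:
        # poly.pixels() is assumed to yield every (x, y) the polygon
        # covers, together with its interpolated normalized depth z.
        for x, y, z in poly.pixels():
            # Step 4: keep the surface closest to the viewer.
            # With this normalization (1 = front plane), larger z is closer.
            if z > z_buffer[y][x]:
                z_buffer[y][x] = z               # remember the new depth
                frame_buffer[y][x] = poly.color  # write colour/intensity
```

Because each pixel is decided independently by this comparison, the polygons can be handed to render() in any order and the result is the same.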
So that the closer polygon overrides the farther one, two buffers are required: the frame buffer and the depth buffer. The depth buffer is used to store the depth of each (x, y) pixel position as the surfaces are processed, where the depth lies between 0 and 1. Initially all positions in the depth buffer are set to the minimum depth, 0, which is the most distant depth from the view plane. The frame buffer stores the intensity (colour) value of each pixel position, and initially all positions are set to the background intensity.

When you map a window from world-coordinate space to a viewport you require normalized coordinates. The Z coordinates are usually normalized to the range 0 to 1, where a Z value of 0 indicates the back clipping plane and a Z value of 1 indicates the front clipping plane. In the diagram you can see the back clipping plane and the front clipping plane marked with these normalized coordinates.

The algorithm works as follows. In step 1 we set the buffer values: Zbuffer(x, y) = 0 and framebuffer(x, y) = background colour. So initially the depth buffer is set to the minimum depth and the frame buffer is set to whatever the background colour is. In step 2 we scan-convert each polygon in arbitrary order, that is, we take up the polygons present on the screen one by one. In step 3, for each pixel (x, y) in the polygon, we calculate the depth Z at that pixel position; in other words, we identify the depth of each polygon at every (x, y) it covers. In step 4 we compare this depth Z(x, y) with the value stored in the depth buffer at that location. If Z(x, y) > Zbuffer(x, y), then we write the polygon's attributes, its intensity and colour, to the frame buffer and replace Zbuffer(x, y) with Z(x, y). So Z(x, y) is the new depth that replaces the old value in the depth buffer, and at the same time we update the frame buffer, framebuffer(x, y) = surfcolour(x, y), meaning the colour and intensity of that surface are written to the frame buffer. A sketch of this whole procedure appears in the code above.

In the diagram you can see three polygons A, B and C, and (x, y) is the view position. From the view point, A is the closest, B is farther than A, and C is farther than B. Since A is the closest surface at the view position (x, y), it has the largest normalized Z value there, so A wins the depth test and surface A is visible at the pixel position (x, y). If instead B had the largest Z value, that is, if B were the closest surface, then B would be the one visible at that pixel. This is how we identify visible lines and visible surfaces using the Z-buffer algorithm.

Here you can pause the video and answer the question: provide one key difference between the depth buffer and the frame buffer. The answer is that the depth buffer stores the depth value for each (x, y) position, while the frame buffer stores the intensity (colour) value at each pixel position (x, y). These are the references which were used to create this video. Thank you.
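As a quick, hypothetical check of the example with polygons A, B and C, the short snippet below applies the same comparison at a single pixel position; the depth values 0.25, 0.50 and 0.90 are made up for illustration and simply reflect that A is closest to the viewer.

```python
# Hypothetical depth values for surfaces C, B and A at one pixel (x, y);
# A is closest to the viewer, so it has the largest normalized z.
x, y = 10, 20
candidates = [("C", 0.25), ("B", 0.50), ("A", 0.90)]

best_depth = 0.0           # depth-buffer entry starts at the back plane (0)
visible = "background"     # frame-buffer entry starts at the background
for name, z in candidates:
    if z > best_depth:     # same test as Z(x, y) > Zbuffer(x, y)
        best_depth = z
        visible = name

print(f"Surface visible at ({x}, {y}):", visible)   # prints surface A
```

Whatever order the three surfaces are processed in, A ends up in the frame buffer at this pixel, which is exactly the behaviour described in the example above.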