Assistant Professor, Electronics and Communication Engineering, Walton Institute of Technology, Singapore. Today we will discuss contrast stretching, gray level slicing and bit plane slicing, three algorithms associated with image enhancement. Learning outcome: at the end of this session, students will be able to apply these three algorithms, contrast stretching, gray level slicing and bit plane slicing, for image enhancement.

Let us see what contrast stretching is. Most of the time we require an image to be enhanced for contrast, and for that we use the contrast stretching algorithm. Rather than using a well-defined mathematical function for contrast enhancement, we can use an arbitrary, user-defined transform. In the transform used for contrast stretching, the x-axis carries the input gray level value, indicated by r, and the y-axis carries the output gray level value, called s. The input gray level value varies from 0 to L - 1. If, for example, each pixel in the image is represented by 8 bits, then L = 2^8 = 256, and since we start counting from 0, the maximum value L - 1 is 255. Similarly, on the y-axis the output gray level values also range from 0 to 255. Observing the transform, the input pixels in the range 0 to r1 are enhanced very little, because the slope of this first segment is smaller than the slope of the segment between the points (r1, s1) and (r2, s2).
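The piecewise-linear transform described above can be sketched in code. This is a minimal NumPy sketch, not code from the lecture; the function name `contrast_stretch` and the example break points are illustrative, assuming 0 < r1 < r2 < L - 1.

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise-linear contrast stretching.

    Maps [0, r1] -> [0, s1], (r1, r2] -> (s1, s2], and
    (r2, L-1] -> (s2, L-1]. Choosing s2 - s1 > r2 - r1 makes the
    middle segment the steepest, stretching contrast in that range.
    Assumes 0 < r1 < r2 < L - 1.
    """
    r = img.astype(np.float64)
    out = np.empty_like(r)
    low, mid, high = r <= r1, (r > r1) & (r <= r2), r > r2
    out[low] = (s1 / r1) * r[low]
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    out[high] = s2 + (L - 1 - s2) / (L - 1 - r2) * (r[high] - r2)
    return np.clip(out, 0, L - 1).astype(np.uint8)

# Stretch the mid-tones: input levels between 50 and 180 are spread
# over the wider output range 20..235, while the ends are compressed.
img = np.array([[10, 100, 200]], dtype=np.uint8)
print(contrast_stretch(img, 50, 20, 180, 235))
```

With these illustrative break points, the mid-range pixel 100 moves to roughly 102 on a much steeper segment, while pixels below r1 = 50 are compressed toward dark values.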
The input pixels between r1 and r2 are enhanced more, because the slope of the segment from (r1, s1) to (r2, s2) is the largest, and so the contrast of the image is improved. This is the transform s = T(r), where r is the input pixel value. Similarly, the slope of the segment from (r2, s2) to L - 1 is again small, so input pixels in that range are also enhanced very little. What we do, then, is stretch the contrast of the pixels in the middle range. Consider a sample input image with very low contrast: if we apply this contrast stretching transform to it, we obtain an output image on which a prominent enhancement has clearly been performed.

Next is gray level slicing. Looking at the diagram of this transformation function, it boosts only the pixels whose input values lie in the range A to B; that is, only a specific range of gray levels is highlighted, and the others are not boosted. The function is therefore similar to thresholding. The other levels, those outside the range A to B, can either be suppressed or maintained. Applying this gray level slicing transformation to a sample input image, we see that only the gray levels the user is interested in are highlighted in the result. Let us have a question: what is the difference between contrast stretching and gray level slicing?
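Gray level slicing as described, highlighting the range [A, B] while either maintaining or suppressing the other levels, can be sketched as follows. This is a NumPy sketch of my own; the function name and parameter defaults are assumptions, not from the lecture.

```python
import numpy as np

def gray_level_slice(img, a, b, highlight=255, background=None):
    """Highlight gray levels in [a, b].

    If background is None, levels outside [a, b] are maintained
    (kept as-is); otherwise they are suppressed to the given
    background value, making the result similar to thresholding.
    """
    out = img.copy() if background is None else np.full_like(img, background)
    out[(img >= a) & (img <= b)] = highlight
    return out

img = np.array([[50, 120, 200]], dtype=np.uint8)
print(gray_level_slice(img, 100, 150))                # other levels maintained
print(gray_level_slice(img, 100, 150, background=0))  # other levels suppressed
```

The two calls correspond to the two variants mentioned above: the first keeps the unselected levels unchanged, the second flattens them to a constant background.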
Pause the video and answer the question. The difference is quite specific: gray level slicing highlights only a portion of the gray level range, whereas contrast stretching stretches the contrast of the whole image.

The last algorithm for image enhancement is bit plane slicing. Assume that each pixel of the input image is represented by 8 bits. The least significant bit, bit 0, of all the pixels is collected together and presented as bit plane 0. How many such planes can we construct? If each pixel is represented using 8 bits, we can construct 8 planes: bit plane 0 holds the least significant bits and bit plane 7 the most significant bits. By isolating particular bits of the pixel values in an image, we can often highlight interesting aspects of that image. The higher order bits usually contain most of the significant visual information, while the lower order bits contain only subtle details. If we represent the input image using the least significant bit alone, we get very little information; the content of the original image is not visible. As we move to higher bit positions, from the zeroth bit through the first, second, third, fourth and fifth bits, we gradually recover something close to the input image. In this way we can represent the input image using its different bit planes. We can even combine planes: for example, combining bit planes 2 and 3 gives a more appealing image, because together they contain more information than either plane alone.
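The slicing and recombination of bit planes described above can be sketched with simple bitwise operations. This is an illustrative NumPy sketch; the function names are my own.

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into its 8 binary bit planes.

    planes[0] holds the least significant bit of every pixel,
    planes[7] the most significant bit.
    """
    return [(img >> k) & 1 for k in range(8)]

def combine_planes(planes, bits):
    """Recombine selected bit planes, e.g. bits=(7, 6) for the two MSBs."""
    out = np.zeros(planes[0].shape, dtype=np.uint8)
    for k in bits:
        out |= planes[k].astype(np.uint8) << np.uint8(k)
    return out

img = np.array([[181]], dtype=np.uint8)      # 181 = 0b10110101
planes = bit_planes(img)
print(combine_planes(planes, range(8)))      # all 8 planes -> original image
print(combine_planes(planes, (7, 6)))        # two most significant planes only
```

Recombining all 8 planes reproduces the original pixel values exactly, while a few high-order planes already give a coarse version of the image, matching the observation that most of the visual information sits in the higher bits.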
So, that is bit plane slicing. I used this textbook for preparing these slides: Digital Image Processing by Rafael C. Gonzalez and Richard E. Woods, Tata McGraw-Hill Education. Thank you very much.