Hey guys, good afternoon. Not the number of people we were expecting, but hey. So our topic is reducing the effect of turbidity in underwater images. I am Pranay Goswami, a software engineer at McKinsey & Company, based out of India. And this is my colleague and friend, Himanshu. Hi, I am Himanshu, I am based out of Bangalore. Let's just get started; two more folks are joining, so they are just catching up. So the topic is reducing the effect of turbidity in underwater images. Can we move to the next slide?

So yes, a picture is a poem without words. It has been very rightly said that a picture is a poem without words. Pictures express visual information in a particular way, a better way than words can. The amount of information that can be extracted from pictures is immense, and that is why the field of computer vision has been growing at such a pace these years: image enhancement, image segmentation, feature extraction, and beyond that, visual question answering. So yes, a picture is a poem without words.

So, the need for underwater imagery. Why is underwater imagery needed at all? The first point is monitoring marine benthic habitats such as coral reefs and kelp forests; underwater imagery is used in such applications. Then, classifying and counting the aquatic species present in a particular ecosystem. Then, in marine archaeology, to analyze seabeds and shipwrecks. Also in surveillance activities, which is the application of computer vision and underwater imagery most widely used these days, with autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs). There are applications in the armed forces as well, especially in the exploration of underwater oil and gas reservoirs, and also in monitoring underwater gas pipelines and connections.

So yes, our problem statement targets marine archaeologists. We are trying to solve their problem of analyzing the features present in underwater images. This is one of the raw images from the data set; we will run our algorithm on it and figure out the features present. I'll just give you some insight at this point: the silt that you can see on this artifact makes the surface look very smooth, like the texture is pretty smooth, but it is actually not. I see a lot of people joining us; that's fine. So we move to the next slide.

So, how is underwater imagery different from the imagery we normally have? As we all know, every medium has a refractive index. Water has a refractive index, and due to that, scattering of light happens. Scattering also happens off particulate matter and turbidity, which is essentially the Tyndall effect. Then, unequal light absorption causes color reduction. And artificial lighting: sometimes, to get around the lighting problem in underwater imagery, artificial lighting is used; green lights and blue lights are mostly used to tackle this issue, but that sometimes causes vignetting. Does anyone have an idea of what vignetting is? Anyone? I think you all must have used it: in your Instagram application there is a filter called vignette. When there is darkening at the edges of the image while the center stays in focus, that is vignetting.
So artificial lighting causes vignetting, which leads to loss of features for marine archaeologists. I would reiterate here that this solution, this algorithm that we have developed and the product we are now trying to materialize, targets feature extraction rather than beautification. If you classify computer vision problems, we mostly fall into the feature-extraction part. Can we move to the next slide, please?

This is the actual problem statement. As it clearly says, this solution aims to reduce the effect of turbidity in underwater images for marine archaeologists. This research took place at the National Institute of Oceanography (NIO), India, and it started in late 2014, in the month of December. The data set we have used was obtained from underwater explorations done along the western coast of India, especially near the Saurashtra coast, at Dwarka. The artifacts, the data set, come from there. And near that coastal area, the amount of silt and turbidity is very high, and that is what causes loss of features in the images.

So this is the algorithm we have designed to reduce the effect of turbidity in underwater images. There is an input image, usually an RGB image; that is its color space. Then we convert the image into the YCbCr color space. Then we perform CLAHE; we will get into each one of these steps in a moment. CLAHE is Contrast-Limited Adaptive Histogram Equalization. Then we convert the image back to the RGB color space, because we want to keep it coherent with the original image. Then we apply a bilateral filter on the image, and then wavelet denoising. And that is the output image. What we are going to present now is actually a POC.

So, step one: convert the image into the YCbCr color space. Why YCbCr? There are a lot of color spaces; why only YCbCr? The first reason is that YCbCr is a very efficient model to work with (the GPU we were using during this research was an NVIDIA CUDA GT 570), because Cb and Cr are the chroma components, stored as their own planes in a planar representation, and the Y component is only the luma component. The luma component is the luminance value; it is among the most fundamental units of measurement. And specifically for histogram equalization, we are only concerned with the luminance value, that is, the Y channel of the YCbCr color space. That is why we converted into YCbCr. The other reason is that with increasing depth, non-uniform illumination occurs, so we needed the luminance channel to do preprocessing on the image. That is why we converted the image into the YCbCr color space, and we will come to the next step. So this is the step: there is the input RGB image, just a sample from our sources, and the RGB image is then converted into YCbCr.
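To make step one concrete, here is a minimal sketch in OpenCV with C++ (the stack the speakers name later in the talk). The file names and parameter choices are illustrative assumptions, not the authors' actual code; note that OpenCV loads images as BGR and calls this color space YCrCb:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Load the raw underwater photograph (path is a placeholder).
    cv::Mat bgr = cv::imread("artifact_raw.jpg", cv::IMREAD_COLOR);
    if (bgr.empty()) return 1;

    // Convert from BGR to YCrCb, OpenCV's ordering of the YCbCr family.
    cv::Mat ycrcb;
    cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);

    // Split into planes: channels[0] is the luma (Y) plane that the rest
    // of the pipeline equalizes; channels[1] and channels[2] are the
    // chroma planes (Cr, Cb), which are left untouched.
    std::vector<cv::Mat> channels;
    cv::split(ycrcb, channels);
    cv::imwrite("luma_only.png", channels[0]);  // the luminance-only image
    return 0;
}
```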
So you can see the YCbCr conversion: the Y plane is just black and white, or you can say it gives an image which contains only the luminance component. That's it, nothing else. So yeah, on to the next step.

The second step is to perform CLAHE on the converted image. So yes, what is CLAHE? CLAHE is Contrast-Limited Adaptive Histogram Equalization. Why do we do it? Why not normal histogram equalization, or even plain adaptive histogram equalization? The thing is, every image has a signal-to-noise ratio, called SNR. With adaptive histogram equalization, when we do enhancement, over-amplification of noise also occurs. But we don't want noise; we want a high SNR and low noise in the image, because the primary goal is to extract the artifact, something like this. That is why contrast-limited adaptive histogram equalization. Also, if you want to explore this algorithm and approach in detail, we are in the process of creating a GitHub wiki and a resource for the whole algorithm, so we can discuss it there. Let me move to the next slide.

Yeah, so this is the original image. You can see that the intensity values of the first image are concentrated in a particular region, and the changes in contrast are not visible; some of the features that matter are there, but we are not able to see them very clearly. After CLAHE, the features are very clearly visible, and the amount of noise added to the image is not that much. If you applied normal histogram equalization, the noise would also be amplified. That is why CLAHE is used. Okay. So, yes, I will now move to the next slide, to step three.

So at this point what we have is an image in the YCbCr color space, and we have to convert it back to the RGB color space so the output stays coherent with the original image. After that, we apply a bilateral filter. So what is a bilateral filter? A bilateral filter is an edge-preserving, noise-reducing smoothing filter. The real strength of the bilateral filter is that it preserves the features, the edges, and at the same time reduces the noise level. That is essential for us, because we do not want to over-smooth the image and lose the very features we are after. How does it do that? Filters like the Gaussian filter and the median filter replace each pixel value with an average of its neighborhood, so edges get smeared. The bilateral filter instead weights the neighbors with a probabilistic (Gaussian) function of both spatial distance and intensity difference, so pixels on the other side of an edge contribute very little. You can delve into the details of that weighting function afterwards; does anyone want to add anything on the bilateral filter? We can talk about it later. Okay, please go ahead.

So, as you can see in this image: on the left-hand side we have the original image, and near the border there are a lot of white-colored dots, as you can see. On the right-hand side, here, we have the smoothened image after applying the bilateral filter.
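Here is a minimal OpenCV/C++ sketch of steps two through four as described above: CLAHE on the luma plane, conversion back to RGB (BGR in OpenCV), and the bilateral filter. The clip limit, tile size, and filter parameters are illustrative assumptions; the talk does not give the values actually used:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Continues from the previous sketch: `ycrcb` holds the YCrCb image.
cv::Mat enhance(const cv::Mat& ycrcb) {
    std::vector<cv::Mat> channels;
    cv::split(ycrcb, channels);

    // Step 2: CLAHE on the luma (Y) plane only. The clip limit caps how
    // much any histogram bin can be amplified, which keeps noise from
    // being over-amplified the way plain adaptive equalization would.
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    clahe->apply(channels[0], channels[0]);

    // Step 3: merge the equalized luma with the untouched chroma planes
    // and convert back, to stay coherent with the original image.
    cv::Mat merged, bgr;
    cv::merge(channels, merged);
    cv::cvtColor(merged, bgr, cv::COLOR_YCrCb2BGR);

    // Step 4: bilateral filter, the edge-preserving smoother. Weights fall
    // off with both spatial distance (sigmaSpace) and intensity difference
    // (sigmaColor), so edges survive while flat regions are denoised.
    cv::Mat smoothed;
    cv::bilateralFilter(bgr, smoothed, 9, 75.0, 75.0);
    return smoothed;
}
```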
So there is a lot of difference between the two images, as you can clearly see. So, yeah, our last step is wavelet denoising. What we are doing here is reconstructing a signal from a noisy one. We have an RGB color-space image now, and the main idea is that the amplitude, rather than the location, of the wavelet-transform coefficients should be adjusted to get rid of the noise. So wavelet denoising mainly involves shrinking the amplitudes of the wavelet-transform coefficients to reduce the noise. In wavelet denoising we effectively aim for a destructive superposition, if you have read about wave theory: we try to have the noise signal and the actual, meaningful signal out of phase, a destructive superposition of the image and the noise signals. So, this is the example; please go ahead. On the left-hand side is the noisy image. After applying wavelet denoising, you can see that the noise is reduced in the right-hand image, and the SNR, the signal-to-noise ratio, is increased in the second image.

So, this is the original image, which we showed you at the beginning. The features of this image, basically the texture of the rock, look very clear and smooth, but it isn't actually this way. So, let's find out. Can we move to the next slide? So, this is the grayscale image; that is actually the Y component of the YCbCr color space. We have extracted the luminance component of the image. Yes. And this is actually the feature: the rock actually looks like this. This is the amount of texture the rock actually has. But because of light reflections off the silt, we were seeing a very smooth surface.

One might wonder why these features, these textures, are so important for marine archaeologists. We have all heard about weathering of rocks, I think, in our geography courses in primary school. Weathering of rocks is the phenomenon by which marine archaeologists determine the age of rocks. From the texture, the actual amount of weathering is determined: the age of the rocks, and the conditions in which the rocks have been kept. That is why these features, these textures, are necessary for marine archaeologists.

Then, yes, can we move to the next slide? So, yes, this is the final image after bilateral filtering and wavelet denoising. This is actually the feature. And here is the comparison of the original image and the final image. You see a stark difference between the original image and the final image; that is what the texture actually is. These images were taken using a Guppy F80 underwater sensor. So, yeah, can we move ahead? Yes.

So, lots and lots spoken about the algorithm, but what is the impact? Like, how can we, you know, sell this product? How is it beneficial? How has it simplified life? Because in the end, whether we are doing AI or computer programming, if we are not solving problems, I don't think we are creating any impact; it should make our lives simpler. So, what's the impact? Yes. This solution was integrated into the AUVs and ROVs present at the research facility at NIO Goa, and it automated underwater explorations using those AUVs and ROVs.
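(Before the deployment story, a concrete view of the wavelet-denoising step described above: a single-level 2D Haar transform with soft thresholding of the detail coefficients, applied to one channel. The talk does not specify the wavelet family, the decomposition depth, or the threshold, so all of those are illustrative assumptions.)

```cpp
#include <opencv2/opencv.hpp>

// Soft thresholding shrinks coefficient amplitudes toward zero, which is
// the "shrinking of the amplitude of the transform" described in the talk.
static float softThresh(float v, float t) {
    if (v >  t) return v - t;
    if (v < -t) return v + t;
    return 0.0f;
}

// One-level 2D Haar wavelet denoising on a single channel. The LL
// (approximation) band is kept; the LH/HL/HH detail bands, where most of
// the high-frequency noise lives, are soft-thresholded before rebuilding.
cv::Mat haarDenoise(const cv::Mat& channelU8, float thresh) {
    cv::Mat f;
    channelU8.convertTo(f, CV_32F);
    const int H = f.rows & ~1, W = f.cols & ~1;  // use an even extent
    cv::Mat out = f.clone();                     // odd border rows/cols pass through

    for (int y = 0; y < H; y += 2) {
        for (int x = 0; x < W; x += 2) {
            const float a = f.at<float>(y, x),     b = f.at<float>(y, x + 1);
            const float c = f.at<float>(y + 1, x), d = f.at<float>(y + 1, x + 1);
            // Forward Haar over the 2x2 block.
            const float ll = (a + b + c + d) / 4.0f;              // average (kept)
            const float lh = softThresh((a - b + c - d) / 4.0f, thresh);
            const float hl = softThresh((a + b - c - d) / 4.0f, thresh);
            const float hh = softThresh((a - b - c + d) / 4.0f, thresh);
            // Inverse Haar with the shrunken detail coefficients.
            out.at<float>(y,     x)     = ll + lh + hl + hh;
            out.at<float>(y,     x + 1) = ll - lh + hl - hh;
            out.at<float>(y + 1, x)     = ll + lh - hl - hh;
            out.at<float>(y + 1, x + 1) = ll - lh - hl + hh;
        }
    }
    cv::Mat result;
    out.convertTo(result, CV_8U);  // convertTo saturates back to [0, 255]
    return result;
}
```

In practice one would use several decomposition levels and a threshold estimated from the noise level, but this single-level sketch shows the mechanism: detail amplitudes shrink, the approximation is kept, and the reconstruction has a higher SNR. For a color image, split the channels, denoise each, and merge.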
So, what we did was: NIO hosts their own servers, so we deployed this algorithm onto the NIO servers and into their AUVs and ROVs. I'll just give you my personal experience of what I saw when I went on an underwater exploration in the year 2015 with the team. The ferry carried two specialist deep-sea divers who would take photographs of the artifacts, and four other NIO folks, of which one was a field guide and a couple were scientists. But at this point in time, they carry only one AUV, one deep-sea diver, and one scientist, a marine archaeologist. So the number of people we have saved is actually three for every underwater exploration they undertake. And these explorations happened every fortnight; at minimum, they went at least once a month. So this is the impact that we have created. Can we move to the next slide?

So, yes: at least 10,000 man-hours saved every year at NIO Goa. This is huge, yes. And they are now trying to integrate this into NIOT Chennai as well. NIO Goa is on the western coast of the Indian peninsula, and NIOT Chennai is on the eastern coast. So we are trying to integrate this solution, this algorithm, into their AUVs and ROVs too, this time. So, yeah, that is the route ahead. We'll move to the next slide. Yeah, these are the references.

And there is one more thing we wanted to share: what's next? I think we missed that slide. So, what's next? Given the ubiquitous nature of our smartphone cameras, what Himanshu and I are trying to do is develop an application; we are developing it in React Native. We have not made the code live or public yet; it is still a work in progress. We are planning to launch it by the end of April. We are trying to develop an application that will be integrated into our smartphones and will be able to filter out turbidity. We are adding some more features into it, which you will see when we launch the first version. I think Himanshu has more on the tech stack of the application we are trying to develop. So, yeah, please.

So, regarding the application: what we are trying to develop is a React Native application with a simple, minimalistic UI, in which the user can directly click an image underwater and upload it to our servers, then select all the images they want to process, and the final processed images will be sent by email to that user. So it will be a very handy application for this kind of image processing.

There is also one new use case that we see in most parts of, well, I am talking about Indian geography at this point, because that is what we are focusing on. In the northern parts of India, in the winter months, there is a lot of fog, as well as haze. So what we are trying to do is integrate this solution with LIDAR as well, so that in foggy environments we can have an application that works and tells us about the obstructions in front of us, using live frame capture and video capture. So this is still a work in progress.
We are planning to roll it out by the end of April 2018 if everything goes according to plan. We are adding a lot of features in between, but I hope things will work out as planned. So, that's it, guys. I think we can move to the next slide. So, who we are, find us at. So, yeah, any questions? Last slide. Any questions, anybody? I think we are good. Thank you, everyone. Thank you. Oh, there's a question. Oh, finally we have a question.

The question: can this kind of solution be used in a UAV? It can be used out of the water as well, but to use it in a UAV system, maybe some changes are needed?

Yeah, it can cater to the needs of an unmanned aerial system; I think that is what you are talking about. It can cater to that, but the last step in the algorithm was wavelet denoising, right? Wavelet denoising is mostly applied at a lower level. So this algorithm can be used by UAVs, that is the answer to your question, but there need to be some modifications, because the medium changes. I think that is a good suggestion; we can keep in mind what algorithm we should have if the medium changes. For aerial images, I think wavelet denoising would be overkill, because wavelet denoising actually requires a lot of GPU processing; but with a different filter, we can achieve your use case. I hope I answered your question.

Another question from the audience: we are doing some CubeSat business, and we want to do some monitoring of the Earth to adjust the direction or attitude of the CubeSat, so we need a smart monitor. Can your system assist with that?

Absolutely. Sorry, I did not get you properly at first. The questioner clarifies: the camera will watch the Earth, but reflected light can come in from other directions and degrade the system, so maybe this kind of system can support the camera. Yeah, it can be integrated if you are looking for feature extraction specifically. I think you said that you are doing monitoring, right? You want clear features, so this can be integrated. We captured these images using the Guppy F80 sensor, so with a better sensor I think this algorithm will function even better; better sensors as well as better GPUs. The one that we used was not that great; the NVIDIA CUDA GT 570 is not that great, it was a 4 GB GPU. So, provided the required amount of, what is it called, computational horsepower, I think this algorithm will work.

But by GPU, do you just mean the power consumption will be more if you are trying to capture more features? Right, I get that. If you want to do monitoring from a very high altitude, then you need to have good sensors, right? That is one point you cannot deny. And if you have good sensors, the size of the images will be larger, so larger images mean more processing power, more computational cost. But this algorithm, with some modifications, and I am not saying I am 100% sure, but with some modifications, will be able to cater to that need. And that is what we are also trying to do in foggy and hazy situations; aerial imagery will be a different domain, but that is what we are also trying to achieve. Thank you. Thanks a lot.

Yes, I am just wondering: I think there are several other organizations that have probably also encountered this issue of turbidity in images and needed the details. Do you know how they approach the problem, or is there some other way?
When the research started, we did a feasibility study, and we figured out that there are, I would say, some tools; there is a tool called AutoPano Giga. Another thing that we are trying to achieve now is image mosaics. An image mosaic is actually a stitched image, a 3D projection. If you have seen Titanic: if I create an image mosaic of the sunken ship, it will look like a 3D projection of the Titanic. If you have seen the Iron Man movies, those are mostly other kinds of projections, but an image mosaic has more of a real sense; it will not be a single color. So there is a tool called AutoPano Giga that actually does that, but it is a costly tool, and we are trying to open-source this capability. So, to answer your question: yes, there are tools that help do it, proprietary tools. But our USP is, first of all, that we are free and open source at this point. The second thing is that our stack, the stack that we have used, is very minimalistic: we are using OpenCV with C++, and we are planning to move to Python as well, because we are not able to scale at this point in time. So, yeah, I think our best USP is that we are open source and free, which is what everyone needs, and this solution has already been used by the National Institute of Oceanography in India; they have automated their underwater explorations with it. This solution is already live. So, yeah, thanks a lot. Thank you.
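(As a pointer on the image-mosaic direction mentioned above: OpenCV ships a high-level stitching module, so a basic mosaic can be sketched in a few lines of C++. This is a flat panorama stitch under assumed file names, not a full 3D mosaic pipeline and not the authors' code.)

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <iostream>
#include <vector>

int main() {
    // Overlapping frames from a survey pass (paths are placeholders).
    std::vector<cv::Mat> frames;
    for (const char* path : {"frame1.jpg", "frame2.jpg", "frame3.jpg"}) {
        cv::Mat img = cv::imread(path);
        if (!img.empty()) frames.push_back(img);
    }

    // SCANS mode suits flat, nadir-style captures such as seabed passes;
    // PANORAMA mode instead assumes a camera rotating about its center.
    cv::Mat mosaic;
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);
    cv::Stitcher::Status status = stitcher->stitch(frames, mosaic);
    if (status != cv::Stitcher::OK) {
        std::cerr << "Stitching failed, code " << int(status) << "\n";
        return 1;
    }
    cv::imwrite("mosaic.png", mosaic);
    return 0;
}
```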