Hello and welcome to this session, "Challenges of using V4L2", that is Video4Linux 2, "to capture and process video sensor images". Thank you for attending this presentation. Let me start by introducing myself. My name is Eugen Hristev, and I am an embedded Linux engineer at Microchip, part of the MPU32 Linux team. My main areas of development and interest are the second- and third-stage bootloaders, and I also develop Linux kernel device drivers. Most relevant for this presentation, I am also maintaining and developing Microchip's V4L2 drivers: the Microchip Image Sensor Controller (ISC) and the Microchip Image Sensor Interface (ISI). So that is about myself.

Here is a short agenda of what I will present today. I will start by explaining how digital sensors work and how they send images. Then I will continue with what happens to the data once it enters our pipeline, in hardware and in software, and how it is turned into the real photos you see on your screen at the end of the pipeline. Along the way we will look at what issues can occur during this process, what challenges we face in obtaining a better-quality photo, and how we can cope with these situations: how we can alter the behaviour of the pipeline, and how the V4L2 subsystem can help us find the cause of the issues and adjust the software and hardware pipeline to get a better picture and solve the problems I will be presenting today.

To begin, here is the top-level diagram of the complete system I will be presenting; we will get into the details in the following slides. As a quick overview, it shows how the user interacts through the V4L2 subsystem with the hardware and with the drivers, and where the sensor sits relative to the hardware pipeline, the sensor control driver, V4L2, and the user itself, both in user space and in kernel space.

So, following the agenda, let's move to the first topic and start from the beginning: what is a digital video sensor, how does it work, and how do we obtain the data we need to take a photo? To explain that, let's look at the exact functionality of a sensor. On the right side of the slide you can see how light enters the sensor through photosensitive cells and how it is split into the different colours of the spectrum. Inside the sensor we have photosensitive cells that are sensitive to green, to blue, and to red, and these cells convert the absorbed light into data that we can read out later. The array of cells inside the sensor is called the Bayer array. The Bayer array is depicted in the image on the left side of the slide, where you can see exactly how the blue, green, and red pixels are laid out. So, in summary, inside the sensor the light is split into different colours and then captured by the Bayer array. This is how the Bayer array looks; let's now think about what it implies and why this particular pattern was chosen.
If we look at the Bayer array again, we notice that it actually has more green pixels than red or blue. That observation answers the question of why: the human eye is much more sensitive to green light than to blue or red light, so the Bayer array tries to capture more information from the green part of the incoming light than from the blue and the red. The pattern was also chosen because it is cheap, effective, and simple to process afterwards.

Looking at the Bayer array, we can ask ourselves whether we lose colour information in this process: the distance between same-colour pixels is significant, and each location records only one colour, blue, green, or red. We will see in the following slides what happens to the colour information, how we manage not to lose it from one pixel to the next, and how we convert the pixels of the Bayer array into a real photo.

The lower part of the slide shows an interpretation of the raw Bayer data, the scene exactly as the Bayer array sees it. We can distinguish the green, red, and blue pixels, and we roughly understand what is in the image, but it does not really look like a photo we would normally take with a camera. We will walk through the process of turning this Bayer data into a real, usable photo. Note that after the photo cells capture the light, the sensor uses an analog-to-digital converter, so what we actually receive is the information as bits.

So far I have explained what a Bayer array is, how it works, how it captures light, and how that light is turned into bits. But what happens next with this pixel data, what do we do with these bits so that we can see a real image? The answer is a process called Bayer interpolation, or demosaicing. In Bayer interpolation, each pixel of the Bayer array is enriched with information from its neighbouring pixels, so that in the end every pixel has all the colour channels, as we expect from a real photo. On the left side we have the Bayer array image, where you can see the individual pixels and the differences between them, and at the bottom of the slide the pixels split per channel: a red channel, a green channel (which contains both green positions of the Bayer array), and a blue channel. In the upper middle of the slide you can see the photo obtained after Bayer interpolation. So the interpolation process takes, for every pixel of the Bayer array, information from its neighbours and produces a photo that looks like what we expect: not pixelated and not split per channel like the raw Bayer data. This interpolation is done in the pipeline, in hardware, by a dedicated block that performs the computation for each pixel of the incoming pixel stream. So now we understand what Bayer interpolation is and what happens when we take pixel data off the sensor.
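To make the interpolation idea more concrete, here is a minimal bilinear demosaicing sketch in C for a single interior pixel that happens to sit on a red cell of an RGGB pattern. This is only an illustration of the principle of borrowing colour information from the neighbours; it is not the algorithm the hardware block actually implements, and it skips border handling entirely.

```c
/*
 * Bilinear demosaicing sketch for one interior pixel of an RGGB Bayer
 * frame (8-bit samples). Illustration only; the hardware block uses
 * its own, more elaborate algorithm.
 */
#include <stdint.h>

/* Average of the 4 diagonal neighbours. */
uint8_t avg_diag(const uint8_t *raw, unsigned stride, unsigned x, unsigned y)
{
    return (raw[(y - 1) * stride + x - 1] + raw[(y - 1) * stride + x + 1] +
            raw[(y + 1) * stride + x - 1] + raw[(y + 1) * stride + x + 1]) / 4;
}

/* Average of the 4 horizontal/vertical neighbours. */
uint8_t avg_cross(const uint8_t *raw, unsigned stride, unsigned x, unsigned y)
{
    return (raw[y * stride + x - 1] + raw[y * stride + x + 1] +
            raw[(y - 1) * stride + x] + raw[(y + 1) * stride + x]) / 4;
}

/* Reconstruct full RGB for a pixel sitting on a red cell: red is
 * measured directly, green comes from N/S/E/W, blue from diagonals. */
void demosaic_red_site(const uint8_t *raw, unsigned stride,
                       unsigned x, unsigned y, uint8_t rgb[3])
{
    rgb[0] = raw[y * stride + x];
    rgb[1] = avg_cross(raw, stride, x, y);
    rgb[2] = avg_diag(raw, stride, x, y);
}
```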
But is this interpolation process flawless, or does it have issues? The answer is that yes, demosaicing has a pitfall: on edges, strange artifacts can appear in the photo. If you look closely at the photo on this slide, there are some odd artifacts in the upper part; in case you have not spotted them, the next photo zooms in on them, circled in black. You can see that on this edge there are light-coloured pixels and strange colours that should not be there. Why does this happen with the interpolation? Because pixels that sit right on an edge take neighbour information from pixels on the other side of the edge, so the two sides get mixed and we do not obtain clean edges with this naive interpolation algorithm.

What can we do to solve that? Here is another photo of the same, or a very similar, scene in which the artifacts are gone. The fix is done in the dedicated hardware: it can detect that there is an edge. A special algorithm checks whether, for pixels of the same colour, there is a very large difference between the values on one side and the other; if so, it treats the location as an edge. Sometimes it works and sometimes it does not, not all edges are detected all the time, but this algorithm prevents most of the strange artifacts I showed previously.

Inside the system, the image sensor produces the pixel stream, which you can see on the left side of the diagram, and the Bayer interpolation and edge detection blocks inside the hardware pipeline are responsible for the interpolation and the edge detection. The user interacts with all of this through the Video4Linux 2 subsystem interface, and the resulting image is then taken by the user from kernel space through a special character device. So this is a small diagram of exactly how things happen inside Linux, inside the system.

Let me now explain another issue that can occur during image acquisition and processing in the hardware and software pipeline, a problem I call the colour problem, which is tightly related to how we perceive light. Normally we treat light as a single entity, we just call it "light", but in fact light has a temperature, and this temperature affects the way we see the light and the way the sensor sees it. In this photo, the light on the left side has a specific colour, more orange, more yellowish, while the light on the right side is more blueish. The colour temperature is around 1000 Kelvin on the left and around 10,000 Kelvin on the right, and this very big difference in light colour affects how we actually perceive the colours of the objects.
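Since I mentioned the special character device, here is a minimal sketch, in C, of how a user-space application reaches the pipeline through it. The device path, resolution, and pixel format are assumptions for the example; a real application would continue with buffer setup and streaming (VIDIOC_REQBUFS, VIDIOC_QBUF, VIDIOC_STREAMON, and so on).

```c
/* Minimal sketch: open the V4L2 character device, query the driver,
 * and request a format for the demosaiced output of the pipeline. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/video0");
        return 1;
    }

    struct v4l2_capability cap;
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
        printf("driver: %s, card: %s\n", cap.driver, cap.card);

    /* Ask the pipeline for demosaiced RGB565 frames at 640x480. */
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 640;
    fmt.fmt.pix.height = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB565;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        perror("VIDIOC_S_FMT");

    close(fd);
    return 0;
}
```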
Let's move on and take an example photo. Have a close look at it for a few seconds; we can distinguish a rock, some water, the sky, some trees. Now, what is the colour of the water in this photo? What is the colour of the rock, and of the sky? We are tempted to say the water is blue, the sky is blue, the rock is white. I will ask you the same thing about this next photo, which is almost the same photo but not quite: you can see some differences in colour. In this second photo the light and the colours are much more natural; we see the truly blue water, the blue sky, and the white rock. Is there a big difference from the other photo, and why? What I am trying to tell you is that we see not only with our eyes but also with our brain.

So let's look at this black-and-white photo, again a simple photo, and we can distinguish a sky, maybe some trees or bushes, and a rock. But do we see the colours? It is black and white, we cannot see them that clearly, yet our brain understands the colours there and builds the picture in our mind: some green bushes and a blue sky. If you remember old-time photography, long ago we had what is called sepia photography; here is the same photo in sepia. If you look at it, I will ask again: do you see colours in this photo or not? Maybe you will say there are not many colours, but then what is the difference between the sepia photo and the black-and-white one we saw previously? Sepia is again a monochrome photo, made of shades of brown, while black-and-white is made of shades of grey. So sepia is not a colour photo, it is just monochrome, yet our brain can still understand the colours in it.

What I am trying to emphasise with this whole discussion about colours is that our brain can see much more than our eyes can. The sensor does not have a brain, so it needs to be taught what the light around it is and how to adapt its colouring to the specific light in the scene. How do we do that? Through a process called white balancing. Here are the pictures side by side, so you can see the differences before and after white balancing; more examples with the same photo we saw in black-and-white and in sepia; another photo before and after white balancing; and more examples of how the colouring adapts to the specific light, indoors and outdoors. As I said, we need to teach the sensor to adapt its colouring, and we will see how in the following slides.

I will ask you one more question to see how we can teach the sensor. If we look at the photo on this slide, what is the average colour of this photo? It is a simple photo: we see white, we see black, we see some shades of grey, and in the middle a big patch of grey.
If we sum up all the colours here and take the average, it is natural to conclude that the average colour of this photo, this frame, is grey. Now let's move to another photo; this frame is full of colours, and I will ask the same thing: what is the average colour of this photo? In the bottom part we again see some shades of grey, and in the upper part a lot of colours: some red, some blue, some green. What happens if we add them all up? Some patches lack red, some have red, some lack blue, some lack green. The surprise is that if we add them all together and compute the average, this colour card also averages to grey.

We will use this fact, that the average colour of such a scene is grey, to teach the sensor how to adapt to colours. This is done with what we call the grey world assumption and the grey world algorithm. We teach the sensor that what is grey for us must be grey for the sensor as well, and of course this must be done in the ambient light, so the light is taken into account in the calculation. For this we use the colour card you see in this photo, which illustrates exactly the grey world assumption: we assume that every scene which is diverse enough averages to grey.

How do we implement this in our V4L2 driver, so that our pipeline, hardware and software, understands that we have an on-average-grey scene and adjusts what should be grey to actually be grey for us as well? Let's take the photo on the left side, which you saw earlier in the presentation, and have a closer look at it. This photo is somewhat greenish, with very little blue; that is visible with the naked eye. We want to enforce the grey world: we want the scene to be grey on average, and grey on average means that all the components of the photo, the green, the red, and the blue, contribute the same amount on average.

For this we use what is called the histogram. The histogram is computed by the hardware, and it tells us exactly how much of each colour is present in a frame. On the right side of this slide you can see the histogram for each channel: red, green, and blue. A histogram is a representation of how many pixels of each value we have in the frame. Looking at it, there are more high-value green pixels than blue ones, so green is predominant in this photo; there is also a fair amount of red, but very little blue. Looking at the photo with the naked eye we see the same thing: plenty of green and little blue. The histogram computation confirms it.

So what can we do in hardware and software to solve this and adjust the photo so that it looks right? We apply the grey world algorithm, the assumption that everything averages to grey, and we adjust the histograms so that they look the same for every channel: we compute the average of the photo, the grey level, and then adjust the red, the blue, and the green so that they are aligned.
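As a rough illustration of the maths, here is a small sketch of how the per-channel multipliers could be derived from those sums, assuming we already have per-channel pixel sums (for example accumulated from the hardware histograms) and using a fixed-point format where 512 means a gain of 1.0, which matches the default control values shown a bit later. The real driver's computation is more involved and also derives offsets per Bayer channel, so take this only as the idea, not the actual implementation.

```c
/* Grey world sketch: derive channel gains so that all channels
 * average to the same grey level. Caller guarantees non-zero sums. */
#include <stdint.h>

struct wb_gains {
    uint32_t r, g, b;   /* channel multipliers, 512 == gain of 1.0 */
};

struct wb_gains grey_world(uint64_t sum_r, uint64_t sum_g, uint64_t sum_b)
{
    /* The grey target: the common average of the three channels. */
    uint64_t avg = (sum_r + sum_g + sum_b) / 3;

    struct wb_gains gains = {
        .r = (uint32_t)((avg * 512) / sum_r),
        .g = (uint32_t)((avg * 512) / sum_g),
        .b = (uint32_t)((avg * 512) / sum_b),
    };
    return gains;
}
```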
We divide the per-channel sums by the average and compute two things, which are called the gain and the offset. The gain is a multiplier applied to a channel, and the offset is a constant added to or subtracted from it; each channel has an associated gain and offset. So the gain multiplies the channel and scales its values, while the offset simply adds a constant to every pixel. Once we apply the grey world algorithm, compute these gains and offsets, and apply them to the channels, we obtain what we call a white-balance-adjusted photo. If we look again at this adjusted photo and compute its histograms once more, you can see on the right side that the histograms of the channels are now nearly identical, which means that if we sum them up we actually obtain grey: the average of the white-balance-adjusted photo is grey. That is exactly what we wanted from the grey world algorithm, and the corrected photo on the left now looks much better, with colours that are much more natural, much closer to what we expect when we look at the scene.

How do we do this in Video4Linux? V4L2 exposes this interface to us through V4L2 controls. Here I have a slide showing the actual control values. It may look confusing at first, but if you take a closer look you can see exactly the gains and offsets I mentioned: a red component gain, a blue component gain, a green-red component gain, and a green-blue component gain. Remember that in the Bayer array there is a green cell on the red row and a green cell on the blue row, so we have four channels. By default these controls have specific values: 512 for the gains and zero for the offsets. We apply the grey world algorithm once by triggering the "do white balance" control, and then we read back the resulting values. You can see on our photo, highlighted in red on the slide, that the gain for the blue, for example, is now around 3000, so it increased a lot, and the green component offsets are negative. So the gains and offsets have been adjusted more or less as we expected from the histogram: the blue is boosted with a high gain and the green is reduced with a negative offset. The V4L2 controls help us apply this to the channels, and there is also a control that tells the driver to perform this grey world adjustment for us inside the hardware and software pipeline.

That is how it looks from a command-line perspective, but what happens on an embedded Linux camera? It probably looks something like this; maybe you have seen a camera with an explicit white balance button. The exact same thing happens when you press that button: it auto-adjusts the gains and the offsets for you, using the controls behind it. You just press a button, which in fact leads to the same thing, an API call that goes to the driver and runs the white balance algorithm inside it.
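To show how this looks through the API rather than on a slide, here is a sketch that "presses the button" from C using the standard controls defined in videodev2.h. The driver discussed in this presentation exposes its per-Bayer-channel gains and offsets through its own control set, so the exact control IDs and the device path here are assumptions for the example.

```c
/* Trigger the one-shot white balance and read back a channel gain. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* "Press the button": run the grey world algorithm once. */
    struct v4l2_control ctrl = {
        .id = V4L2_CID_DO_WHITE_BALANCE,
        .value = 1,
    };
    if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
        perror("VIDIOC_S_CTRL do-white-balance");

    /* Read back one of the resulting channel gains. */
    ctrl.id = V4L2_CID_BLUE_BALANCE;
    if (ioctl(fd, VIDIOC_G_CTRL, &ctrl) == 0)
        printf("blue balance gain: %d\n", ctrl.value);

    close(fd);
    return 0;
}
```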
So that is a clear picture of what the white balance button does on an embedded Linux camera. What cameras usually do is what we call auto white balance, which is simply the one-shot white balance performed continuously, all the time, so that if we move from one scene to another it adjusts automatically to the specific light. You can see this even with your smartphone: move it from one light to another, for example from indoors to outdoors, and you will see the white balance adapt to the specific light.

If the command line is not very friendly, another way to present the gains and the offsets is a GUI with sliders that let you adjust them manually. White balance also has some drawbacks. If the scene is not actually grey on average, for example if you point your camera at a mostly red box, the algorithm will try to pull that red part of the scene towards grey, which is a pitfall of this algorithm; it is not perfect, of course. It can be improved in different ways, for example by detecting a grey object inside the photo, or by doing what Photoshop does, two white balances, one against black and one against white. And of course you can experiment and do manual tuning, as I said earlier: just move the sliders and see the effect the white balance procedure has on your camera and on the resulting photo.

To complete this chapter on white balance, I added the relevant part of the system diagram, so you can see how the user interacts with the white balance module inside the driver through Video4Linux. A user-space application calls the interface API, the sensor control driver is invoked to adjust the white balance with the gains and the offsets, the pixel stream coming from the sensor and the previous stage of the pipeline is adjusted in hardware according to those values, and in the end the resulting image is taken back through the interface to user space, to the user. That is the small diagram of what happens inside the system when the user adjusts the gains and the offsets through a V4L2 control. This was the discussion on white balance and how we can teach the sensor to adapt to the colour temperature of the light in the scene.

What I will explain next is another challenge, another issue we can have with sensor image capture and acquisition: what I would call the quantity of light. Does it matter how much light our sensor absorbs, how can this affect the photo we take, what algorithm can we apply to deal with it, and how can the driver, the hardware, or V4L2 help us obtain a better-quality photo? Let's have a look at the following photos. On the left side we have a photo with a lot of light in it; there are many white pixels. On the right side we have a much more natural, much clearer photo. What I can tell you is that the photo on the left is overexposed, meaning the pixels are saturated with light; there are so many white pixels that you cannot really distinguish anything in them.
Here is another type of photo: on the left side you again have a photo, this time with very little light. You can see a lot of black pixels, pixels that did not receive enough light; in this case the photo is underexposed. So you can see a clear difference between an overexposed photo and an underexposed photo compared with a normally exposed one. Once we see these pictures, the challenge becomes: how much exposure should we select for our sensor and our hardware pipeline, how can we configure it, and is there a way for Video4Linux or the driver to do this for us? The answer is yes.

Again we can use our friend the histogram, which can help us understand how much light we have in a photo and whether we can adjust the frame to make it better. Take the two photos we looked at earlier, one overexposed and one underexposed, and compute a histogram. This time we do not compute a histogram for each channel, the red, the blue, and the green; we compute one combined histogram, all the channels added together. On the right side you can see that for the overexposed photo the histogram has a lot of pixels at very high values, while for the dark, underexposed photo there are plenty of pixels that are dark, nearly black. That is what the histograms look like for overexposed and underexposed frames. For a normal photo the histogram is much more centred towards the middle, meaning the pixels are mid-range, neither saturated with light nor very dark. So our goal is to use this histogram to adjust the exposure for the incoming pixel stream, so that we obtain a photo we can actually use, a photo we can actually see: not too dark, not too bright.

Again, we can use Video4Linux to control the exposure, directly on the sub-device, the sensor, so that the sensor exposes its photo cells for more or less time for each incoming frame. So we have a V4L2 control with an exposure setting, which can be modified from the command line, from an interface, or through an API, directly in Video4Linux. If we look at a camera, we may know this as exposure compensation, which can be increased or decreased directly from a button; at a high level, on the camera it is just a button, and behind it the API call modifies the exposure for the whole pipeline, starting from the sensor.

That was exposure, which involves the sensor directly. Another aspect I wanted to explain is brightness. Brightness is a V4L2 control that can be modified through the API. Here is a photo of a scene taken with a positive brightness applied; I will show you exactly what that means, but you can already see that the photo is quite light, close to white. And here is the same scene captured with a negative brightness applied to the frame; the photo is very dark. What I wanted to show you is that brightness is a constant that is added on the luminance path inside the pipeline.
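As a sketch of how these two knobs are reached from user space, here is a small C example that sets the exposure on the sensor sub-device and the brightness on the capture device. The device nodes and the values are assumptions; which node exposes which control depends on the board and the media controller topology, and v4l2-ctl --list-ctrls will show what is really there.

```c
/* Set exposure on the sensor sub-device and brightness on the
 * pipeline's capture device, using the standard V4L2 control IDs. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

static int set_ctrl(const char *dev, unsigned int id, int value)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0) {
        perror(dev);
        return -1;
    }

    struct v4l2_control ctrl = { .id = id, .value = value };
    int ret = ioctl(fd, VIDIOC_S_CTRL, &ctrl);
    if (ret < 0)
        perror("VIDIOC_S_CTRL");

    close(fd);
    return ret;
}

int main(void)
{
    /* Longer integration time on the sensor: more light per frame. */
    set_ctrl("/dev/v4l-subdev0", V4L2_CID_EXPOSURE, 1000);

    /* Constant added on the luma path by the pipeline hardware. */
    set_ctrl("/dev/video0", V4L2_CID_BRIGHTNESS, 32);

    return 0;
}
```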
The actual pixel value coming from the sensor is increased or decreased by the brightness constant. So we have brightness, and we have the exposure presented earlier, and one question that can come to mind is: why do we need exposure if we have brightness, and why do we need brightness if we have exposure? The thing is that they are not the same. Exposure lets more or less light reach the sensor, so it controls how much information we get from the incoming light, while brightness actually removes some of the information, some of the entropy, that we receive from the sensor. If we have an overexposed or underexposed photo, regardless of what we do with the brightness, we will not obtain more data from the sensor: if we have only saturated white pixels, whatever brightness we apply, they stay saturated. And if a pixel can represent light on 10 bits but we only use one or two of them, we lose colour and luminance information by having the photo over- or underexposed. So we need both brightness and exposure to obtain a good-quality photo: once the exposure is set correctly and we have enough pixel data, we can apply a negative or positive brightness to obtain a better photo. That is the difference between brightness and exposure, and how both together give a high-quality photo.

On this same screen you can also see contrast; I deliberately left it on the slide so we can move on to it. Contrast is a multiplier applied on the luminance path, but we also expect contrast to adjust the colours. Looking at the bigger picture: brightness is applied as a constant on the luminance path, while contrast is applied as a multiplier, on both the luminance and the colours. To make this simple inside the hardware and the software, we use what is called the YUV representation: we convert the RGB space to YUV, so we have a separate path for the luminance and two separate paths for the chroma, the difference from blue and the difference from red, called Cb and Cr, and these are multiplied by the contrast.

Again, Video4Linux can help us control this hardware block through the interface, through V4L2 controls, and we can try to obtain a better-quality photo by adjusting these sliders, these knobs, on the interface. We can also see how contrast applies to a specific photo and what effect it has. This photo has a small contrast: the multiplier is sub-unitary, which means the values are reduced, so there is not much luminance and not much colour in this photo with a low contrast applied. With a higher contrast there is also more luminance, because contrast is a multiplier on the luma path, and the differences between colours are greater as well, because the contrast is also applied as a multiplier on the Cb and Cr chroma paths of the YUV representation. So that was the discussion related to contrast.
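To summarise how these two controls act on a pixel, here is a small sketch of brightness and contrast applied in the YCbCr domain. The fixed-point format (256 meaning a contrast of 1.0), the clamping, and the order of the two operations are assumptions made for the example; the hardware block uses its own internal precision and ordering.

```c
/* Brightness and contrast applied to one pixel in the YCbCr domain. */
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}

struct ycbcr { uint8_t y, cb, cr; };

struct ycbcr apply_bc(struct ycbcr in, int brightness, int contrast)
{
    struct ycbcr out;

    /* Contrast: multiplier on the luma path and on both chroma paths,
     * applied around the chroma midpoint of 128.
     * Brightness: constant added on the luma path only. */
    out.y  = clamp_u8((in.y * contrast) / 256 + brightness);
    out.cb = clamp_u8(((in.cb - 128) * contrast) / 256 + 128);
    out.cr = clamp_u8(((in.cr - 128) * contrast) / 256 + 128);

    return out;
}
```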
Once again, Video4Linux helps us modify and alter the pipeline, so we can experiment with it and see whether we can obtain a better image. From a very high-level user perspective, brightness and contrast on an embedded Linux camera are just a fancy menu interface where you select the values you want with a few buttons, but behind the scenes the Video4Linux API comes in and alters the pipeline through the driver, down to the underlying hardware. Inside the system, the user, through a V4L2 control, alters the hardware block corresponding to the exposure, colour correction, contrast, and brightness settings. The call goes to the sensor control driver, which computes the necessary adjustments and configures the hardware accordingly, so that the pixel stream coming out of the hardware pipeline is closer to what the user expects. The resulting image is then copied to user space, where the user can keep the photo, use it later, and look at it on a display. That is what happens inside the system.

As a first summary of what I wanted to show you today: digital sensors need tuning. The sensor itself just captures light and converts it to binary data; what happens afterwards, in hardware, is a pipeline made of several modules that affect the pixel stream. This pipeline exists both in hardware and in software, and Video4Linux can modify or alter it through an interface, an API, and help the user obtain a better-quality photo. So today we saw several issues and challenges that can appear in digital photography, and how a driver, a specific hardware product, or specific software can expose such settings to the user. V4L2 captures the images and sends them to the user, but it is largely agnostic about what the acquired image actually contains. We saw what happens with interpolation and edge detection and how they can affect the photo; we saw white balancing, brightness, exposure, and contrast, seemingly simple things that are not so simple and that strongly affect digital photography; and we saw how Linux and an embedded camera system can help us obtain better digital photographs. All of this can be driven with buttons, with sliders, or from the command line; a real camera in a box in fact has such a pipeline inside its hardware. How this is exposed to user space can look, for example, like the photo on the right side of the slide, with exactly the sliders I showed during the presentation, so that the user gets a more visual interpretation of the controls.

The final summary is the complete view of the pipeline and the driver: how the sensor control driver works and how the user can interact with the system to tune it and to solve these kinds of issues and challenges. Normally things look seamless: you take a photo and just look at it. But there are plenty of things happening behind the scenes, and there are several parameters that strongly affect the quality and the result of the photo. Here is a small summary diagram of what happens inside the system.
Now I will also show you a small demonstration of some of the things I discussed today during the presentation. For that I have a dedicated hardware pipeline next to me at this moment. While you see me on this webcam live preview, I will also show you another live preview coming from our dedicated pipeline, showing exactly what my camera is seeing right now. So next to me I have a camera with a pipeline: it takes frames, performs the Bayer interpolation, converts to RGB, does edge detection, and performs colour correction, white balance with the grey world algorithm. The photo is then streamed over the network, so you can see it, and this time you can see me on it as well. Hello.

Let me show you some examples of what happens with the ambient scene. Maybe you remember this little friend from the presentation, the colour checker card. You can see, when I bring the card close to the camera, how the white balance algorithm applies: the auto white balance adjusts and tries to work out how the different colours should be corrected. You can see the image now becomes a bit more blueish, because at some point it tries to adjust towards grey and we do not have an average grey in this scene. You can see how the algorithm auto-adjusts itself to the ambient light: now it is more greenish, and now it has adjusted again to the ambient scene. The colour checker card can help us with that: once the card fills the frame, the colours are correct according to the grey world algorithm. Once we remove it, the algorithm adjusts to whatever is in the scene, which may or may not average to grey; if it sees something like this fully red object, it will try to pull the red towards grey, which is the pitfall of the algorithm. This is what I wanted to show you related to white balance and to what we discussed today. I hope you enjoyed this live demo; it is not happening at the moment you watch it, but it is live right now as I am recording this.

That is all for the presentation. We have a separate Q&A session, and at the end I have provided resources: where the driver lives inside the Linux kernel, and links to all the photos and to the board that was used to capture the scenes you have seen in today's presentation. So thank you very much for attending "Challenges of using V4L2 to capture and process video sensor images". I hope the presentation was of interest to at least some of you, and I will wait for you at the live Q&A session after this presentation. Thank you.