I think we're going to start. Welcome everyone. My name is Marco Antonio Gutierrez, and I'm part of Upcode Academy, where I'm in charge of the robotics and hardware technology area. I'm going to present another view of how to learn to build self-driving cars. We will mainly focus on the computer vision part, which is basically the first step into this world. First, I want to ask people here: how many of you know how to code? Hands up. Who knows Python? OK, that's fine. For our courses, we have an introductory Python course, which is very good if you don't know Python yet. We do a lot of Python on the self-driving car side, especially for prototyping and fast learning. In the end, C++ will also be needed as a language, so we're working on a C++ course that will give you advanced knowledge in that field, because it's a skill that is genuinely required in the self-driving car industry.

So let me start with a bit of motivation: why self-driving cars? First, we all agree that cars are a good thing, right? Cars are very good. They get us places. They help us move. There are a lot of cars worldwide. They help us with transportation, bringing things from one place to another. They shape cities: cities are built around cars, roads, and everything, right? But has anyone been to Manila? It's pretty awful there, right? Traffic is awful. Filipinos spend 16 full days a year stuck in traffic, which costs each of them more than 2,500 SGD in income. It's not much better in Singapore, either. Here, we spend 30 minutes in traffic when we commute, and the ones that have to park spend another 19 minutes looking for a parking spot, which is not good at all. And it gets worse. If you've been to Bangkok, like me and probably a lot of people here, you should know that Thailand has the second-highest number of traffic deaths in the world: 24,000 people die annually in Thailand just because of traffic, and because of this, Thailand loses 3% to 5% of its GDP. In Singapore it's better, because we don't have that many cars, but it's not good either. Singapore has 10,000 road injuries each year out of, if I'm not wrong, around 600,000 cars, so roughly 2% of them get involved in an injury each year, which is not very good. And 160 people die, which is not good either, right?

This is really bad. Cars cause more deaths than any other technology that is not meant to kill people, and road traffic is among the top 10 causes of death in the world. So we should do something about it. How can we help? Self-driving cars. We can decrease these numbers, we can increase safety, and we can save commuting time for people through self-driving cars.

This is a picture of a self-driving car, and these are roughly the skills involved in building one. The basic skill, and the one we're going to focus on today, is the computer vision part. The other skill I already talked about is C++. It's in high demand, pretty much because all the modules that go into a car have to be very efficient, and using this language is a way to make them very efficient. Then we also use a lot of convolutional networks to help with the computer vision part. We need to do vehicle tracking.
We need deep neural networks, behavioral training, and semantic segmentation of the car's environment. We need to do system integration. We need to take care of functional safety. We need path planning, so the car knows how to go from one place to another. We need the car to navigate by itself. We need control modules. We have to do localization of the car in its environment. And we have to do sensor fusion, because cars have a lot of sensors, and all that information has to be put together so the car can take decisions autonomously.

So today we're going to focus on computer vision, as I said, because I think it's a very important field and a very interesting skill, not only for self-driving cars but also for other purposes, and it's a good entry point into the self-driving car industry. But why computer vision? Computer vision is growing a lot. The number of camera sensors out there is growing exponentially. Our cell phones probably have two to three cameras by now. Computers have cameras. We have cameras on the streets. There are cameras everywhere, so there's a lot of information being captured, and this information has to be retrieved. How to retrieve information from these images is currently an unsolved problem: there's so much information we want to extract that we're not yet able to obtain all of it.

For some statistics, there's the Cisco Visual Networking Index; you can check it out on the internet. It says that by 2021, the amount of video uploaded in one month would take five million years to watch. So the amount of video on the internet is going to be huge, and we need ways to process this information automatically. 82% of global traffic is already video, video traffic keeps increasing, and on top of that, virtual and augmented reality traffic on the internet is expected to grow 20-fold. If we look at the growth of YouTube, this chart is a bit old, but already by 2016 you can see around 800 hours of video being uploaded to YouTube every minute, which is roughly 13 hours per second. I know Google has a lot of people, but there's no way they can manually tag all this video. So it's essential to have the tools to annotate this information automatically. Another interesting thing you can have a look at: according to this research, computer vision engineer will be the most in-demand IT job by 2020, because there's going to be so much information captured in images out there on the internet that needs to be retrieved. Google right now is storing pretty much everyone's pictures on Google Photos, but they need the means to retrieve information from those photos so they can later use it. Another important point: currently, you can get up to a 200K SGD salary in the US with these skills. So it makes it pretty interesting, right?

So I'm going to start this workshop by breaking one of the first rules of presentations, which is that you shouldn't do a live demo. But we like to break rules here at Upcode Academy, so we're going to go ahead and have some fun. This is a video that I took. It's not the ideal video; it's not footage from a proper sensor, it's just a video I took from YouTube. It's from Singapore, near Changi Airport; I don't know the names of the roads in Singapore.
But basically, if you look at it, it's just a car driving down the road. The camera quality is not very good, but we can manage to make it work. So what I went ahead and built is this algorithm written in Python. For those of you who don't know Python, this first cell is loading some libraries. I forgot to say: those of you who have the presentation open can execute the code by pressing Shift+Enter, go forward and backwards with the space bar, and switch between presentation mode and notebook mode with Ctrl+R. So I'm going to go ahead and execute this with Shift+Enter, and as you can see, the little number up there changed, so the libraries are loaded. These are the main libraries we're going to use. Then these are the functions I prepared for this demo; I execute them. And then this is the actual algorithm doing lane detection on the video. It takes the video called SingaporeDrive.mp4, detects the lane lines of the road, and outputs the result as a new video. For those of you using the server: I'm doing this locally, so it's faster for me; it will take you a bit more time, but you can see the results yourself. Yeah, it'll take a bit of time. As I said, you can switch from presentation mode to notebook mode with Ctrl+R if you want to see it another way; it might make it easier to check the code. The notebook also has a tool for code folding: if you click on the side of the functions, the code folds and unfolds, so it's easier to read.

So this is the result we got, and you can see in blue that we're detecting the lines. This is our first step, so the car knows where it should be: our first step into self-driving car work. So how do we do this? Let's learn a bit of computer vision for this task. If we have time, I'm going to explain pretty much everything I've done for this lane detection. Later on, if you want, you can go through the code; it's available on GitHub, and it's also in the notebook, so you can have a closer look if anything doesn't make sense now. If you have any questions, just stop me at any time. I'm more than happy to help you out.

So, the tool we're using here: there are different tools in computer vision, but one of the main ones is OpenCV. It was started by Intel, and it's a really good library written in C++, but it has Python bindings that we can use, so we make use of the Python part. It's mainly targeting image processing, which is basically what we're doing here; we're going to use this library just for the image processing.

As I said before, this is the set of libraries we're loading. The most important one is import cv2: this is where we load OpenCV. It's called cv2 in Python because it's the second version of the API. We're using Matplotlib basically just to show the images in the notebook, because you can show images with OpenCV, but that's not supported inside the notebook. NumPy is another important library: it's the main scientific library for Python. If you know Python, you probably know about it, and if you don't, you will, because it's everywhere. Images in OpenCV are actually NumPy arrays, so it helps us work with them. The rest is basic stuff: there's a list for storing results, moviepy is for processing the video, and os is for handling paths in the code.
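For reference, the setup cell presumably looks something like this (a minimal sketch; the exact aliases are my assumption, and the originals are in the notebook):

```python
# A sketch of the notebook's setup cell; aliases are assumed, not the demo's verbatim code.
import os                                  # handling file paths
import cv2                                 # OpenCV, version 2 of the API
import numpy as np                         # images in OpenCV are NumPy arrays
import matplotlib.pyplot as plt            # showing images inside the notebook
from moviepy.editor import VideoFileClip   # frame-by-frame video processing
```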
But let's start from the beginning. What's an image, right? This is an image of the letter A, and you can see on the right there's a bunch of numbers: each one of the pixels has a number. It's a grayscale image, so we only have values from white to black. White is a one in this example, and the values go down towards zero, so depending on the intensity of the pixel you get a different value. That's basically it. If we unfold this, this is how I'm loading an image and showing it. With color images, we will have three values per pixel: red, green, and blue. The way we load a picture in OpenCV is just cv2.imread and then the path to the picture. The function I'm using to show the picture basically just displays it with Matplotlib in the notebook, so you can ignore that part. So I fold this, I read the picture here, and I show the image. If we execute this with Shift+Enter, you should get the image. This is one of the frames I took from the video, so we can see the process of how we detect the lane lines on it.

So we have an image. If we go back here, the image was loaded and saved into the variable image; that's where we have our image. If we check this image and look at its shape, executing again with Shift+Enter, this is the shape of the image: 720 by 1280 pixels, with three values per pixel, which are the red, green, and blue. Then, if we want to access the first pixel, we execute this code, and we can see the three values of the first pixel: 64, 70, 79. You can do it yourself: you can come here and change the indices (it's not very amazing, but sometimes it works) and get the value of a different pixel. You can move around the matrix, right?

Once we know that an image is a matrix, I'm going to introduce the concept of masks. Masks are basically images, matrices, with this shape: some pixels are black and some pixels are white. They're useful as the output of algorithms, because they let us select parts of an image. If I have an algorithm that gives me some output selecting some part, I can create a mask and then select that part of the image. We're going to use this to select white pixels, because lane markings on the road are white. We set up some numbers: these are the values of red, green, and blue, which in OpenCV go from 0 to 255. We basically set up a lower threshold and an upper threshold, and with this function called inRange, we can tell OpenCV to filter the image and keep only the pixels that fall between those two values. With that, we create a white mask that keeps only the white parts of the image, then we select that part and return it. So if we execute this code, we should get the white parts of the image. But this "white" is actually not very good: it's picking up the sky and other colors. We think the lines are white, but if we look at the numbers, they might not actually be white, because of lighting, the camera position, and other external factors; there's a lot of noise. The values are not exactly white; it's not always white.
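As a rough sketch, that RGB mask step might look like this (the function name and thresholds are my own illustrative choices; the demo's exact numbers are in the notebook):

```python
import cv2
import numpy as np

def select_white_rgb(image):
    """Keep only the pixels whose R, G and B values are all near white."""
    # Illustrative thresholds; the demo's exact values are in the notebook.
    lower = np.uint8([200, 200, 200])   # lower threshold per channel
    upper = np.uint8([255, 255, 255])   # upper threshold per channel
    white_mask = cv2.inRange(image, lower, upper)          # 255 inside the range, 0 outside
    return cv2.bitwise_and(image, image, mask=white_mask)  # apply the mask to the image
```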
So we have to find another way to do this. For this, we take advantage of the different ways of representing an image. We talked about the RGB representation, where we have a red value, a green value, and a blue value, and all the colors come from that information. But we can also represent a pixel by its hue, lightness, and saturation (HLS), or by hue, saturation, and value (HSV). For that, we have to change the representation, which is basically just a transformation of the values, and OpenCV gives us, I think, more than 150 functions to convert between these representations. So I'm going to try converting this image into HSV and see how it looks. If we convert it, this is what we get. It looks a bit better, but it's still not very good for detecting the lanes. So I'm going to go ahead and try HLS. If I execute this code, I get this image, and it seems I can probably detect the lanes here. That makes sense: if we go back and look at the HLS representation, if I use the upper part of the lightness axis, the higher values of lightness, I will get whatever is close to white.

So if I go back to this image, this is what we got. What I'm doing here is the same thing I did before with the RGB image, but I'm thresholding only on the lightness value, which is the middle channel. I did a bit of trial and error with this code until I got the output I was looking for. It's basically the same: we're making a white mask; it's just that, if you look at the first line, I'm converting the whole thing into HLS first. So I fold this, I execute with Shift+Enter, and this is what I get. We have a bit of noise up in the sky, but still, we were able to get the lanes. That's the first step.
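Concretely, that HLS selection might look like this (a sketch; the threshold values are illustrative, not the demo's exact numbers):

```python
import cv2
import numpy as np

def select_white_hls(image):
    """Select near-white pixels via the lightness channel of HLS."""
    hls = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)  # assumes the frame is in RGB order
    # Any hue, high lightness (the middle channel), any saturation; illustrative values.
    lower = np.uint8([0, 200, 0])
    upper = np.uint8([255, 255, 255])
    white_mask = cv2.inRange(hls, lower, upper)
    return cv2.bitwise_and(image, image, mask=white_mask)
```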
Now we're going to move forward and try to find the edges of these lanes. To do that, we're going to use a function called Canny. I usually put a link to the documentation in the slides, so you can click there and go to the OpenCV documentation, read about the function's parameters, and have a look at the tutorials they have. They're amazing; they have really good Python tutorials on pretty much everything. Canny is a multi-stage algorithm. First it applies a Gaussian filter, which basically makes the image blurry; we'll go into details in a bit, but for now, just think of it as blurring the image. Then it finds the gradient direction of each pixel. A gradient is a vector that points in the direction in which the magnitude changes. This is an example: the intensity of the pixels changes this way, so we get a gradient pointing this way. That seems like a good way to find edges, because we just have to look for pixels that are a local maximum in their neighborhood: the ones that are very dark and then suddenly there's something bright next to them. That's probably an edge. Finally, it uses what's called hysteresis thresholding, which is basically using a lower and an upper threshold to determine whether a pixel belongs to an edge or not. We'll explain this in a bit more detail later.

So what you have to take home about Canny: it's good for edge detection, and it's good for pre-processing images prior to line or shape detection. The major drawback is that it's very sensitive to noise; that's why we blur first. And you need to use thresholds, so the usual problems with thresholds apply: your threshold may be too high, you have to try different thresholds, things like that. An important tip: the smoothing, the blurring, really helps, so it's important to do it with the right kernel size, which you'll see now. Larger kernels will work better, but smaller kernels will be faster. We're going to see how to use kernels in a moment.

First, we're going to convert the image to grayscale, because the Canny algorithm measures the gradient, and if we convert to grayscale we can compute those gradient vectors properly. So we're going to go ahead and execute this code. It's basically just the cvtColor function from OpenCV converting the RGB image into gray. If we execute it, we get the same image, but in grayscale: just one value per pixel, as we said before, right?

So now we get to the Gaussian blur stage, so that afterwards we can do the Canny edge detection. Gaussian blur is the function that will make our image blurry, and this way we can remove noise from the images. To see how this is done, I'm going to give you a rough idea. I'm not going to explain all the maths behind it, because you don't really need to know all the maths. Even I don't remember them: I studied them at some point, but now I just know how to use this stuff, and if I really need the details, I go look them up in the tutorials or on Wikipedia. The most important thing is to have a rough idea of the function, what it can do and what it cannot do.

The way Gaussian blurring works is through what's called a convolution. This is very important in computer vision: you'll see "convolutional networks" all around; convolution is very well known now. Basically, you take a kernel and slide it over an input image, and you get another image: through the kernel, you take a weighted average of a small patch of pixels, the patch multiplied element by element with the kernel, and the result becomes one pixel of the output. So to get a rough idea, it combines the pixels in each neighborhood into one, and that blurs the image. In Gaussian blur, the kernel is a Gaussian function. For those of you who don't know what a Gaussian function is, this is a 2D Gaussian, and the kernel follows its shape, so the biggest values are in the middle and the lowest values are around the outside. Because the intensity changes rapidly at edges, we want to smooth those changes to reduce the noise; that's the whole idea of the Gaussian. And the kernel size can be selected: the kernel is the thing in the middle that we use to make the image blurry. The bigger the kernel, the blurrier the image and the more noise we can remove, but at the same time, the computation cost increases. If we're processing video, as we are now, our video is very short, but if we're processing a big video, this can take a lot of time. If we use OpenCV, it's very easy.
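If you want to see the kernel itself, OpenCV can build it for you. This sketch (not the demo code; the file name is hypothetical) constructs a small Gaussian kernel and convolves with it by hand, which is exactly what GaussianBlur does internally:

```python
import cv2

# Build a small Gaussian kernel explicitly; sigma <= 0 lets OpenCV derive it from ksize.
k = cv2.getGaussianKernel(ksize=5, sigma=0)   # 5x1 column vector
kernel_2d = k @ k.T                           # outer product -> 5x5 kernel, peak in the middle
print(kernel_2d.round(3))                     # largest weights at the center, smallest at the edges

gray = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # "frame.jpg" is a hypothetical file
blurred = cv2.filter2D(gray, -1, kernel_2d)   # convolve the image with the kernel by hand...
same = cv2.GaussianBlur(gray, (5, 5), 0)      # ...which is what GaussianBlur does for you
```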
We just use the GaussianBlur function: we pass it the image, and we can select the kernel size, which we selected as 15 here. It has to be a positive integer, and it has to be odd. So we selected 15, and basically we take the gray image and blur it. If we execute this code, we get the output: it's a bit blurrier, and it has removed some noise. You can go back and forth between the images to see the difference: this is the one that is not blurry, and this is the blurry one, right?

So now we have a blurry image, and we want to do the actual Canny edge detection. OpenCV has its own implementation of this function, and it works like this: it looks at the gradient value of every pixel. If it's above the upper threshold, the pixel is accepted as an edge. If it's below the lower threshold, it is rejected. And if it's between the two thresholds, it is only accepted if the pixel is connected to one that is above the upper threshold. An important tip for using this function: there's a recommended upper-to-lower ratio between these two thresholds, either 2:1 or 3:1. And there's a lot of trial and error in this. What I did is set the upper threshold first, watch what was being kept and what was being dropped, and then set the lower threshold to get exactly the information I wanted from the picture, right? So here's the code I wrote: it's basically calling the Canny function from OpenCV, very simple, using the lower threshold and the upper threshold that we set. You can play around with the thresholds if you want and see the different outputs you get. We execute this, and this is what we get: the edges of the blurry image we had before.

Next step: because we still have a lot of noise, and because we know this camera is fixed in the car and the road is always in the same position in the image, we can select just the part of the image that belongs to the road. This is called a region of interest: we include only the part we know is important to us and get all the noise out. So what we do here is select a part, make a mask as we did before, and exclude the rest of the image from the processing. We make a polygon: you can look further into the code, but basically we define some vertices on the image, as you saw before in blue: bottom left, top left, top right, bottom right. Once we select those, we make a mask and tell OpenCV to cut the image, keeping only this part.

So if we execute this code... let me close this. This doesn't want to close. Wait. Sometimes the code doesn't want to fold. All right: this is masks, this is the Canny edge, the Gaussian blur, and this is the region of interest, and it's not folding. Well, you can probably see it on your screen, but my code folding is not working on this function now, so I can't show it to you that way. You probably see the image like this. You can see we cut it there, right? So I'm just going to go back to the cutting. Yeah, here we cut the image.
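Since the code folding didn't cooperate there, here's a rough sketch of what such a region-of-interest cut typically looks like (the corner fractions are illustrative, not the demo's exact vertices):

```python
import cv2
import numpy as np

def select_region(edges):
    """Black out everything outside a polygon covering the road area."""
    rows, cols = edges.shape[:2]
    # Illustrative corner fractions: bottom left, top left, top right, bottom right.
    vertices = np.array([[
        (cols * 0.10, rows * 0.95),
        (cols * 0.40, rows * 0.60),
        (cols * 0.60, rows * 0.60),
        (cols * 0.90, rows * 0.95),
    ]], dtype=np.int32)
    mask = np.zeros_like(edges)          # start from an all-black mask
    cv2.fillPoly(mask, vertices, 255)    # paint the polygon white
    return cv2.bitwise_and(edges, mask)  # keep only the pixels inside the polygon
```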
And now, once we have the image cut, we're going to go through the Hough transform. So we have the edges, and now we want to detect lines, right? We're going to use the Hough transform, which is a function that can detect any shape that can be expressed in a mathematical form. We can express lines in mathematical form, so we can detect lines. It uses a voting procedure; it's a bit complicated to explain here, but you can have a look at the OpenCV tutorial linked here. It's very good, and it will teach you all you need to know about the algorithm and how the whole thing works. For now, what we need to know is that we can detect lines using this function that OpenCV has. We're going to use the line version of it, and specifically the probabilistic one, which doesn't use all the points in the image, just a subset of them. It works pretty much as well as the regular one, and it's faster, which matters because we're processing video; it would take a lot of time if we used all the points.

There are a few parameters we have to set for this Hough transform: rho, which is the distance resolution of the voting accumulator, in pixels; theta, which is the angle resolution of the accumulator, in radians; a threshold on how many votes the accumulator needs before it considers that a point belongs to a line; a minimum length for a line to be considered a line; and a maximum gap between points for them to be considered part of the same line. These are the values we have to set. So if we check here, I have a function that basically draws the lines, but that's not very important; it's just to show the results. And then I have the call to OpenCV's Hough line detection with the actual values of rho, theta, the threshold, and the minimum length and maximum gap. What I do here is pass in the image with the region of interest already cut, get back a set of lines, and then draw those lines onto the image so we can see the result. If I execute this, we see a bunch of lines appearing where we had edges before. There's a bit of noise, but we'll take care of that later.

What happens here is that we have multiple lines detected for each of the lanes on the road, and some of them are only partially recognized. So we have to extrapolate these lines to cover the full length we want to use. There's a trick here: if we look at the lanes of the road, one of them is inclined one way and the other one the other way. So we can use the fact that the left lane will have a positive slope and the right lane will have a negative slope. When you write a lane as a line function, the lines with a positive slope lean one way and the others lean the other way. In OpenCV images, the y axis is reversed, so the signs are actually flipped, but the basic idea is that the two lanes have opposite slopes. So we can group the lines we detected into left lane lines and right lane lines using the slope. This is what we do here.
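Here's roughly what that call and the slope-based grouping look like (a sketch; the parameter values are illustrative, not the tuned values from the demo):

```python
import cv2
import numpy as np

def hough_lines(region):
    """Probabilistic Hough transform on the masked edge image."""
    return cv2.HoughLinesP(region,
                           rho=1,              # distance resolution: 1 pixel
                           theta=np.pi / 180,  # angle resolution: 1 degree, in radians
                           threshold=20,       # minimum accumulator votes for a line
                           minLineLength=20,   # discard segments shorter than this
                           maxLineGap=300)     # max gap between points on the same line

def split_by_slope(lines):
    """Group detected segments into left/right lane candidates by slope sign."""
    left, right = [], []
    if lines is None:
        return left, right
    for line in lines:
        x1, y1, x2, y2 = line[0]
        if x2 == x1:
            continue                           # ignore vertical segments
        slope = (y2 - y1) / (x2 - x1)
        # Image y grows downward, so here the left lane has a negative slope.
        (left if slope < 0 else right).append(line)
    return left, right
```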
This function is a bit long, but the idea is that we're grouping left lane lines and right lane lines using the slope as a discriminator, and we're taking an average of them, giving the longer lines a bit more weight, because they should be a better reconstruction of the lane on the road. That's what this function does. I execute the function, so we have it in the Python session, and now we can start drawing the lanes.

To draw the lanes, we need to convert the slope and intercept we calculated in the previous step back into pixels. The slope and intercept we calculated are here; let me go back. So we have a slope and we have an intercept: that's the equation of a line. For those of you who don't know it, I don't have a whiteboard, but I can write it here: y = mx + b, where m is the slope and b is the intercept. The slope determines the inclination of the line, and where it cuts the axis is determined by the intercept. And because y is inverted on the image, that's why the discrimination on the slope is flipped, as I said. Now that we have these two values, what we do is give OpenCV the first and last pixel of the segment where we want the line to be drawn. So we have to calculate these two points using that formula: it's just basic algebra on the line equation to get x1, y1, x2, y2, which are the start and end pixels of our line. We get these points out as a list, we use the function cv2.line to draw the lines, and then we use addWeighted to blend the images together. You can see the documentation for these functions by clicking on the links down there.

This is the function that does the calculation. We set y1 equal to the bottom of the image, because we know the lane starts at the bottom, and we want to draw it up to a certain point, which is 0.7 of the image height, so roughly around here. Then, using the line equation, we calculate the corresponding x points: one x goes here, and the other one goes over there. And then we just draw the lines. If you check the color I'm using here, it's a different one, so we can differentiate this result from the previous one: I think the previous one was blue, and this one is using green. We execute this, and we get the green lines detected and drawn there, right?

Then we use the addWeighted function, which takes two images and merges them together. We build an image of zeros, we draw the lines onto that image, and then we merge the two: we keep 100% of the original image and add 90% of the lane image on top, which is why you see a bit of transparency on the lines here. You can also check the documentation on this through the links I showed before.
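Putting that step into code, a sketch might look like this (helper names and the 0.9 blend weight follow the description above; they're my reconstruction, not the demo's verbatim code):

```python
import cv2
import numpy as np

def make_line_points(rows, slope, intercept):
    """Turn y = slope * x + intercept into two pixel endpoints."""
    y1 = int(rows)         # start at the bottom of the image
    y2 = int(rows * 0.7)   # extend up to 70% of the image height
    x1 = int((y1 - intercept) / slope)  # invert the line equation: x = (y - b) / m
    x2 = int((y2 - intercept) / slope)
    return (x1, y1), (x2, y2)

def draw_lane_lines(image, lane_lines, color=(0, 255, 0), thickness=10):
    """Draw the extrapolated lanes on a blank canvas and blend it over the frame."""
    overlay = np.zeros_like(image)      # an image of zeros, same size as the frame
    for p1, p2 in lane_lines:
        cv2.line(overlay, p1, p2, color, thickness)
    # 100% of the frame plus 90% of the overlay -> slightly transparent lines.
    return cv2.addWeighted(image, 1.0, overlay, 0.9, 0.0)
```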
And what we have now is called a pipeline, a computer vision pipeline. We use these pipelines a lot in computer vision for different purposes when processing images: it's basically a set of steps that we execute in order to produce a final outcome. Here I created a class that takes images and processes them, and you can see the actual pipeline in the process method. We select the white, we convert to grayscale, we do some smoothing, we detect the edges using Canny, we select the region using the ROI cutting we saw before, we detect the lines using the Hough transform, and then we average those lines to get the real lane lines out of it. We execute this code so we have it, and we're ready to process the video.

We're now able to process an image, so now we can process a video. For this, we use the library we loaded before, which takes a video as input and lets us process the frames one by one using the function we just built. So we execute this, and you can see it's doing the same kind of processing we saw before. I forgot to mention: for those of you executing this for the first time on the server, it might start downloading the ffmpeg library, because if you don't have it, the library will download that tool; it's needed for processing the video. So it will take a bit more time, but it should be quite fast compared to the time the whole server takes to set up, right? So we're almost there, and we can finally see the output; we're done here. We can now check the video, with the lines in green, produced by the new algorithm we just wrote, right?
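To tie it together, a compact sketch of that pipeline class and the video step (it reuses the helpers sketched earlier; average_lanes is a hypothetical stand-in for the averaging function, and the threshold values are illustrative):

```python
import cv2
from moviepy.editor import VideoFileClip

class LaneDetector:
    """Sketch of the pipeline described above; helpers come from the earlier
    sketches, and average_lanes is a hypothetical stand-in for the averaging step."""
    def process(self, image):
        white  = select_white_hls(image)                   # 1. keep near-white pixels
        gray   = cv2.cvtColor(white, cv2.COLOR_RGB2GRAY)   # 2. grayscale
        smooth = cv2.GaussianBlur(gray, (15, 15), 0)       # 3. smoothing, 15x15 kernel
        edges  = cv2.Canny(smooth, 50, 150)                # 4. edges, 3:1 threshold ratio
        region = select_region(edges)                      # 5. cut the region of interest
        lines  = hough_lines(region)                       # 6. Hough line detection
        lanes  = average_lanes(lines)                      # 7. average and extrapolate
        return draw_lane_lines(image, lanes)               # 8. overlay the result

detector = LaneDetector()
clip = VideoFileClip("SingaporeDrive.mp4")
out = clip.fl_image(detector.process)                      # run the pipeline on every frame
out.write_videofile("detected_lanes.mp4", audio=False)
```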
So, going back to numbers: on the roads, 92% of the space is actually not used, and this is a problem that can be solved with the automation of cars. They could push usage of the space up to 90%, and we could probably reduce the amount of traffic jams in cities, right? And going back to numbers again: 1.3 million people die in road crashes each year, an average of more than 3,000 deaths per day, and it's the leading cause of death among young people. As I said, this is a very big failure of technology that we have to solve. It is a must for everyone in this field to help solve this problem, right?

Because of this, we're coming up with a self-driving car degree at Upcode Academy. We're working on a degree that will cover all the areas needed for the self-driving car industry. We're talking to experts who actually work on self-driving cars now; we're getting feedback from them and building partnerships with them, so this will be a really practical course. We're going to try to reduce the maths involved as much as we can. It's impossible to remove all the maths, but as I said before, you don't actually need to know how the Hough transform is implemented; you just need to know that the function is there, and what these functions are good and bad at, so you can use them in the real world. We're trying to partner with these companies so you can get real practice in the field and real experience out of it.

For now, we have a computer vision course going on. You can check the website, where the syllabus information is; you can see what we're going to teach and what will be covered in the course. It will be a basic introduction to computer vision for preprocessing, something similar to what we saw here: basically preprocessing of images, with fairly basic Python. For those of you who don't know Python, you can take the Python course, which is a good way to get into the programming world; it's an easy language to learn. So as a first step you can take the Python course and then move into this one, which is the first step into the self-driving car industry.

So that's pretty much it from my side. Thank you very much. If anyone has any questions, I'm open to them. You can drop me an email, you can check the course over there, check the GitHub. It will all be up for a long time, so no worries there. And thank you very much. Any questions?

[Audience question, inaudible.]

Okay. That's a tricky question. I guess once you go into the higher levels, you have to build everything from the ground up, right? Computer vision and C++ are the first steps into the field. I would say that if you have to do systems, system integration for example, you need to understand the whole thing, right? Or the control part. Those are usually the higher levels, so you have to understand what's underneath; I would say those are the most difficult parts. But basically, what we're trying to do is get the first courses out as soon as we can, so we can build up, from the ground, all the knowledge that is needed to get self-driving car developers out there.

[Audience question about the course format.]

Yeah, six weeks, one session per week, two and a half hours each. In the course, we're basically teaching a lot of OpenCV, which is mainly focused on image processing. We're also teaching a lot of the Point Cloud Library, which is a library focused on point clouds, which is basically the information you can get from the lasers that the cars have. I don't know if you've seen the nuTonomy cars around here: they have a laser on top that spins around, taking a point cloud of the environment. So: how to deal with that information, how to process it. And the last session is on machine learning, because OpenCV has a machine learning module, so we're going to give you some basic information about machine learning. We're also working on a convolutional network course that brings computer vision and machine learning together in a whole course. But this one is the first step, the preprocessing of images that you need to know, because when you get data that you have to feed into a convolutional network, you often have to do some preprocessing first, so having this knowledge is good.

[Audience question about how this scales to a real car.]

Well, this is a very simple example; we only have one hour here, so there's not much time to do the whole thing. This lane detection is probably not good enough for a real car. If there's a turn, it probably won't work properly; if the light changes, it won't work properly. That's where you have to bring neural networks into it; everything gets much more complicated. So if you look at the picture...
Let me go back. If you look at the picture of the car, there's this sensor fusion part, which is actually very important, because you're not going to use only the information from the camera; you're going to use information from other sensors. For example, you can have a light sensor, and if the light level is different, you're going to need different values for your thresholds. The thresholds we set up manually here won't work in every setup, so either you do a lot of tuning on them, or you have neural networks learn good values depending on the data you get from the different sensors. So there's a lot of work involved, and even more than work, there's a lot of testing involved. That's why you see these guys driving a car around all the time: everything you do, you have to test, and you need a lot of data so you can train your networks and use that information to actually steer the car. Ultimately, you'll have a control module that takes in the information and steers the car one way or another. But this is the basic first step you can do to get some information about where to go. You can also use the GPS information: if the GPS tells you where you are, you can have preloaded maps, and that's more information you can use. You have to take all this information together; it's not only the camera image, right?

[Question: which of the modules here handles collision avoidance, like hitting another car or a pedestrian?]

That's part of the control part. Taking the information we have, all the modules come together and give their information, and the control module at some point takes the decision about where the car should go, or whether it should stop. The ultimate decision is taken there, right? Anything else?

[Question: with a self-driving car focus, do you think there is really such a huge demand for self-driving car engineers?]

I don't know about the self-driving car industry itself, but computer vision is definitely going to be a huge thing. The visual information out there is already huge, and there are no means to retrieve all the information in it, so that's a huge field right now. For self-driving cars, I think it's an industry that is not quite there yet; it has reached a point where a lot of work has to be put in before it comes out into the market, but I think it will start growing once it's out there. Here in Singapore, it's actually not bad: there are quite a few groups doing self-driving car work. So yeah, we expect those fields to grow more and more.

So, the only class we have right now is the computer vision one; that's the first one. We're working on C++, advanced C++, and CNNs, convolutional neural networks. We're trying to make these courses independent, so you don't actually have to take the whole degree. In the computer vision class, we will teach only computer vision; it's not going to be specific to self-driving cars. It will be useful, and it will be part of this degree, but you can take it and then work on something else. You can go work at Google and do some image recognition; it's not 100% tied to self-driving cars. Anyone else? Okay. Any questions, you can reach me by email, and thank you very much.