In this video, we're going to learn how to build our own self-driving AI on the Raspberry Pi in under 100 lines of code. But before we begin, let's take a look at how the AI will actually perform.

Now, there are quite a few approaches to building a self-driving AI. The first is a classical robotics approach. In this approach, we can use techniques such as lane filtering, color and slope gradients, masking, perspective shifts, and other techniques to work out the curve of the lane with basic computer vision and determine which direction the robot should go. Although this method is pretty robust, it doesn't give the AI any opportunity to learn and improve over time.

The second approach is called behavior cloning. As the name suggests, behavior cloning is a technique where a human performs an action and the AI learns to perform it. In this case, the human drives the robot around, and the AI not only learns how to drive the robot but also abstracts features and tries to understand why the human was driving the robot the way they were. This ability to abstract is what makes this AI so powerful, and it's what will allow the AI to drive in situations it has never seen before. Based on this, I chose behavior cloning for this AI.

Now, to actually train the AI, I used something called the Blynk app, which allowed me to remotely steer the robot as it learned. The Blynk app also let me toggle between the learning and the performing modes. The Blynk setup is very straightforward: copy the correct authentication token from the phone app, use the correct virtual pins in your event handlers, and make sure to call the blynk.run function in your forever loop so that the event handlers can update properly.

Now let's dive into the actual heart of the AI. This AI is powered by a convolutional neural network, which is what allows it to see. The network has three pairs of Conv2D and max-pooling layers followed by three fully connected layers. The last fully connected layer does not have an activation function, since we're trying to predict a continuous steering angle.

When the AI is in learning mode, it takes an input image from the webcam and processes it through this convolutional neural network into a steering angle. This predicted steering angle is then compared to the human's input, the human steering angle, and based on the difference, the AI learns using the Adam optimizer. We can see these steps in the code: in learning mode, the AI gets the input image, which is also known as the state, and gets the human's action using the observe_action function. If the action is valid, it learns from the action and then saves the weights. Taking a deeper look at the functions: the get_state function collects an input image from the webcam, and the learn function learns from the human's actions using the .fit method.

When the AI performs autonomously, this process is mostly the same. Again, the AI takes a stream of webcam images and makes a prediction on where to go, but it simply acts on that prediction rather than comparing it against the human's input. Jumping to the code for the act function, we can see how the AI makes its prediction using the .predict method. Sketches of the Blynk setup, the network, and these functions follow below.
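Here's a minimal sketch of that Blynk setup, assuming the legacy blynklib Python library; the virtual pin assignments, the shared state dictionary, and the handler names are my own illustrative assumptions rather than the exact code from the video.

```python
# Minimal Blynk setup sketch (legacy blynklib library). The pin numbers
# (V0, V1) and the shared `state` dict are illustrative assumptions.
import blynklib

BLYNK_AUTH = 'YourAuthTokenFromThePhoneApp'  # copy this from the Blynk phone app
blynk = blynklib.Blynk(BLYNK_AUTH)

state = {'learning': True, 'steering': 0.0}

# V0: a switch widget that toggles between learning and performing modes
@blynk.handle_event('write V0')
def toggle_mode(pin, values):
    state['learning'] = values[0] == '1'

# V1: a slider or joystick widget that sends the human steering angle
@blynk.handle_event('write V1')
def set_steering(pin, values):
    state['steering'] = float(values[0])

while True:
    blynk.run()  # must be called in the forever loop so handlers update
```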
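Here's a sketch of a network matching that description, written with Keras: three Conv2D and max-pooling pairs, then three fully connected layers, with no activation on the last one since it outputs a continuous steering angle. The input resolution, filter counts, and layer widths aren't specified in the video, so the values below are illustrative assumptions.

```python
# CNN sketch: three Conv2D + max-pool pairs, then three fully connected
# layers. The input shape and layer sizes are illustrative assumptions.
from tensorflow.keras import layers, models, optimizers

def build_model(input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Conv2D(16, (3, 3), activation='relu', input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(16, activation='relu'),
        layers.Dense(1),  # no activation: a raw, continuous steering angle
    ])
    # mean squared error between the predicted and human steering angles,
    # minimized with the Adam optimizer
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4), loss='mse')
    return model
```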
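And here's a sketch of how the learning and performing modes could fit together, following the function names mentioned in the video (get_state, observe_action, learn, act); the image preprocessing, the placeholder observe_action and steer helpers, and the weights filename are assumptions for illustration, not the video's exact code.

```python
# Sketch of the learning/performing loop. `observe_action` and `steer` are
# placeholders standing in for the Blynk steering input and the motor driver.
import cv2
import numpy as np

camera = cv2.VideoCapture(0)  # the USB webcam

def get_state():
    """Grab one frame from the webcam and scale it for the network."""
    ok, frame = camera.read()
    if not ok:
        return None
    frame = cv2.resize(frame, (64, 64))  # match the network's assumed input size
    return frame.astype(np.float32)[np.newaxis] / 255.0  # shape (1, 64, 64, 3)

def learn(model, state, action):
    """Fit the network on one (image, human steering angle) pair."""
    model.fit(state, np.array([action]), epochs=1, verbose=0)

def act(model, state):
    """Predict a steering angle from the current webcam image."""
    return float(model.predict(state, verbose=0)[0, 0])

def observe_action():
    # placeholder: on the real robot this would return the latest human
    # steering angle set by the Blynk handler in the sketch above
    return 0.0

def steer(angle):
    # placeholder: on the real robot this would drive the two DC motors
    print(f'steering angle: {angle:.2f}')

def step(model, learning_mode):
    state = get_state()
    if state is None:
        return
    if learning_mode:
        action = observe_action()            # the human's steering angle
        if action is not None:               # only learn from valid actions
            learn(model, state, action)
            model.save_weights('driver.weights.h5')  # then save the weights
    else:
        steer(act(model, state))             # act on the prediction directly
```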
Now let's dive into the actual hardware of the robot. The robot is powered by a Raspberry Pi and two brushed DC motors with encoders. The Raspberry Pi and the motors are powered by a LiPo battery, whose voltage is stepped down for the Raspberry Pi, and the Pi gets its camera feed from a simple USB webcam.

Let's end this video by talking about some of the changes you can make to improve this AI. The first and biggest change we can make is giving this AI memory. By giving the AI memory, using techniques such as experience replay or memory buffers, it can be more sample-efficient and can train on batches, which allows for faster training and learning.

Giving the AI memory also enables us to do data generation. Data generation involves techniques such as flipping the image so that you have both steering angles as inputs, cropping the image a little, or changing the lighting by making the image brighter or darker, and then learning from these new, artificially generated images. Data generation will allow this AI to learn faster, be more sample-efficient, and generalize its learning to new environments.

The third improvement we can make is borrowing some techniques from classical robotics, such as lane detection, and doing behavior cloning on top of the detected lanes. I've already implemented some of these improvements, such as memory, in a new, improved version of the code. The original 100-line code and this improved version are both available on GitHub for download. Sketches of the memory and data-generation improvements follow below. Thanks for watching.
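Here's a minimal sketch of the experience-replay idea: store (image, steering angle) pairs in a fixed-size buffer and train on random mini-batches instead of single frames. The buffer capacity, batch size, and class name are illustrative assumptions, not taken from the improved code on GitHub.

```python
# Experience-replay sketch: a fixed-size memory of (state, action) pairs
# sampled in mini-batches. Capacity and batch size are assumptions.
import random
from collections import deque
import numpy as np

class ReplayMemory:
    def __init__(self, capacity=5000):
        self.buffer = deque(maxlen=capacity)  # oldest samples fall off the end

    def remember(self, state, action):
        """Store one (image, human steering angle) pair."""
        self.buffer.append((state, action))

    def sample(self, batch_size=32):
        """Return a random mini-batch as (states, actions) arrays."""
        batch = random.sample(self.buffer, min(batch_size, len(self.buffer)))
        states = np.concatenate([s for s, _ in batch])  # (B, 64, 64, 3)
        actions = np.array([a for _, a in batch])       # (B,)
        return states, actions

# usage: call memory.remember(state, action) every frame, then periodically
# states, actions = memory.sample(); model.fit(states, actions, verbose=0)
```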
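And here's a sketch of the data-generation idea: horizontally flipping the image while negating the steering angle (so both turn directions are covered) and randomly brightening or darkening the frame. The flip probability and brightness range below are illustrative assumptions.

```python
# Data-generation sketch for images shaped (1, H, W, C) in the [0, 1] range.
import random
import numpy as np

def augment(image, angle):
    """Return an artificially generated (image, steering angle) pair."""
    # horizontal flip: a left turn becomes the equivalent right turn,
    # so the steering angle is negated
    if random.random() < 0.5:
        image = image[:, :, ::-1, :]  # flip along the width axis
        angle = -angle
    # random lighting change: make the image brighter or darker, then
    # clip back to the valid [0, 1] range
    image = np.clip(image * random.uniform(0.6, 1.4), 0.0, 1.0)
    return image, angle
```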