Greetings, fellow learners. This is going to be another video on the fundamentals of deep learning and neural networks. Before we get into loss functions, I have a thought-provoking question for you: is there a skill you're currently trying to improve? And if so, do you track your progress quantitatively, with numbers, or more qualitatively, based on intuition and self-reflection? I'd love to hear your thoughts so I can get to know you a little better, so please leave your response in the comments below and let's have a discussion. This video is divided into three passes: in the first pass we'll go over a high-level overview of loss functions, and we'll get more and more detailed in the passes that follow. This is going to be super fun, so let's get to it.

A loss function quantifies how well a model performs during the training phase. A loss function is, as the name suggests, a function, which means it has inputs and an output. The inputs are the prediction from the model and the ground truth, and the output is a number known as the loss, which quantifies how good or bad that prediction was. Depending on the type of problem, this loss function could be many things. For example, if we're dealing with a regression problem, it could be a mean squared error loss; if we're dealing with a classification problem, it could be a cross-entropy loss; and we could even customize the loss to be something else entirely.

Quiz time! Have you been paying attention? Let's quiz you to find out. For a neural network that predicts a house's price given information about the house, an appropriate loss function would be: A. the mean squared error loss, B. the cross-entropy loss, C. either of the above, or D. neither of the above. Comment your answer below and let's have a discussion. And at this point, if you think I deserve it, please do give this video a like, because it helps me out a lot. That's going to do it for quiz one and pass one of the explanation, but keep paying attention, because I will be back to quiz you.

Here is a simple feed-forward neural network. We want this network to learn to take in an image and classify it as either dog or not dog, and this learning happens during the training phase, which uses thousands of image-label pairs. Once trained, we move on to the inference phase, where we pass an unseen image to the network and the network is ideally able to correctly identify whether it is a dog or not a dog.

Now let's illustrate the training phase in more detail and highlight where the loss function fits in. First, we pass an image to the network. The network then makes a prediction of whether this image is a dog or not a dog, expressed as a probability between zero and one. We compare this prediction to the ground truth, which here is one since the image is a dog, to generate a loss; and since we're dealing with a classification problem, a cross-entropy loss function can be used here. This loss is then used to compute a gradient, that is, the change in loss with respect to every network parameter. The calculation proceeds from the later layers back toward the initial layers, so this phase is called backpropagation of errors, or simply backpropagation. For more details on the backpropagation process, I have a video right here. We then use the gradients computed in this step in an optimization algorithm, which computes the new value of each parameter in the network and updates it.
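To make this single training iteration concrete, here is a minimal PyTorch-style sketch. The tiny model, the random image tensor, and the label are hypothetical placeholders rather than the network from the video; they just show the flow from prediction, to loss, to backpropagation, to the optimizer update.

```python
# Minimal sketch of one training iteration (PyTorch-style, illustrative only).
import torch
import torch.nn as nn

# Hypothetical stand-ins: a tiny feed-forward "dog / not dog" classifier,
# one flattened image, and its ground-truth label (1.0 = dog).
model = nn.Sequential(nn.Linear(64 * 64, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCELoss()                     # binary cross-entropy for a two-class problem

image = torch.rand(1, 64 * 64)             # placeholder input image
label = torch.tensor([[1.0]])              # ground truth: it is a dog

prediction = torch.sigmoid(model(image))   # prediction as a probability between 0 and 1
loss = loss_fn(prediction, label)          # compare prediction to ground truth
loss.backward()                            # backpropagation: gradient of loss w.r.t. every parameter
optimizer.step()                           # optimizer updates each parameter
optimizer.zero_grad()                      # clear gradients before the next iteration
```

In a real training loop, a block like this would run once per image-label pair (or per batch), thousands of times over.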
For more information on this optimization phase, I have a video over here too. Now, this is just one iteration of the training phase. We repeat it thousands of times, and once the neural network has learned, the training phase is complete. During the inference phase we then take an unseen image, and the network ideally gauges correctly whether the input is a dog or not a dog.

Quiz time! It's that time of the video again. Have you been paying attention? Let's quiz you to find out. Which of the following statements is true? A. The loss function is used to determine how well the model performs during training. B. The optimizer is used to update the weight parameters. C. Backpropagation is used to compute the gradients. Or D. During the inference phase we typically pass in images that the model was trained on. Note that for this question multiple options can be correct, so comment your answer below and let's have a discussion. That's going to do it for quiz time for now and pass two of the explanation, but keep paying attention, because I will be back.

In this pass we are going to explicitly answer the question: why use a loss function? From the previous pass, we want to train a neural network to take an image and determine whether it is a dog or not. To do so, we need to update the weight parameters. To do this, optimizers make use of update equations that look roughly like θ_new = θ_old − learning_rate × ∂Loss/∂θ. So in this equation we need to compute the gradient of the loss with respect to the parameters of the network. But to compute that gradient, we first need the loss itself. And because the loss is computed by a loss function, we need a loss function during the training phase of a neural network. I hope that's clear; there is also a tiny worked sketch of this update rule at the very end, if you want to see it in action.

Quiz time! All right, this is going to be a fun one. Why do we need a loss function for neural network training? A. To guide the model towards making accurate predictions. B. To determine the size of the neural network. C. To visualize the data set. Or D. To control the learning rate. Comment your answer below and let's have a discussion. And at this point, again, if you think I deserve it, please do consider giving this video a like; it helps me a lot. Thank you so much. That's going to do it for quiz time and pass three of the explanation, but before we go, let's generate a summary.

A loss function quantifies how well a model performs during the training phase. The loss function takes the ground truth and the model's prediction as input and outputs a loss. This loss ultimately drives the training of a neural network to solve a specific task. And that's all we have for today, but if you want to continue your journey toward understanding loss functions better, I highly recommend you check out this supplementary video right over here. It gets into a little math and more fun details. But that's all I have for you today. Thank you all so much for watching, and I will see you in the next one. Bye bye.
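To make the pass-three argument concrete, here is a minimal NumPy sketch of that gradient-descent update acting on a single weight. The data, learning rate, and step count are arbitrary illustrative choices; the point is simply that the update needs a gradient, and the gradient comes from a loss.

```python
# Tiny illustration (made-up numbers) of why training needs a loss:
# the optimizer's update rule consumes the gradient of that loss.
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # inputs
y = np.array([2.0, 4.0, 6.0])   # targets (true relationship: y = 2x)
w = 0.0                         # single weight parameter, initialized at 0
learning_rate = 0.1

for step in range(50):
    prediction = w * x
    loss = np.mean((prediction - y) ** 2)      # mean squared error loss
    grad = np.mean(2 * (prediction - y) * x)   # dLoss/dw, derived from the loss
    w = w - learning_rate * grad               # gradient-descent update

print(round(w, 3))  # w converges toward 2.0
```

Without the loss in the middle, there would be nothing to differentiate, and the weight would have no signal telling it which way to move.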