So what is this week about? We will be talking about multi-layer perceptrons, the simplest non-linear neural networks, including deep ones. What will we do today? We will first talk a little bit about real neurons, and we will actually simulate one. Then we will talk about the need for multi-layer perceptrons if we want to solve interesting problems. We will then talk about approximating functions with artificial neural networks and learn why that is always possible. Then we will start talking about classification, and we will review cross-validation a little while we are there. We'll talk about deep versus shallow networks. Then you'll learn about neural tangent kernels, which is, from my perspective, one of the coolest understandable parts of deep learning. And ultimately, we will classify animal faces using multi-layer perceptrons, giving you, in a way, the first truly interesting deep learning problem that you'll be solving yourself.

So let's dive right in. Here, on the right-hand side, we have a neuron, and it's hard to know how we should be thinking about these neurons. Here is the simplest abstraction of a neuron that we could have. We can say that the membrane of the neuron, and it has this big membrane, acts like a capacitor; to a first approximation, it does. And then we can say that any input, any synapse to the neuron (a synapse is the place where the signal from a previous neuron arrives at this neuron), acts like a little battery or, more precisely, like a current source.

So how should we model this? We can say that the change in voltage on the capacitor, and again, this is of course an approximation to what would be happening in reality, is going to be the weight of the synapse (its strength, its size, if you want) times the input: ΔV(t) = w·x(t). This is the change in voltage at that point in time; we assume that we have a discretized set of time points t. We could alternatively write this as a differential equation, of course.

Now, what happens? In this simple approximation, we start with a membrane voltage of zero. In reality, the resting voltage might be more like minus 70 millivolts and the maximal voltage more like plus 30 millivolts. But we start at zero, and at every point in time we add this voltage change, which depends on what the input is. If the voltage exceeds a threshold, in this case one, we reset the voltage to zero and say that at that time point there is a spike: the output of the neuron is one at that time point and zero otherwise. If the threshold is not exceeded, nothing else happens and we keep integrating. That is the simplest model we can have for a meaningfully spiking neuron. It's called the integrate-and-fire neuron, and it's very popular in the field.

Now, we will also add some leakage current. What that means is that if the voltage is very high, it will decay towards zero, and if the voltage is very low (negative), it will also go towards zero, from the opposite direction. More precisely, there is a leak current proportional to the negative of the voltage. In biology this is universally the case, and it varies in strength from neuron to neuron.

So, if you simulate such a neuron and give it some stimulation, how will it behave? Let's simulate it and play a little bit with the parameters.
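Below is a minimal sketch of this simulation in Python with NumPy. The threshold of one and the reset to zero come from the description above; the specific weight, leak strength, input signal, and number of time steps are illustrative assumptions, not values from the lecture. Setting `leak=0.0` recovers the plain integrate-and-fire neuron.

```python
import numpy as np

def simulate_lif(x, w=0.05, leak=0.02, threshold=1.0, v_reset=0.0):
    """Simulate a (leaky) integrate-and-fire neuron on a discretized input x.

    w, leak, and the input below are illustrative choices; only the
    threshold of 1 and the reset to 0 are fixed by the lecture.
    """
    v = v_reset                   # start at the reset voltage (zero here)
    voltages = np.zeros(len(x))   # membrane voltage trace
    spikes = np.zeros(len(x))     # output: 1 at spike times, 0 otherwise
    for t in range(len(x)):
        # voltage change: synaptic weight times input, minus a leak
        # current proportional to the (negative of the) voltage
        v = v + w * x[t] - leak * v
        if v >= threshold:
            spikes[t] = 1.0       # emit a spike at this time point ...
            v = v_reset           # ... and reset the voltage to zero
        voltages[t] = v
    return voltages, spikes

# Constant stimulation: the neuron integrates up to threshold,
# spikes, resets, and repeats at regular intervals.
x = np.ones(200)
v, s = simulate_lif(x)
print(f"{int(s.sum())} spikes in {len(x)} time steps")
```

One thing you can already see from the update rule: without a leak, the voltage under constant input grows without bound and the neuron always reaches threshold eventually; with a leak, the voltage saturates towards w·x/leak, so the neuron only spikes at all if that saturation level exceeds the threshold. That is exactly the kind of parameter dependence we will play with in the simulation.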