So, today we will use the magic of autograd to build neural networks. Well, very simple neural networks that solve simple problems, and we will work only with linear functions. A nonlinear loss is fine, but in this first part, the function we are learning is a linear function. In week three we will actually learn that a lot of nonlinear learning is really just linear learning; the keyword there is the neural tangent kernel. But let's see what we can learn about linear problems. (A small sketch of the kind of model we will build today appears at the end of this section.)

Now you might say: linear regression? Isn't that one of the first things we ever study? Isn't linear regression just linear algebra? Can't we just skip over this? Why is this even interesting? The first reason is that the dynamics of linear systems turn out to be very interesting. What dynamics? Certain dimensions will exponentially grow to infinity, some will exponentially decay to zero, and some will, in a sense, go around in circles; you can see all three behaviors in the first sketch below. Deep learning systems that are nonlinear share a lot of properties with their linear brethren, and understanding the dynamics of linear learning will really help you build intuition for what is going right or wrong when you then train nonlinear systems. So from my perspective, thinking about the linear case is really the one thing that grounds me in intuition. I'm very excited about today.

But first, let's look at some neurons. I'm a neuroscientist; I can't not talk about neurons. What do we have when we look at neurons? Neurons have inputs, called dendrites. This is where the information comes in, and it comes in as spike trains. They have integration, which happens in the body of the cell, where everything gets integrated. (Arguably, dendrites do a lot of integration themselves and are quite nonlinear, but that is a topic of current research.) And neurons have an output: the axon. Now, every neuron, if you look at it in first-order approximation, is doing linear integration. Here is what I mean. The membrane voltage u obeys

    τ_m du/dt = -(u - u_rest) + R·I

That is, the membrane time constant times the change in voltage equals minus the difference between the current voltage and the resting voltage, so the voltage tends to relax back to where it started, plus R times I. This last term, R·I, is where the information comes in. And look at what we have here: linear integration. So in a sense, everything looks linear. It's not entirely true, as we will see.

So let's look at neural responses to inputs. What we do here is take a cell that we have in a dish, take an electrode, and stick it into the cell. Then, for two seconds, we inject current, and what we get out of the cell is a response, what's called a spike train. We can see that at first it spikes very quickly and then a little less quickly; that is what's called adaptation. Then we can ask: what happens if we put more or less current into the cell? Here you see examples from two cell types. On the x-axis is the current that we put in; the further we are to the right, the more strongly we drive the cell. On the y-axis is the spike frequency, that is, how many spikes I get per second. You can see the rough range here: when we stimulate cells, their activity might go up to 50 hertz or so. Keep in mind, as a reference, that what we usually see, say in cortex, is around one spike every second. So on average, neurons aren't very active. The second sketch below simulates exactly this kind of current-injection experiment.
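Here is the first sketch: a minimal illustration of my own (not code from the lecture) of a discrete-time linear system x_{t+1} = A·x_t. The eigenvalues of A produce exactly the three behaviors mentioned above: magnitudes above one grow exponentially, magnitudes below one decay exponentially, and a complex pair on the unit circle goes around in circles.

```python
# Minimal sketch: three behaviors of a linear system x_{t+1} = A @ x_t,
# set by the eigenvalues of A (illustrative values, chosen by hand).
import numpy as np

theta = 0.1
A = np.zeros((4, 4))
A[0, 0] = 1.05                                  # eigenvalue 1.05: exponential growth
A[1, 1] = 0.95                                  # eigenvalue 0.95: exponential decay
A[2:, 2:] = [[np.cos(theta), -np.sin(theta)],   # 2x2 rotation block: complex
             [np.sin(theta),  np.cos(theta)]]   # eigenvalues of magnitude 1

x = np.ones(4)
for t in range(100):
    x = A @ x

print(x)                              # dim 0 blew up, dim 1 vanished,
                                      # dims 2-3 just went around a circle
print(np.abs(np.linalg.eigvals(A)))  # magnitudes (in some order): 1.05, 0.95, 1, 1
```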
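And here is the second sketch: the current-injection experiment in simulation. I am assuming a standard leaky integrate-and-fire model with Euler integration and textbook parameter values; these are illustrative assumptions, not the recorded cells from the figure.

```python
# Sketch of the membrane equation tau_m * du/dt = -(u - u_rest) + R*I,
# integrated with Euler steps, plus a hard spike-and-reset rule that turns
# it into a leaky integrate-and-fire neuron. We measure the f-I curve the
# way the experiment does: inject a constant current, count spikes per second.

tau_m, u_rest, R = 0.02, -0.070, 1e8     # 20 ms, -70 mV, 100 MOhm (illustrative)
u_thresh, u_reset = -0.050, -0.070       # spike at -50 mV, reset to rest
dt, T = 1e-4, 2.0                        # 0.1 ms steps, 2 s of stimulation

def firing_rate(I):
    """Spikes per second for a constant injected current I (in amperes)."""
    u, spikes = u_rest, 0
    for _ in range(int(T / dt)):
        u += (-(u - u_rest) + R * I) * (dt / tau_m)  # the linear integration
        if u >= u_thresh:                            # the nonlinear part:
            u = u_reset                              # spike and reset
            spikes += 1
    return spikes / T

for I in (0.1e-9, 0.3e-9, 0.5e-9, 0.7e-9):           # 0.1 to 0.7 nA
    print(f"I = {I * 1e9:.1f} nA -> {firing_rate(I):.1f} Hz")
```

Notice where the linearity lives: the integration itself is linear, but the spike-and-reset rule is not, and the simulated f-I curve is zero below threshold and bends near it before straightening out at higher currents. Keep that in mind for the discussion question below.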
Coming back to the two recorded cell types: what you see for both of them is that, in first-order approximation, there is a linear dependency between the current that I put in, which stands in for the number of spikes arriving at the cell, and the firing rate that I get out on the y-axis. So in that sense, things look reasonably linear. And now I want you to think a little bit about neuroscience. I want you to think about two issues. The first: how linear are neurons really? Because I didn't tell you the whole story about neurons. And the second: how do you think neuroscience can inform deep learning? Please discuss this with your partner.
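Finally, the sketch promised at the top: the kind of model we will build today. A minimal example, assuming PyTorch as the autograd library (the lecture just says autograd): a linear function fit by gradient descent on a squared-error loss.

```python
# Minimal sketch: fit y = w*x + b by gradient descent, letting autograd
# compute the gradients. Data and hyperparameters are made up for illustration.
import torch

torch.manual_seed(0)
x = torch.linspace(-1, 1, 100).unsqueeze(1)      # inputs, shape (100, 1)
y = 3.0 * x + 0.5 + 0.1 * torch.randn_like(x)    # noisy linear targets

w = torch.zeros(1, requires_grad=True)           # the parameters we learn
b = torch.zeros(1, requires_grad=True)
lr = 0.1

for step in range(200):
    loss = ((w * x + b - y) ** 2).mean()         # quadratic (nonlinear) loss
    loss.backward()                              # autograd fills w.grad, b.grad
    with torch.no_grad():
        w -= lr * w.grad                         # plain gradient descent
        b -= lr * b.grad
        w.grad.zero_()
        b.grad.zero_()

print(w.item(), b.item())                        # should end up near 3.0 and 0.5
```

Note the connection to the first sketch: gradient descent on a quadratic loss is itself a linear dynamical system, so each direction in parameter space converges exponentially at a rate set by the curvature.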