So what happens if we make a linear artificial neural network deep? We could say y in this case is A times B times C and so on up to Z times x, but that is really not changing anything: I can rewrite this as y = Vx, where V is the product of the matrices A through Z. So making linear artificial neural networks deep makes no difference in one sense, but in another sense it does, because once we make the network deep we will definitely be overparameterized: we will have many more parameters than we have data points. And if we have an overparameterized system, then the parameterization matters for the way it converges. We will try to understand this a little better today, and we will look at deep linear neural networks, which seems weird if you even think about it. The first sketch below shows how the layers collapse into a single matrix.

But first, let's think a little about abstractions. When we talk about linear neural networks, we can write equations that make them linear, but ultimately they are implemented with binary numbers, and binary numbers are very much not linear. If you look at how a microprocessor works, which I highly recommend because it is great fun, it is ultimately implemented with operations like OR gates and XOR gates, and those are very much not linear. So we have two levels: the level of mathematical abstraction, where we try to approximate a linear neural network, and the level of the machine, where we are not linear. These two are different, and we have designed an exercise that gets you to see for yourself to what extent the two of them are not the same, and where the abstraction breaks down.

If we look at a typical number representation, say a 32-bit float, it has a sign, a bit that simply says whether this is a positive or a negative number, then an exponent, and then the mantissa, the digits of the number itself; the second sketch below makes these fields visible. What that means is that if you zoom all the way in, the arithmetic does not actually look linear. This breakdown of abstractions is arguably very important, it leads to a lot of failures in deep learning, and it is actually very interesting.

So let us all see that with our own eyes. What you will do now is look at the behavior of linear neurons and zoom all the way in to see at which level non-linearities are involved; the third sketch below is one way to do that.
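As a minimal sketch of the collapse argument, assuming NumPy and arbitrary illustrative layer sizes, here three stacked linear layers A, B, C compute the same function as the single matrix V = ABC:

```python
import numpy as np

rng = np.random.default_rng(0)
# a "deep" linear network: y = A @ B @ C @ x  (sizes are arbitrary here)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
x = rng.standard_normal(4)

y_deep    = A @ (B @ (C @ x))   # forward pass through three linear layers
V         = A @ B @ C           # collapse all layers into one matrix
y_shallow = V @ x               # a single linear layer computes the same thing

print(np.allclose(y_deep, y_shallow))  # True (up to floating-point error)
```

Notice that we already have to say "allclose" rather than "equal": even this comparison only holds up to floating-point error, which is exactly the theme of the rest of this section.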
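To see the sign, exponent, and mantissa fields concretely, here is a small helper, a sketch using Python's standard struct module (the function name float32_bits is just for illustration), that prints the raw bit fields of a 32-bit float:

```python
import struct

def float32_bits(x: float) -> str:
    """Show the sign / exponent / mantissa fields of a 32-bit float."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32 bits as an int
    s = f"{bits:032b}"                                   # 1 + 8 + 23 bits
    return f"sign={s[0]} exponent={s[1:9]} mantissa={s[9:]}"

print(float32_bits(1.0))   # 1.0:  sign=0, exponent=01111111, mantissa all zeros
print(float32_bits(-2.5))  # -2.5: sign=1, exponent=10000000, mantissa=0100...0
```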
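And here is one way to do the zooming in, a sketch in NumPy float32 with arbitrary values w = 0.3 and x near 1.0: on paper f(x) = wx is a perfect line, but stepping x through consecutive representable values shows the output moving in unequal, quantized steps:

```python
import numpy as np

# f(x) = w * x is linear on paper; in float32 the graph is a staircase.
w   = np.float32(0.3)
eps = np.float32(np.finfo(np.float32).eps)   # spacing of float32 values near 1.0

# step x through 16 consecutive float32 values starting at 1.0
xs = np.float32(1.0) + np.arange(16, dtype=np.float32) * eps
ys = w * xs                                  # all arithmetic stays in float32

# a truly linear map would give the identical difference w * eps everywhere
print(np.diff(ys.astype(np.float64)))        # unequal steps: output quantization
```

If f were truly linear, every printed difference would be exactly w times the step size; instead the differences jump between one and two units of the float32 spacing around 0.3, because the outputs are snapped to the representable grid.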