So, what did we see? Real neurons are somewhat like the ReLU units that we use in deep learning. This is a transfer function: the output depends non-linearly on the input. Mathematically, the ReLU function is simply f(x) = max(0, x). What does that mean? We have a flat region below 0 and a linear region on the right. By the way, the gradient is 0 on the left; I just wanted to remind you of that. But now that we have ReLU units, let's look at the XOR problem, because XOR is the famous example of a really, really simple problem that you can't solve with a linear system. So, what's the XOR problem? It's a truth table. We can have a continuous version of it, but it basically means: if the two inputs are the same, so if x1 and x2 are both 0 or both 1, the output should be 0. If exactly one of them is 1, the output should be 1. It's an easy function, and it should be easy to learn. So, let's visualize it. What we have is that for (0, 0) and (1, 1) we need an output of 0, and for the other two points we need an output of 1. What we can see directly here is that there cannot be a linear solution to this. The only solution would be to have a hat in this space, and linearly that's just not possible. So, what does that mean? Can you draw a line that separates the positive from the negative examples? You can draw a separating curve, but it cannot be a straight line. And it's very clear why this is the case: the ones are, in a way, in between the zeros. So it's impossible to solve XOR with a linear learner. And in fact, that realization led to the death of the field of neural networks at one point in time. What you see here is how often the phrase "neural network" was used in English text. You can see how, in a way, usage grew until the mid to late 60s and then it tanked. And that is when people started realizing: oh, the perceptron can't solve XOR.
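The ReLU transfer function and its gradient from the discussion above can be sketched in a few lines of NumPy (a minimal sketch; the lecture only gives the formula f(x) = max(0, x), so the function names here are my own):

```python
import numpy as np

def relu(x):
    # ReLU: f(x) = max(0, x) -- flat at 0 for negative inputs,
    # linear (identity) for positive inputs
    return np.maximum(0.0, x)

def relu_grad(x):
    # the gradient is 0 on the flat region left of the kink, 1 on the right
    return (np.asarray(x) > 0).astype(float)

xs = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(xs))       # negative inputs are clipped to 0
print(relu_grad(xs))  # zero gradient wherever the input was negative
```

That zero gradient on the left is exactly the point mentioned above: a unit whose input is negative passes no gradient back during training.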
That was a little bit of a problem, because people had been claiming that this kind of linear neural network (though they didn't call it that) would talk like humans and be able to do absolutely everything. People put out wild press releases. People were worried about the impending doom of human civilization because neural networks would do it all so much better. And then they realized that's not happening, certainly not at that speed. And with that, the field tanked. I just want to make sure you all notice the similarity: today we are very actively debating the end of human intelligence, and people put out big press releases about how worried they are about it. So in that specific case, the perceptron couldn't solve XOR. Today's deep learning systems have other things they can't solve, and we'll hopefully get the chance to talk about that later in the course. So now what I want you to see is that if you take a multi-layer perceptron, you can actually solve the XOR problem. So what will we do? We give you a widget where you can set the weights, a nifty little piece of code that allows you to play with things. We give you that widget, and you can click on the lines to set the weights. I want you to try whether you can get it to solve the XOR problem by hand.
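Before you try the widget, here is what a hand-set solution looks like in code: one well-known weight assignment for a 2-2-1 ReLU network that computes XOR exactly (a sketch, not the widget's own code; these specific weight values are an assumption on my part, and they are one of many assignments that work):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Hand-set weights for a 2-hidden-unit ReLU MLP.
W = np.array([[1.0, 1.0],
              [1.0, 1.0]])   # input -> hidden weights
b = np.array([0.0, -1.0])    # hidden biases
v = np.array([1.0, -2.0])    # hidden -> output weights

def mlp_xor(x1, x2):
    # hidden unit 1 counts the active inputs: relu(x1 + x2)
    # hidden unit 2 fires only when both are active: relu(x1 + x2 - 1)
    h = relu(np.array([x1, x2]) @ W + b)
    # output subtracts twice the "both active" case, bending the
    # linear response into the "hat" shape a single line cannot produce
    return h @ v

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", mlp_xor(x1, x2))  # 0, 1, 1, 0
```

The second hidden unit is the whole trick: it is the non-linearity of ReLU that lets the output layer carve out the (1, 1) corner, which no single linear unit can do.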