Great. So, having seen the high-level view of deep learning in the previous week, what are the overall objectives? First, we want you to really understand which problems deep learning approaches can solve. We want you to be able to take real problems, distinguish the parts that are best solved with deep learning from the other components, which are often symbolic, and be able to build the system. We want you to be able to convert intuitions you have about the world into code. And we want you to be agile in development and debugging: to have an idea of why something doesn't work, if it doesn't work, and mechanisms to make it work from there.

So let's briefly preview the next couple of weeks. What you did last week was basically see the overall forest. Now we will dig all the way down. Today and this week we will learn about linear deep learning, which, you may be surprised to hear, is a surprisingly deep topic. Then, in week three, we will talk about transfer functions and multi-layer perceptrons, about making neural networks deeper, and about what the effects of that are. We'll talk about optimization in week four and regularization in week five. All of these components are very microscopic; they're the things that neural networks are made out of, and you need real mastery of all of them to be able to build really interesting systems. Then, over the rest of the course, we'll go into the application domains.

Now, before we get there, I want to give you another chance to talk about the future. Meet in pairs and discuss what you aim to be learning, and tell us what you hope to get out of this course. We still have time to fine-tune the course so that it's as useful as possible for you.