First off, thanks for being here with us at the Penn Deep Learning Academy. What's our objective? We want to teach you what you want to learn. Every aspect of the content should be truly useful, while also acknowledging that we will have a wide heterogeneity of students: some of you already know quite a bit about deep learning, while others are almost completely new to it.

This is the very first version of the course, so it's really important for us to get your feedback. So please, every time you finish a Colab, tell us how it could have been better. Tell me how my videos could have been better. I'm sure I will say a lot of stupid things during this introduction. And if anything isn't right, isn't as good as you'd like it to be, feel free to contact me about it. We really want to make this course maximally useful.

So why is this course special? In lots of ways, it contains the same content as last year's CIS 522. What is new is a heavy focus on a learning-by-doing, flipped-classroom, code-first philosophy. What does that mean? I believe that if I just tell you something and keep going for an hour and a half, you will lose most of what I tell you, even if you re-watch it afterwards. But if I tell you something for five minutes, and you then spend ten minutes doing it, you will probably never forget it, and it will really become part of your tool chest. So the idea is: I explain something to you, you code it up and solve a problem with it. That, I believe, is how learning works well.

The other thing is a heavy focus on teamwork. I believe everything is better if we work productively in groups. We believe in learning in teams. We want you to really get to know your TA and your pod. "Pod" is just our name for the group in which you learn.
Pods will be about 10 to 15 people, somewhere in that range, and as a group you will be much stronger than you would be on your own. What does that mean? You need to learn to be comfortable in your pod about the things that you do not know. Everyone will have a lot of topics that they really need to learn, so make sure you open up and share which things you don't yet understand. And on the other side, be helpful: help everyone in your team learn as productively as they can. Your pod can be a great source of strength for you, and I believe that holding one another accountable is one of the factors that makes learning in teams much more productive.

I got to know some pods in the pre-course component, and it's really interesting to see how their makeup varies. Some people are great at math, others are great at coding, and yet others are great at thinking about the ethical and societal dimensions of what they're doing. In a good pod, everyone has a great time, helps one another learn, and is really there for one another. This course is hard; it's going to be much more enjoyable if you do it as a group.

Now, I probably don't need to convince you that deep learning matters. After all, this is a deep learning course, and you signed up to be students in this course. But I'll do it a little bit nonetheless. Why is deep learning useful? Well, it's state of the art in lots of domains of machine learning; a lot of people would say most domains. It's revolutionizing computer vision: we can now have cameras that detect, in real time, all kinds of real-world objects, and people, and the way we move through space and move our limbs. It has a huge influence on many aspects of computer vision. It has revolutionized natural language processing.
If you interact with Google or other search engines, recommenders, and so forth, what they do now is really meaningful, and that's enabled by deep learning and really large data sets. It's state of the art in time-series processing, for example when it comes to processing speech. In fact, you will in all likelihood have automatic captions at the bottom of the slides here; they are being made by a deep learning system. And of course, it's state of the art in reinforcement learning. In fact, the lectures this week will be about a specific reinforcement learning system, namely AlphaZero.

At the same time, deep learning is a sub-discipline of machine learning, so lots of concepts that matter for deep learning are taught in machine learning courses. It's very important, therefore, to take a machine learning course first. You will get much more out of this deep learning course if you really understand the concepts from machine learning. And just to be clear, we didn't do careful prerequisites testing for you. If you lack machine learning background, it will be hard for you to understand certain aspects of what we teach here. That is okay if you think you can catch up; just please don't complain that these things are unknown to you. They are part of the prerequisites.

So what do we get from machine learning? The notion of regularization: how regularization works, and various ways of doing it. We get from machine learning the basic notion of establishing success: how do we know that we are doing well? We do things like cross-validation. We get from machine learning ideas about how machine learning systems interface with reality, like the notion of prediction: how can we build predictors into recommender systems, for example? There are many concepts of statistics and many general concepts that we inherit from machine learning, and that's why it's a prerequisite for this course.
So the rest of this course will assume that you have taken a meaningful introduction-to-machine-learning course.

What are my expectations here? I expect all of you to be able to use deep learning productively after this course. You will need more courses to become a deep learning researcher. No, we are not preparing you to get out of this course and immediately start working for Google, developing new recommender systems with deep learning or something; that is not what we are trying to do. This is not a course to make you into a deep learning researcher. It is a course that should bring you to the level where you can have fun with deep learning, where you can implement it, where you can use it, and where it becomes a tool for you.

I also hope that this course will help you see through the hype. A lot of people warn us: watch out, deep learning is going to take over society and the world, and we will all soon be in trouble. Probably not, and I want you to see why. I also expect you to help us make the course better. I've said it before, and I will say it a lot more times: please help us make the course better.

Now, I want to say one more thing. The pandemic is of course difficult for all of us. Penn has resources; they will all be listed in the Colabs. Please use them. Those resources that Penn has make things better, and it's really difficult, certainly for me and I presume for a lot of you as well, to be in this situation. The pandemic is also one of the reasons why I'm so excited about this team-based format: you will be in a group of 10 to 15 people working tightly together, and so you will at least meet a certain number of people very regularly, for five hours per week.

Now, let's briefly talk about the curriculum. Week one, this week, is the crazy week. For a lot of you, this will be a crazy whirlwind tour.
What we will do is take one of the peak achievements of deep learning, AlphaZero, this program that can play chess, Go, and whatever else you give it better than any human, and we will break it down into its pieces. The idea is that we want you to see the forest before seeing the trees; we want you to see what the overall package looks like once it's all done.

Afterwards, we will spend weeks two to five on the nuts and bolts: linear transformations, non-linear transfer functions, cost functions, optimizers, regularization, all the boring components, the screws and metal pieces that deep learning systems are made out of. Then we will spend weeks six to eight on computer vision: convnets, transfer learning, generators, things like that. Then we will spend the next two weeks on natural language processing: text, recurrent neural networks, all the cool things we can do with natural language processing. Then we'll spend two weeks on reinforcement learning: deep learning systems that in a way interact with the world, or at least a very simplified world. In week 13, we'll recapitulate what we learned. And for the rest of the time, you'll do projects, in which you will build a deep learning system that's really cool.

Now, you could ask: as I told you, every week focuses on one topic. Would it make sense to skip the weeks that are not so close to your topic, or at least not tune in so much? The short answer is just no, don't do it. In more detail: deep learning works by putting together lots of little tricks, and we need all of them to be productive. If you miss even a day or two of content in this course, you will miss crucial components that you would need to be productive as a deep learning scientist. And without the early lessons, you will miss crucial details in the later ones.
So now it's time for you. Take your time right now: go back to your pod and introduce yourself, who you are, why you're here, what you're interested in. Discuss with your pod what you're most hoping to get out of this course. Which week do you think will be your favorite? And also share the discussions that you're having with us, if you'd like to.