Okay, great. Welcome to the deep learning class, CIS-522. This course is brought to you by my friend Lyle Ungar and myself, and this video is primarily to introduce ourselves. Lyle, who are you? Hi, I'm Lyle. I'm a professor of computer science. I've taught machine learning for many, many years, decades even. And I'm really excited to be joining Konrad here, appearing together for the first time to teach deep learning. Yeah. Deep learning is one of my favorite topics. I've been thinking about related ideas for more than 20 years, so in that sense I'm super excited to be teaching it. And I'm also particularly excited because Lyle and I come from very different backgrounds: Lyle comes from a machine learning background, and I come from a neuroscience background. Well, oddly enough, I was trained as a chemical engineer, modeling the real world, and then I took some digressions, did genomics, did economics, and mostly I do psychology of language these days. But yes, machine learning, very much on the applied side. I worked at Google doing large-scale machine learning, a company that has since been taken over by deep learning. And so it is fun to bring together perspectives that are complementary. Yeah. I want you to say a little bit about your background, how you ended up in deep learning. I think it's sort of fun. Yeah. So I did a PhD in neuroscience, and I wanted to use neural nets, the original neural networks. Back then "neural network" was a really bad word, because in the late nineties there was nothing as uncool as artificial neural networks; we hadn't yet discovered that we should call it deep learning, which makes it sound much better. During my PhD I was interested in how we could build predictive models of neurons, and I ended up working on things that were very much like fitting a deep neural network to, well, a real neuron.
And I never quite gave up on that idea. Which is funny, because we really are opposites on that: I spent my whole career trying to build external knowledge into machine learning. How do you build in knowledge of syntax? How do you build in information about how a DNA sequence is organized into promoter regions? Very much from the engineering side, trying to build in some human knowledge and learn the rest. But I love teaching together, because I love the contrasting views. Though in that sense, of course, we share something: if you want to build good models of brains, you also have to build in prior knowledge. You have to build in prior knowledge to build good models of anything, including brains. Yeah, and deep learning is so complicated. Deep learning is all about building in the right kind of prior knowledge, and that's what I hope the students will get out of this course: that they'll be able to really use it, build in the right prior knowledge, and make it work for them. Cool. So maybe we should say a word or two about why this course is different. I'm super excited about it, because this is finally a chance to teach a course the way it should be taught, as opposed to the way it's usually been taught. In the Middle Ages, professors would write on the chalkboard and students would copy it down; you know the old joke: material passes from the chalkboard to the notes of the students without passing through the mind of the professor or the mind of the student. Our hope is to actually get into people's brains, and Neuromatch showed how cool that could be. So tell us about Neuromatch, what it is, and why it's relevant to this course.
Yeah, so last year I ran this deep learning course without you, unfortunately, and I ran it in the traditional way of running these courses, and I wasn't quite happy with how everything went. Then in the summer something happened, namely Neuromatch, that totally changed my outlook on teaching. When the pandemic hit, I got together with a lot of other professors to build a really, really good data science curriculum for neuroscientists. And what we did is we went all in on this inverted-classroom, code-fast way of teaching. The way it ended up working, we always had these short lectures, just a few minutes, followed by the students doing things. By immediately doing things when you're taught a concept, that concept becomes meaningful to you. That's why the way people were learning from it felt so powerful to me. And it worked extremely well despite the fact that we taught thousands of people at the same time. With that, I saw the value of teaching with this code-fast, immediately-apply-what-you-learn approach, and that's why I think we need to design very different courses. So my big hope for this deep learning course is that it's going to be a new kind of course that really gives you the skills to do things, and insights that stay with you. And that's my hope too. Having taught cognitive psychology for years, it really is the case that you learn better by doing things, trying things, figuring things out, talking to people, asking questions, thinking about it, and answering questions than you do by listening to a couple of guys talk on and on. So maybe with that we'll wrap this up. But welcome to deep learning. We're super excited to have you here and to have you as part of our grand experiment. It should be a great semester. Welcome. See you all soon. Yeah, I very much look forward to the course.
I also hope to be collaborating with all of you on making the course better. And lastly, the one thing that's really important to me: in lots of ways, there's just a little bit of magic in deep learning, and I hope that you'll get to see it.