Okay. Let's think about where deep learning works really well. We've seen a bunch of examples, and also some of the limitations, which we'll be addressing this week, of where it works less well.

First of all, just for fun, let's look at global investment in deep learning over the last year or so. Where is the money flowing in 2020 (in purple) compared to 2019 (in blue)? Interestingly, the biggest move right now is drugs: cancer, molecular design, drug discovery. But there's still a lot of money flowing into autonomous vehicles, education, machine translation, fraud prevention, lots of the classic pieces. Note this is venture capital; we're not talking about Tencent and Amazon and the well-established big players. But there's lots of investment across a variety of application areas. For example, companies like IBM are doing antimicrobial discovery using combinations of deep generative models (things like GANs) with molecular dynamics. So the focus is very much applied: embed the deep learning into a broader context and use it for something that can make money, because of course it's companies investing the money.

Thinking more generally, what have we seen? We've seen CNNs that are really good at object recognition: is this a fish, is this furniture, is this food? That works great. What we haven't looked at much is things like scene recognition: is this a kitchen, and what's in it? Is this an office? Find the computer, and find the relations: is there someone using the computer? A lot of work is happening right now on going beyond individual objects to sets of objects, scenes, and what's happening in them.

We've also seen how to do machine translation, and mostly it looks pretty cool. "The boy hit the ball with the bat." Oh, except, well, a lot of you speak Chinese, and there's something funny about that translation (pardon my terrible pronunciation). There are different kinds of bats: there are baseball bats, and there are bats the winged mammals. Google still sometimes doesn't quite get the right context, in spite of using beautiful transformer
models just like the ones we used. But mostly it works really well, and that's the least of its problems. We saw examples before where, if you're translating from Hungarian, which does not require gendered pronouns, into English, you get things like the gender-neutral "someone is beautiful, someone is clever" coming out as "she is beautiful, he is clever". So we've seen these problems of built-in bias, based on the statistics of the training language. These are gradually getting better; each year Google improves them a bit, but they're still by no means perfect.

We also see this when we look at scene captioning: take an image, encode it with a CNN into a deep representation, then use an LSTM or a transformer as the decoder. What you see is that often it does sort of well: "a group of people standing next to a man in a suit and a tie". Awesome, it's got what's there. But it's missing what's going on. It doesn't really understand the scene; it knows, statistically, what sorts of labels belong there. What's happening? Is this guy getting fitted for a suit? What is the action? So the labeling we get tends to be very superficial, missing the concept of what's happening.

If you look at where deep learning works, one way of caricaturing it is the one-second rule: most of deep learning currently works on things that humans could do in a few seconds. Recognize: is this a cat or a dog? Which of my friends is this? Caption an image. Translate a sentence. Pick where I moved the joystick right now in the video game. Mostly these are short, perceptual, quick reactions. They're not deep understanding of causality or of what's going on in the world; they're not extended reasoning. They're quick, reactive pieces; we haven't quite got the deep part. One thing I'd like to explore this week is how to get computers closer to doing the things that humans are good at.

So here's a typical sort of intelligence test, an IQ test. At the top you see two yellow stars, and that goes to three yellow stars. If we have four blue dots,
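The captioning pipeline just described (a CNN encoder producing a deep representation, an LSTM decoder emitting words one at a time) can be sketched roughly as below. This is a toy with random, untrained weights, not a real model: the "CNN" is stood in for by a fixed random projection, and the vocabulary, dimensions, and greedy decoding loop are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["<start>", "<end>", "a", "group", "of", "people", "man", "suit"]
V, D, H = len(VOCAB), 16, 32   # vocab size, feature/embedding dim, hidden dim

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in "CNN encoder": a fixed random projection from pixels to a feature vector.
W_enc = rng.normal(size=(D, 64 * 64)) * 0.01

def encode(image):
    return W_enc @ image.ravel()          # the image's deep representation

# One LSTM cell written out by hand; its input is [word embedding ; image feature].
E = rng.normal(size=(V, D))               # word embeddings
W = rng.normal(size=(4 * H, D + D + H)) * 0.1   # all four gates stacked
b = np.zeros(4 * H)
W_out = rng.normal(size=(V, H)) * 0.1     # hidden state -> vocabulary logits

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)           # input, forget, output gates + candidate
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def caption(image, max_len=10):
    feat = encode(image)
    h, c = np.zeros(H), np.zeros(H)
    word, out = "<start>", []
    for _ in range(max_len):
        x = np.concatenate([E[VOCAB.index(word)], feat])
        h, c = lstm_step(x, h, c)
        word = VOCAB[int(np.argmax(W_out @ h))]   # greedy decoding
        if word == "<end>":
            break
        out.append(word)
    return out

print(caption(rng.normal(size=(64, 64))))  # untrained, so the words are arbitrary
```

Trained versions of this loop produce exactly the kind of caption quoted above: a statistically plausible bag of labels, with no model of the action in the scene.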
what do we expect to see? Now, you've probably never seen a task exactly like this one. Okay, something like it: you've taken SATs or similar tests, maybe even a Raven's IQ test, and these look like IQ test items. But you're quite good at zero-shot learning: recognizing that this is similar in some way, figuring out what the objects are, and knowing what matters and what doesn't. Does the star shape matter? Does the blue matter? Does the number? And where is the number? All you get is pixels. Somehow you can see that, yeah, the right answer is probably five blue dots. You have learned, very abstractly, that these are objects, that you can count them, and that there is a process of adding one. Note that our current AI systems, our current deep learning systems, are crappy at this, and a cool thing we'll explore today is how they might get better at it.
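Once you have the abstractions of object, count, and adding one, the rule itself is trivial to write down symbolically, as in the sketch below. The grids and the "one more object" rule here are illustrative assumptions; the point is that the hard part for a deep net working from raw pixels is learning these abstractions at all, not applying them.

```python
def count_objects(grid):
    """Count connected groups of 1-pixels (4-connectivity flood fill)."""
    seen, objects = set(), 0
    for r in range(len(grid)):
        for c in range(len(grid[0])):
            if grid[r][c] == 1 and (r, c) not in seen:
                objects += 1
                stack = [(r, c)]
                while stack:                      # flood-fill one object
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                                and grid[ny][nx] == 1):
                            stack.append((ny, nx))
    return objects

def predict_answer(panel_a, panel_b, panel_c):
    """If A -> B adds one object, the answer should have count(C) + 1 objects."""
    delta = count_objects(panel_b) - count_objects(panel_a)
    return count_objects(panel_c) + delta

two_stars   = [[1, 0, 1],
               [0, 0, 0],
               [0, 0, 0]]
three_stars = [[1, 0, 1],
               [0, 0, 0],
               [1, 0, 0]]
four_dots   = [[1, 0, 1],
               [0, 0, 0],
               [1, 0, 1]]

print(predict_answer(two_stars, three_stars, four_dots))  # -> 5
```

Ten lines of flood fill plus one subtraction solves the puzzle, because the program is handed the right representation. Getting from pixels to that representation is exactly what we'll be looking at.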