 I'm a data scientist at the Cedar Centre and I'm going to talk today about deep learning: a number of interesting application areas, a brief overview of deep learning itself, and also where Cedar is working in this field. So firstly, there's a huge buzz at the moment about deep learning. It's all over the media, and you might see it in technical and technology publications as well as in the popular media. There are a lot of headlines about AI and deep learning and the machines taking over the world, that humans are doomed. I think even my grandmother probably knows what AI is at this stage. But as well as these shock headlines that you get in papers such as The Sun and The Telegraph, there are also a lot of more practical applications and really interesting new problems, new areas where deep learning is providing huge innovations and huge leaps in the state of the art of what machines can do and how they can help us. So first, a brief introduction to what we mean by deep learning. Deep learning is the name given to a family of related machine learning techniques based around artificial neural networks. An artificial neural network is an architecture for a type of machine learning algorithm that's loosely based on how human and animal brains work. Now, we don't really have a deep understanding of how brains actually work, but for at least some of their small components there's a loose analogy to deep learning and artificial neural networks. And why is this popular these days? Well, recent improvements in computer processing power, together with the volume of data that has become available in recent years, have really led to rapid advances in this field. We can see this in the graph here, which is a Google Trends graph showing the volume of searches for the term "deep learning".
And you can really see that it's quite flat up until around 2012 (it's hard to see there), and since then searches have really skyrocketed as there's more interest in this area: more practitioners, more people working on it, more tools available. So what deep learning really is: a deep learning architecture is multiple layers of densely connected artificial neurons. Each of these neurons is an individual, very small, quite simple processing component that takes a simple input and produces a corresponding output. But when you have millions of these neurons connected together in very dense, complicated ways, they produce quite sophisticated behaviors that are built out of these quite simple components. And usually these architectures are organized into a number of different layers, hence the "deep" in deep learning. What happens is that these networks are able to learn quite complex concepts. In this diagram here we see a network that, for example, is learning to recognize faces from images. At the layers near the input, very low-level features of faces are being recognized; they're not really something that we'd recognize ourselves. But as we move deeper and deeper into the network, more complex and more abstract representations of what a face is are being learned. And there are many different kinds of architectures: convolutional neural networks (CNNs) are used a lot for image analysis, recurrent neural networks (RNNs) for time-series data. So we have a lot of these different architectures, and these are also combined in many ways for specific applications. So that's a brief overview of deep learning. But what can it actually do? What's the interesting bit about these networks?
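The idea of a neuron as a simple input-to-output component, with many of them densely connected in layers, can be sketched in a few lines of plain Python. This is a minimal illustration only, not any framework mentioned in the talk; the weights are made up for the example, whereas in real deep learning they would be learned from data.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs,
    squashed through a sigmoid activation into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def dense_layer(inputs, layer_weights, layer_biases):
    """A densely connected layer: every neuron sees every input."""
    return [neuron(inputs, w, b)
            for w, b in zip(layer_weights, layer_biases)]

# A tiny two-layer network with illustrative (not learned) weights.
x = [0.5, -1.2, 3.0]                                   # raw input features
hidden = dense_layer(x,
                     [[0.1, 0.4, -0.2], [-0.3, 0.8, 0.05]],
                     [0.0, 0.1])                       # 2 hidden neurons
output = dense_layer(hidden, [[1.0, -1.0]], [0.2])     # 1 output neuron
```

Stacking more `dense_layer` calls gives a deeper network; real systems use millions of neurons and learn the weights automatically.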
So one of the main application areas, and really where deep learning came to the fore in recent years, is image recognition: recognizing objects in images, classifying images, analyzing images. We can see a lot of applications in this area. For example, image search: a lot of people use Google Photos for storing the photos they take on their phones and devices, and now you can search on that. If you type "photos of the beach" into your photo library, it will show you any photos of the beach in your collection. Now, you never labeled or annotated those images, but there's deep learning technology that Google has behind the scenes to understand that these photos you took are of this concept, the beach. And there are many applications in self-driving cars, security, medical diagnosis, and robotics in this area. Another recent innovation that uses deep learning is Google DeepMind's AlphaGo. Go is an ancient Chinese game, a few thousand years old, and it's very complex. You might remember back in 1997, IBM's Deep Blue beat the human chess champion, Garry Kasparov. However, Go is a much more complex game with many more possible game states, and at the time many people were saying it would be many, many years before a computer could beat a human expert at Go. Well, that happened in May of this year, when AlphaGo beat the current world champion. And what's interesting as well is that AlphaGo has also added many new moves and playing styles to the game. So remember, this game has been studied for over two and a half thousand years, but human players are now learning from the moves that AlphaGo has brought to it. When we think of creativity and machines, it's something to think about that AlphaGo has really created some of these new styles. And what does that mean for creativity and machines?
Another very interesting application is in a number of grand challenges on mitosis detection, which is essentially learning to spot dividing cancer cells in close-up microscope slides. Now, this is a very hard problem. Human experts generally train for many, many years to be able to do this, and it generally takes a consensus of experts: any single expert might spot some cells and miss or misjudge others, so a number of experts usually have to work together to get higher accuracy. However, a deep-learning-based solution won this particular challenge a few years ago. And what's interesting is that no one on the winning team had any specialist domain knowledge about mitosis detection. They certainly weren't trained experts who'd spent years and years studying to do this particular task. And that same team has produced state-of-the-art deep learning solutions in many other problem domains. So the team don't have deep knowledge of the particular areas they're applying these algorithms to, but the algorithms have the ability to learn that domain knowledge themselves once they're provided with enough data. And this concept is represented in this diagram. On the top here, we see more traditional machine learning approaches, applied to a task such as identifying whether a picture contains a car, for example. Typically, you need a human in the loop here: a domain expert in image analysis to pick out particular features that the machine learning algorithms can then exploit to build a classifier which says, in an automated way, whether it's a car or not. So there's a human expert who needs to understand the domain, the task, and what needs to be done. However, with deep learning, the networks themselves can actually do this feature extraction part as well as the classification.
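The contrast between the two pipelines can be sketched roughly as follows. This is a hypothetical pure-Python illustration, not a real image system: the "features" here are a crude per-row brightness average standing in for whatever an expert might actually design.

```python
# Traditional pipeline: a human expert designs the features by hand.
def expert_features(image):
    """Hand-crafted features a domain expert might choose.
    Here: the average brightness of each row (illustrative only)."""
    return [sum(row) / len(row) for row in image]

def classify(features, weights, threshold=0.5):
    """A simple linear classifier over whatever features it is given."""
    score = sum(f * w for f, w in zip(features, weights))
    return score > threshold

# Traditional ML: the features are fixed in advance by the expert...
image = [[0.9, 0.8],
         [0.1, 0.2]]
prediction = classify(expert_features(image), weights=[1.0, -1.0])

# ...whereas in deep learning, the early layers of the network play the
# role of expert_features(), and their weights are learned from data
# jointly with the classifier, so no hand-designed features are needed.
```

The key difference is where `expert_features` comes from: written by a human in the traditional pipeline, learned from data in the deep learning one.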
So, given enough data, the network can learn what's important about that data as well as inferring the patterns needed to do the classification. This is a very important step, and I think it's one of the key things about deep learning. And we call this kind of data, that's really hard for humans to understand, opaque data. We describe this as data that's not easily interpretable by humans: it's very unstructured, it could be in binary or raw formats, and it's very large in volume. So for domain experts, for people, understanding it is highly complex and very difficult. This kind of data is also very common, but because traditional machine learning algorithms require that human expertise, it's very difficult to apply those kinds of techniques to it. And we see some examples such as raw sensor data: the sensors all around us are collecting very raw data, and often that has to be rolled up into higher-level interpretations for humans to understand, but there's a lot of that raw data and it's very complex. There's a lot of physical or biological process data, so think of raw audio, radio spectrum data from telescopes or other measurements, physiological data from measurements of our own bodies, and so on. And also machine-interpretable data, such as compiled machine code. All of this data is very difficult for humans to understand, and therefore very difficult for humans to pull features out of for traditional machine learning methods, whereas deep learning gives us an advantage in that maybe we can start applying these techniques to this kind of data a lot more. So now I'm going to introduce a colleague, Adita, who's going to talk next.