Thank you very much. So I'm going to talk to you today about machine learning and how we're using it to solve interesting problems in the world. And I'll focus a bit on some of the ways that I think it could help in the field of energy. Caveat: I'm not in the energy field myself. So first, there's tremendous interest in machine learning. You've probably seen news headlines about the scientific innovations that have been developed using machine learning to improve lots and lots of different fields. And the machine learning research community is growing incredibly fast: the number of papers being contributed more than doubles every two years. So research contributions are growing faster than the rate we got accustomed to for computational power via Moore's law for many years. We're seeing a tremendous number of people enter this field, try lots of things, and create new ideas. One of the techniques that has really been successful over the last decade is actually not that new an idea: it's the notion of artificial neural networks, which are these deeply layered abstract models that can learn from very raw forms of data. So this is a kind of cartoonish representation of a simple neural network that can take the raw pixels of an image and learn to categorize it, in this case as a cat or a dog. But it could be 20,000 different categories. And the way this works is you just expose the system to examples of the patterns you want it to learn: this is a cat, cat, cat, dog, dog, cat, dog. For every example, the system makes a prediction. If it gets the prediction correct, you don't do anything. If it gets it wrong, you make little adjustments throughout the model so that the next time it sees this example, or an example like it, it's more likely to get the correct answer. And so you just kind of iterate.
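As a toy illustration of that predict-and-adjust loop, here is a minimal sketch. This is not an image model; it's a tiny logistic-regression classifier on made-up two-feature examples, with labels 0 and 1 standing in for "cat" and "dog":

```python
import numpy as np

# Toy version of the "predict, then adjust on mistakes" loop described above.
# Synthetic 2-feature examples; the true rule is "label 1 if x0 + x1 > 0".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(2)  # model weights, all zero to start
b = 0.0          # bias term
lr = 0.1         # size of each "little adjustment"

for epoch in range(50):
    for xi, yi in zip(X, y):
        # Predict a probability for this example.
        p = 1.0 / (1.0 + np.exp(-(xi @ w + b)))
        # Nudge the weights in proportion to the error; when the
        # prediction is already right, the nudge is near zero.
        w += lr * (yi - p) * xi
        b += lr * (yi - p)

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (preds == y).mean()
```

After enough passes over the data, the accumulated small adjustments recover the underlying rule, which is the same dynamic that, at vastly larger scale, trains the deep networks described in the talk.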
And if you have enough training data, you can actually make these systems do very powerful things. And it's not just images. For example, you can take an image and categorize it not just as a cat or dog, but into huge numbers of different categories, 20,000 or 100,000 distinct categories, actually more fine-grained recognition than a human would be able to do. If I just look at that, I would say monkey; I'm not a monkey expert. But the system, with the right training data, can actually distinguish dozens of species of monkeys. You can take in audio data, the raw waveform of what is being said, and learn to output a transcript of it. This is the speech recognition problem that computer scientists have worked on for many, many decades. And with this machine learning-based approach, by learning in a completely end-to-end manner, taking in audio data and using a transcript of what was said as training data, you can train a really high-quality speech recognition system. You can take in a sentence in one language, one word at a time, "hello, how are you," and produce the corresponding output in another language, "Bonjour, comment allez-vous," purely by observing English and French sentence pairs, for example. You can even take an image as input and do something more complex than just classify it: you can actually write a complete simple sentence that describes the scene, "a blue and yellow train traveling down the tracks." And if you'd asked me a few years ago whether computers would be able to do this anytime soon, I would have said, probably not that soon. But in the last few years, we've actually been able to make advances like this throughout the field. So that's pretty exciting. Those are basic capabilities.
And for the rest of the talk, I think the really broad implication is that machine learning is actually going to be able to tackle not just computer science problems, but many of the most important and interesting problems in the world, some of which are related to energy. So within our research group, we actually have work going on in all of these highlighted areas, and I'm going to talk about the four that are in boldface. So first, one of them is to restore and improve urban infrastructure. And clearly, we're on the cusp of having autonomous vehicles that can actually operate in real cities. Our Waymo subsidiary at Alphabet has actually been running trials in Phoenix, Arizona earlier this year with people in the backseat and no safety drivers in the front seat. So this is not some distant, far-off dream. This is actually going to transform our transportation system in a way that could make it much more energy efficient. Many of these grand challenge problems rely on better understanding of chemical or material properties. So I'm going to describe something that's more in the context of drug discovery, but you can imagine that understanding chemical properties might be useful for developing better solar energy materials, things like that. So if you look at what a quantum chemist sometimes wants to do, they want to take a molecule and then understand some properties about that molecule. Is it toxic? Does it bind with this kind of protein or that? What are its quantum properties? And the typical way they do this in computational chemistry is with a fairly expensive computational simulator, in this case a density functional theory simulator, that takes a description of a molecule as input and then, in a very fine-grained, time-step-oriented way, eventually simulates enough about that molecule to give you the answers you want.
So it turns out, in this particular problem and also in many other fields of science, we're starting to see the ability to use that simulator as a teacher for a machine learning-based model. You can essentially feed examples through the simulator and use the results as training data for a model that tries to approximate what the simulator would do. And that turns out to be remarkably successful: the model learns to approximate the simulator well enough that you can't distinguish the two in terms of accuracy, but all of a sudden that system is 300,000 times faster. And that really is a fundamentally different kind of tool. If your tools get 300,000 times faster, you do science in a completely different way. You can imagine screening 100 million molecules that might be of interest while you go make coffee, and you come back and find 10,000 of them that are interesting in some way. OK, so one area where we think machine learning can really help is in improving energy efficiency. Google operates many large data centers, which consume significant amounts of power because they have fairly computationally intensive computing equipment in them. That's a picture of the inside of one of our data centers. This is the cooling infrastructure for that data center, which is pretty complicated. And it has a lot of knobs: not physical knobs that someone goes and turns, but you get the idea. There are actually a bunch of settings for that cooling equipment that need to be set in order to cool the entire building appropriately so that you operate within safe margins. And until recently, we were using sort of human intuition about how to set those knobs so that it would operate at the right temperature and so on. But it turns out that's a machine learning problem, right?
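Going back to the simulator-as-teacher idea for a moment, here is a minimal sketch of the pattern. Everything in it is a toy: `expensive_simulator` is a made-up smooth function standing in for a density functional theory code, and the learned approximation is just a polynomial fit rather than a neural network.

```python
import numpy as np

# Stand-in for an expensive simulator (e.g. DFT): in reality each call
# might take minutes; here it is just a smooth function of a 1-D input.
def expensive_simulator(x):
    return np.sin(3 * x) + 0.5 * x**2

# 1. Run the expensive simulator on a modest set of inputs
#    to produce training data (the "teacher" step).
x_train = np.linspace(-2, 2, 200)
y_train = expensive_simulator(x_train)

# 2. Fit a cheap learned approximation to mimic the simulator
#    (a polynomial here; in practice, a neural network).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# 3. The surrogate can now screen new candidates far faster than
#    the simulator, with accuracy close enough to be interchangeable.
x_new = np.linspace(-2, 2, 1000)
max_error = np.max(np.abs(surrogate(x_new) - expensive_simulator(x_new)))
```

The surrogate's worst-case error over the screening grid stays tiny relative to the function's range, which is the sense in which the talk says the two become indistinguishable in accuracy while the learned model is dramatically cheaper per evaluation.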
You can imagine a machine learning algorithm fiddling with those knobs, keeping the system in a safe operating range but tuning it so that it operates in that range in a way that is more energy efficient. And that is actually what our DeepMind colleagues, in collaboration with our Google data center operations team, did. So basically, this is the energy used by the air conditioning and cooling system in the data center. In this demonstration, the machine learning control system starts off, then you turn it on, and then you turn it back off so you can see the effects. And you see a pretty dramatic drop in energy usage when the machine learning system is in control. And perhaps more importantly, as the system gets more data, as it operates over a period of a year or so, it has now seen the seasonal patterns and the weather patterns and so on, and can get a better sense of how to make the system even more energy efficient. So the more data we have, shown on the right-hand axis, the better the improvement in the energy efficiency of the overall system. And I think this is applicable not just to data centers but to all sorts of building cooling and heating systems, which are a pretty significant component of energy usage. We're also doing more basic research in our research organization on, for example, how we can use machine learning to help contribute to the development of fusion reactors. So we've been working with Tri Alpha Energy, which is a company that's trying to build fusion reactors. They've built four generations of their system; this is their latest one, called Norman. And the essential problem here is you want to get plasma into a stable configuration so that it continues to generate energy.
And this thing has thousands of parameters about how you can set this and that: how much plasma to inject, what ionization voltage to use, lots and lots of other settings. And perhaps more worrisome, depending on how you set those things, you might really damage your single machine that costs $50 million or something. So you know that the green star is a safe setting, but you'd like to explore the various settings around it that might also be safe, that are interesting, and that might generate more energy or better properties of the reaction than the area around the green star. But you don't want to destroy your machine. So this safe exploration is what you want to do, and you can do it with a human in the loop. You can generate a set of parameters, and the physicists who are experts on this machine can inspect the experimental parameters you want to run, validate that they seem okay to try, and then go ahead and do that. You can then find interesting things that maybe don't generate good results but don't harm the machine, and so on. You kind of walk along all these parameters and avoid unsafe areas, which is the most important thing. And in particular, in an earlier publication on the previous iteration of the system, they were actually able to use this approach to find an interesting, unexpected rising-temperature regime in this setting. This is still very early work, but it seems like a nice complement of machine learning applied to an energy problem. Okay, one of the final things I want to talk about is engineering the tools of scientific discovery, which was one of the grand challenges that was thrown in. I think it was a bit more of a catch-all kind of thing.
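That safe, human-in-the-loop exploration loop can be sketched very roughly as follows. Everything here is invented for illustration, not taken from the actual fusion work: the safety model, the approval step, and the objective are all stand-ins, and a real reactor has thousands of knobs rather than one.

```python
import random

random.seed(0)

SAFE_REGION = (0.0, 1.0)   # assumed nominal bounds for a single knob
known_safe = [0.5]         # the "green star": one setting known to work

def predicted_safe(setting):
    # Stand-in safety model: only accept small steps away from settings
    # already validated, so we never jump deep into unknown territory.
    return any(abs(setting - s) < 0.1 for s in known_safe)

def human_approves(setting):
    # In the real system a physicist reviews the proposed experiment;
    # here we just auto-approve anything inside the nominal bounds.
    return SAFE_REGION[0] <= setting <= SAFE_REGION[1]

def run_experiment(setting):
    # Fictional objective (say, a temperature measurement) peaking at 0.8.
    return 1.0 - (setting - 0.8) ** 2

best = (known_safe[0], run_experiment(known_safe[0]))
for _ in range(200):
    # Propose a small step away from the best setting found so far.
    candidate = best[0] + random.uniform(-0.1, 0.1)
    if predicted_safe(candidate) and human_approves(candidate):
        score = run_experiment(candidate)
        known_safe.append(candidate)   # expand the validated frontier
        if score > best[1]:
            best = (candidate, score)
```

The loop walks outward from the known-safe point, improving the objective while every proposed experiment passes both an automated safety check and a (here trivially mocked) human review before it runs.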
But it's pretty clear that if machine learning is going to be an important component of making progress on a lot of these challenges, then we want to design tools that allow us to express machine learning systems and apply machine learning to various kinds of problems. Our group has actually produced a system called TensorFlow that we open sourced at the end of 2015, and it's now a quite popular standard for doing lots and lots of different machine learning applications, both in the research context and in applying machine learning to different problems in the world. This is a graph of interest in TensorFlow compared to a bunch of other open source machine learning packages. So it's really developed an external community since we open sourced it. We now have a lot of people contributing to the system, and people are using it for all kinds of things, including various kinds of energy-related uses. Lots of fun uses, too. There's a company in the Netherlands that builds fitness sensors for cows, and they analyze the data from the cows with TensorFlow to detect, is this cow feeling a little sick, or something like that. The other point I'll make about energy and computation is that these machine learning algorithms are really different from the traditional kinds of software we run on computers. In particular, they have two really nice properties. All the machine learning models I've described to you have these two properties. First, they can deal with very reduced precision: you can carry out all the computations in the system with just one decimal digit of precision, and that's just fine with the algorithms; they're stable even at that reduced precision. And second, they're all made up of compositions of a handful of very simple computational primitives: essentially matrix multiplies, vector dot products, dense linear algebra at reduced precision.
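The claim that roughly one decimal digit of precision is tolerable can be illustrated with a small experiment, assuming a simple round-to-nearest quantization scheme (the numbers here are illustrative and not taken from any real chip):

```python
import numpy as np

# Quantize matrix entries to a coarse grid (steps of 0.1, i.e. roughly one
# decimal digit), do the matrix multiply, and compare against full precision.
rng = np.random.default_rng(1)
A = rng.uniform(-1, 1, size=(64, 64))
B = rng.uniform(-1, 1, size=(64, 64))

def quantize(x, step=0.1):
    # Round every value to the nearest multiple of `step`.
    return np.round(x / step) * step

exact = A @ B                          # full-precision product
approx = quantize(A) @ quantize(B)     # reduced-precision product

# Worst-case error relative to the largest full-precision entry.
rel_error = np.abs(approx - exact).max() / np.abs(exact).max()
```

Even with every input rounded this coarsely, the per-element quantization errors largely average out across the dot products, so the relative error of the product stays small. That tolerance is exactly what lets specialized hardware trade numeric precision for energy efficiency.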
So if you can build computers or computing devices that are specialized for reduced-precision linear algebra, then you can effectively save a lot of energy and make much more powerful computers for the same amount of energy input. So, for example, this is our third generation of chips that we've been building at Google to help us accelerate our machine learning computations. For the first generation of these systems, we wrote a paper showing that they were between about 30 and 80 times more power efficient than traditional CPUs or GPUs. And they're designed to be connected together into very large configurations for large machine learning systems, so it's really important that these systems be quite energy efficient. The other thing is that those earlier systems are designed for very large data center deployments in these large, warehouse-sized facilities. But it's also clear that as we get more edge devices and robots and autonomous vehicles and phones, we want the ability to run machine learning models in very power-constrained environments. And so that's why it's important to also focus on really lightweight chips, or even components of chips, that can run these kinds of computations in an always-on manner with very little energy usage. So we've also been working at the complete opposite end of the scale to make energy-efficient chips for doing machine learning in phones and robots and so on. And with that, I know this was a very brief talk, but I hope I've whetted your appetite for the fact that machine learning, and deep learning in particular, is really helping us tackle important problems in society, not just computer science, and that by making use of machine learning we will be able to make progress on some of these difficult problems. And with that, thank you.