So thanks for the introduction. I'm going to talk a little bit about TensorFlow in general and TensorFlow 1.0 in particular, and about where we've gone since we open sourced it. So first of all, TensorFlow is machine learning for everyone. It's an open source platform that gives you a tool chest for building machine learning applications. One of the things that was really important was that it would scale from research to production. Research matters because machine learning is evolving very fast right now, so there's lots of new research, and it's important to be able to move it into production as fast as possible. Internally, we have a huge research organization, and we want to make sure that the research we do can get into our applications as soon as possible. And that's also good for everybody else who wants to use TensorFlow.

The core tenets of TensorFlow are that it should be fast, it should be flexible, and it should be production ready. Those three things together should sound a little bit weird, because usually these are trade-offs: usually you can only pick a couple of them at once. But one of the achievements of TensorFlow's engineering is that it manages all three at once. Another great thing about TensorFlow is that it supports many platforms. In particular, you can run it on different hardware architectures like CPUs and GPUs, which are really important because they're so fast that they make a lot of deep learning applications tractable. Internally, we also have specialized TensorFlow hardware called TPUs, Tensor Processing Units, and other companies are working on these kinds of things as well. Specialized hardware for these new types of machine learning applications is going to become really important.
And TensorFlow is going to be the place these companies target. So once you have a TensorFlow application, you should be able to run it on any type of platform. You can run it on your computer — your Linux computer, and now even your Windows computer — but also on Android and iOS, and there are some example applications that you can find on our website. There are also lots of interesting examples of people running on Raspberry Pi, a really small system-on-chip Linux computer that can do a lot. These are so cheap that you can deploy the machine learning models you train anywhere, which is really exciting.

Next, multiple languages. When we came out, we had C++, which is the core implementation of TensorFlow, but most people were building models with Python, and this is still the case. One of the things we've done for 1.0 is we've built a C API, which is a great way for other languages that support a C ABI to plug in. So now we have Go bindings, Java bindings, Haskell bindings, and there are even R bindings for you statisticians.

In addition to the core toolkit with which you can build machine learning applications, one of the most important things is being able to visualize models, to understand why your models behave in a certain way. So one of the things we put out with TensorFlow was TensorBoard. This is basically a web-based front end that can read TensorFlow checkpoints and show you information. You can put little breadcrumbs into your TensorFlow models that record histograms, scalars, and images, and then see them visualized in a central place. This is a great way to compare your models. This example, however, is a really advanced visualization, and we're moving toward plug-ins where you can extend the visualizations to do whatever your domain needs.
So in this case, MNIST, which is basically handwritten digits, and you're trying to classify them: what is the actual symbol, what is the number? Here we're seeing a data set that's been visualized in 2D using a principal components analysis. What this does is it lets you see the clustering, the separation that's induced by running the model. And you can zoom in and actually look: if there's an outlier, why is it an outlier? You can check by looking at the image directly. Normally you would have to figure out, oh, it's example number 30; now let me visualize example number 30. This is just way more direct and much more exciting.

So that's what TensorFlow basically is — but what has it been used for? If we look at some of the applications Google has used TensorFlow for, one of them is to increase the accuracy of neural machine translation. Using a TensorFlow model, we were able to reduce translation errors by 55% to 85%, especially between English and Chinese, which is generally very difficult. On the lighter side, you can do artistic experiments. Stylization is something machine learning has been used for, but usually you have to train on a specific style: you have one style, you make a specific model, and you have to retrain it all from scratch. One of the cool things about this research from Google Brain is that you can blend styles after the fact. You can say, I want 50% Van Gogh and 50% Japanese wave — what will that look like? You can try it in real time, and that makes it a lot more approachable. So people are doing all kinds of interesting things with TensorFlow, and one of the great things about open source is that once you release something, people come up with applications and ideas you never could have imagined, and it snowballs and becomes even more exciting.
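The 2D embedding view described above comes from projecting high-dimensional activations onto their top principal components. As a rough sketch in NumPy — with random data standing in for real MNIST activations — the projection is just a centered SVD:

```python
import numpy as np

# Stand-in for high-dimensional model activations (e.g. 500 examples
# with 64 features each); real data would come from the trained model.
rng = np.random.default_rng(0)
activations = rng.normal(size=(500, 64))

# Center the data, then project onto the top two principal components,
# which are the first two right-singular vectors of the centered matrix.
centered = activations - activations.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection_2d = centered @ vt[:2].T  # ready to scatter-plot

print(projection_2d.shape)  # → (500, 2)
```

With real embeddings instead of random noise, the 2D points cluster by digit class, which is exactly the separation the visualizer lets you explore.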
Another area of interesting active research is how to build more sophisticated TensorFlow models. One of the most difficult things about machine learning is that you have to design the architecture. You have to decide, for an image model: how many layers should I have? How should these layers interact? What size should they be? How many skip connections should there be, for something like a ResNet? What some of our researchers did was train a reinforcement learning model that can actually produce a TensorFlow model. So it's one step removed: a model that was trained in TensorFlow to generate a TensorFlow model. In addition to image models, they can train something like a recurrent neural network — a replacement for an LSTM that's actually more efficient than an LSTM — using, again, reinforcement learning. We're also doing some research on genetic algorithms, where you start from a very simple TensorFlow model that is very bad at classifying images, run a bunch of evolutions, and eventually end up with something that's very good.

Another really exciting area of research is the combination of multiple models. Suppose you have an image model; you might be able to classify images into some fixed set of classes. But what if we combine it with a language model? If we do that, we might be able to generate captions. So here on the left, this image was automatically captioned as "a blue and yellow train traveling down train tracks," and on the right, "a person on a beach flying a kite." These are really accurate captions, produced automatically using, again, a TensorFlow machine learning model built by our Google Brain researchers. So I've told you a little about what TensorFlow enables and what people have done with it. But how do you actually use it? I'm just going to give a brief overview.
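The evolutionary search mentioned above can be sketched with a toy mutate-and-select loop. Here the "model" is just a vector of numbers and "fitness" is closeness to a target — a stand-in for the accuracy of an evolved architecture, not Google Brain's actual method:

```python
import random

random.seed(0)
target = [0.3, -1.2, 0.8]  # stand-in for "ideal" model parameters

def fitness(cand):
    # Higher is better: negative squared distance from the target.
    return -sum((c - t) ** 2 for c, t in zip(cand, target))

best = [0.0, 0.0, 0.0]  # the "very simple, very bad" starting point
for _ in range(2000):
    # Mutate the current best and keep the child only if it is fitter.
    child = [c + random.gauss(0, 0.1) for c in best]
    if fitness(child) > fitness(best):
        best = child

print(fitness(best))  # starts at about -2.17 and ends near zero
```

Real architecture evolution replaces the vector with an encoded network, and each fitness evaluation with a training run, but the keep-the-fitter-variant loop is the same shape.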
You can learn more about this on the website, but the basic idea is that TensorFlow runs graphs: graphs of nodes, where each node represents a computation and each edge represents a tensor. So you might have two tensors coming into an add node, and that node would add the two tensors. The way you construct these graphs is usually with a Python program. On the left, if you have this Python program, it gets turned into the graph on the right. The advantage of switching to this graph formulation is that the graph can be scheduled onto computation resources, be it a single CPU, a single GPU, multiple GPUs, multiple CPUs, or even multiple CPUs and multiple GPUs all intertwined. You have control as a user to do this, and we're also using reinforcement learning to improve automatic placement: if you have multiple compute resources, where should this computation actually be scheduled? So that's basically how TensorFlow is programmed.

Now let's talk about the progress to date, leading up to 1.0. When we started with TensorFlow internally, before we released it open source, the question was, how fast would it be picked up inside Google? If you look at the graph: really fast. In fact, it's in a lot of our applications, including Search, Gmail, Translate, Maps, Android, Photos, Speech, YouTube, Play, and many others. And many of our researchers are building new models, and they're building them in TensorFlow. So it has enjoyed great usage internally. Externally, it has become the number one GitHub repository for machine learning. We can also look at the trend since it was open sourced — that's the really steep part of the curve. At the Dev Summit, which was maybe a month ago, we had 44,000 stars, and that has now increased to over 50,000 stars. So that's a lot of stars. But what does that mean in terms of interaction?
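The graph-of-nodes idea described at the start of this section can be sketched in plain Python. This is a conceptual toy, not TensorFlow's actual implementation, but it shows the key property: you build the graph first, then run it later, and shared subgraphs are computed only once:

```python
# A toy dataflow graph: each node holds an operation plus its input
# edges, and evaluation walks the graph, computing each node once.
class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function combining the input values
        self.inputs = inputs  # upstream Node objects (the edges)

    def run(self, cache=None):
        cache = {} if cache is None else cache
        if self not in cache:  # memoize so shared nodes run only once
            vals = [n.run(cache) for n in self.inputs]
            cache[self] = self.op(*vals)
        return cache[self]

# Construction and execution are separate phases, just as building a
# TensorFlow graph is separate from running it in a session.
a = Node(lambda: 3.0)                      # a constant tensor
b = Node(lambda: 4.0)                      # another constant
total = Node(lambda x, y: x + y, a, b)     # the add node from the talk
print(total.run())  # → 7.0
```

Because the whole computation is data (a graph) rather than eagerly executed code, a runtime is free to place different nodes on different devices — which is exactly what TensorFlow's scheduler and the automatic-placement research exploit.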
So let's look: we have a lot of Google developers working on TensorFlow, but there have also been 475-plus unique non-Google contributors to TensorFlow. In addition, we've had 14,000 commits in the last 14 months. And lastly, we've had just a huge amount of engagement: lots of YouTube videos, tutorials, models, translations, and projects. In fact, if you search GitHub for TensorFlow, you'll find 5,500 repositories that include the word TensorFlow.

In addition, we've had a lot of direct engagement. We take the open sourcing seriously — we're not just dropping the code and forgetting it; we're engaging with the users and developers who are using TensorFlow. In particular, we have an active Stack Overflow community with over 5,000 questions answered, and a lot of them have been answered by actual Google engineers. Similarly, we've had over 5,000 GitHub issues filed and answered, including 160 new issues per week as of late, and these are all looked at at least once by us. That level of engagement has given us input on what we should be focused on, and I think it also makes the community a lot happier with where TensorFlow is going, to have this type of interaction. In addition, TensorFlow has been used in a lot of courses, including at the University of Toronto, Berkeley, and Stanford. All of this has really happened in less than a year. And as I said, the interaction with the community has been really exciting because it has helped direct TensorFlow: we focused on a lot of things that were requested by the community — Python 3, HDFS support, new versions of CUDA, better iOS support, Macs with GPUs, and higher-level APIs. So what can we do in the next year? That's kind of up to you, and we're excited for lots of new contributors. So let me talk about the features in TensorFlow 1.0. How many of you have actually used TensorFlow before? So not nobody, but not everybody.
How many of you want to use TensorFlow? OK, so that's a lot. So what's happened in TensorFlow? Well, the good news is that we've made it a lot faster. We've taken a look at the most commonly used models, and we've managed to achieve a 58x speedup on 64 GPUs for Inception v3. That near-linear scaling is really good. In addition, we've worked to make the APIs much more flexible. Specifically, we've added higher-level interfaces, which make it much easier to start using TensorFlow. And lastly, we've worked to make it more production ready: we've stabilized the API and ensured backward compatibility in the Python API. We've also introduced a serving API, which allows you to do a lot of the tasks related to taking a model from where you've designed it in research to actually serving it in a real application.

So let's drill into the speed a little bit. One of the things we've done is optimize the models, which I've already alluded to, and we're going to continue looking at the models people find most important, making sure they perform well — and in addition, trying to make performance as good as possible so that if you write a new model, it performs well too. For the long term, and even the short term, we've developed the XLA compiler, and TensorFlow can target this compiler. Parts of the graph that were run as many individual ops can be fused into one op that is backed by generated assembly code. So you get specialized compilation on the fly, like a JIT compiler — very similar to, say, how a Java JIT works. This is also really useful for pre-compiled models on small devices like phones. OK, I'm going to skip that slide. OK, and that one as well. So in addition, we've made the API much more flexible with higher-level interfaces.
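The op fusion mentioned above can be illustrated conceptually with NumPy. In the unfused form, every op materializes a full intermediate array; the fused form is written as one composite expression, which a compiler like XLA would turn into a single specialized loop (NumPy itself still allocates temporaries — this only sketches the idea):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 1_000_000)

# Unfused: each graph op materializes a full intermediate array,
# the way separate TensorFlow ops would without a compiler.
def unfused(x):
    t1 = x * x        # intermediate buffer 1
    t2 = t1 + x       # intermediate buffer 2
    return np.tanh(t2)

# "Fused": the same math as one composite expression. An XLA-style
# compiler emits a single kernel for this, with no named intermediates.
def fused(x):
    return np.tanh(x * x + x)

print(bool(np.allclose(unfused(x), fused(x))))  # → True
```

The payoff of real fusion is fewer memory round-trips and kernel launches, which is why it matters most on accelerators and small devices.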
So when we released TensorFlow, it started as this kind of layered stack. There's the low-level implementation of the kernels that are used to implement the individual nodes and ops, and there's an execution engine. Then there are the Python API and the C++ API, which actual developers interact with. What we found is that as people used TensorFlow more and more, they started doing very similar things: they needed training loops, they needed simplified ways of describing their image models, and these patterns kept recurring. Several libraries emerged that tried to make these things easier. What we've done in 1.0 is introduce the layers API, which takes things like convolutions, max pools, and average pools — the kinds of building blocks used in image models — along with primitives for RNNs, and puts them into this part of the API. And this is really easy to use.

In addition, we've built even higher-level APIs. Estimators let you automate how data comes in, how you run training loops, and so on. And then Keras, which has become kind of a standard spec for how machine learning models are specified, is something we're going to integrate as tf.keras, which is going to make it really easy to build complicated models. If you're interested in what you can do with very little code, look at the Keras talk from the TensorFlow Dev Summit: it shows how to combine a language model (an embedding model), an image model, and a video model all together. It's really quite sophisticated, and really easy to do. And at the top level are canned estimators. When we came out with TensorFlow, one of the things that was really exciting was the Inception model, a pre-trained ImageNet model that people have been able to apply: they've built cell phone apps and Raspberry Pi projects that recognize images by taking this pre-trained model.
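The layered-API idea described above — reusable layer objects stacked into a model — can be sketched in a few lines of plain Python. This is a conceptual toy, not the actual tf.layers or Keras code; real frameworks add trainable variables, losses, and training loops on top of this same pattern:

```python
import numpy as np

class Dense:
    """A toy fully connected layer: y = activation(x @ w + b)."""
    def __init__(self, n_in, n_out, activation=None, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.activation = activation

    def __call__(self, x):
        y = x @ self.w + self.b
        return self.activation(y) if self.activation else y

class Sequential:
    """Chain layers: each layer's output feeds the next layer."""
    def __init__(self, layers):
        self.layers = layers

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

model = Sequential([
    Dense(4, 8, activation=np.tanh),  # hidden layer
    Dense(8, 2),                      # output layer
])
out = model(np.ones((3, 4)))  # batch of 3 examples, 4 features each
print(out.shape)  # → (3, 2)
```

The value of this pattern is exactly what the talk describes: the repetitive structure of models lives in reusable layers, so model code shrinks to a declaration of the stack.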
And training Inception takes some time, but you can take the Inception model and specialize it for specific things. There's an example on our website that shows how to specialize it for flowers, for example. If you have some set of images and you want to make this classifier better on that set, you can take this canned estimator, and if it's not good enough as is, you give it some different data, retrain a little part of it, and that can give you a much better result. We're hoping to extend that even more and provide more models. We've also open sourced several other interesting models, like the SyntaxNet parsing framework, which is really cool.

OK, so let's look at what the flexible high-level API enables. If you look at our Getting Started guide, the first tutorial on how to use TensorFlow, it shows how to use the low-level Python API, but it also shows the equivalent in the high-level Python API. If you compare the lines of code — don't try to internalize this code — the main point is that with the high-level API, you get more result in less code. But if you need to dive into the low-level API, you can certainly access it. So you can start with the high-level API and go as low-level as you need to, depending on whether you're doing really custom things.

Lastly, I want to talk a little more about the production readiness of TensorFlow. Specifically, going to 1.0, we made some changes to the API that were not backward compatible, and we wrote an upgrade script that takes pre-1.0 models and updates them to 1.0. So if you have Python code written for an older version of TensorFlow, you run this script and it will change most of your API calls to the new form. Now that we have 1.0, we're not going to change those APIs in a backward-incompatible way throughout the 1.0 cycle. This means you won't need to do this again for a long while.
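The flowers-style specialization described above — keep the pre-trained network frozen as a feature extractor and retrain only a small new head on your own data — can be sketched numerically. Everything here (the random "frozen" weights, the toy labels) is a stand-in for a real model like Inception and a real image dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
frozen_w = rng.normal(size=(10, 16))  # pretend pre-trained weights

def features(x):
    # The frozen part of the network: its weights are never updated.
    return np.tanh(x @ frozen_w)

x = rng.normal(size=(200, 10))                   # toy "images"
true_head = rng.normal(size=16)
y = (features(x) @ true_head > 0).astype(float)  # toy binary labels

head_w = np.zeros(16)  # only this small new head gets trained
for _ in range(500):   # plain logistic-regression gradient steps
    f = features(x)
    p = 1.0 / (1.0 + np.exp(-(f @ head_w)))      # sigmoid prediction
    head_w -= 0.5 * f.T @ (p - y) / len(y)       # logistic-loss gradient

accuracy = np.mean((p > 0.5) == (y > 0.5))
print(accuracy)
```

Because only the 16-parameter head is trained, this converges with little data and compute — which is why specializing a pre-trained Inception is so much cheaper than training it from scratch.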
In addition, if you have saved models, they will also continue to work. For details on exactly what the backward compatibility strategy is, you can go to the web page listed in the API compatibility section. One more thing in TensorFlow 1.0 is TensorFlow Serving. As I mentioned, this makes it easy to deal with multiple models, with validation issues around models, and with handling low-latency requests. So basically, if you need to go from research to production, this is where you should look, and there's a great blog post that details all the things it can do for you.

OK, so I've told you what's in TensorFlow 1.0. I just want to go through a couple more case studies that are really interesting applications of machine learning in general and TensorFlow in particular. So, TensorFlow at Google. One of the coolest apps, I think, is this one, where you can translate signs. It's a nice blending of image recognition techniques, language modeling, and computer graphics, because it's able to recognize the original sign, find the text in it, put it through the language model to do the translation, and then re-render the new text over the old text. In addition, if you use things like Inbox Smart Reply, that's also done with TensorFlow. The common thing about all these app icons is that they're all Google apps, and they all have some part of TensorFlow and deep learning in them.

TensorFlow is for everyone, though, and now that it's open source, there are lots of interesting things coming up in the open. In terms of helping people, consider the case study of medicine. One of the things researchers at Google have been looking at is that in diabetes, there is a tendency to have hemorrhages in the retina, and if too many of these happen, they can cause blindness. It turns out that if you detect this early enough, there are some inexpensive drugs that can combat it.
But the key is detection, and it usually requires an expert ophthalmologist. So we had a project where we detected this automatically, at an error rate similar to a human doctor's. That's really exciting because it means more of these cases can be detected, preventing people from going blind. Not everything is so serious, though: there are things like art being done with TensorFlow. If you look at music, and you have a sequence of notes, the question is: what is the next note? A trained musician, or somebody who knows music theory, could do this well, but I might not be able to. Can we train a model that does? That's the goal of Project Magenta: basically to take about 10,000 works of music and feed them into a sequential model that can predict the next bit of music. They have a demo — I think you can find it on YouTube — where you play a little bit of music and then it tries to imitate you.

One of the neat things they did in their paper is a study. So I'm hopefully going to be able to play this music. Maybe — yeah. The idea was to generate Bach-like music, and this was generated with their system. Then they did a user study: they asked people, which sounds most like Bach — this model, real Bach, or some other models? Interestingly enough, their model was chosen as the most Bach-like. That raises a question: why would that be? How can that be, since Bach is Bach? Well, there's some speculation on that. One thing you could consider is that Bach is an actual composer who went through different phases and periods; as composers evolve, they change their style. If you look at early Picasso versus late Picasso, the imagery is very different, and that's similar for composers. So in a sense, when you think of Bach, you think of the average Bach rather than all of his works.
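The next-note prediction described above can be sketched with a first-order Markov model: count, for each note in a corpus, which note most often follows it, then predict by looking up the most common successor. This is a far simpler stand-in for the recurrent sequence models Magenta actually uses, but the predict-the-next-symbol framing is the same:

```python
from collections import Counter, defaultdict

# A toy "corpus" of note names; real training data would be thousands
# of scores encoded as note sequences.
corpus = "C D E C C D E C E F G E F G".split()

# Count successors: successors[prev][nxt] = how often nxt follows prev.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(note):
    # Predict the most frequent successor seen in the corpus.
    return successors[note].most_common(1)[0][0]

print(predict_next("C"))  # prints D, the most common note after C above
```

A recurrent model improves on this by conditioning on the whole history rather than just the previous note, which is what lets it capture longer-range structure like phrases and cadences.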
But the computer's view of it is much more precise. So there are some interesting applications, hopefully. And that's what's new in 1.0. I hope you've enjoyed this talk. If you're interested in getting involved in TensorFlow, there are lots of things you can do. Specifically, you can do the tutorials and read the code — it's all available on TensorFlow.org, and there are lots of resources there. If you're totally new to ML, you can follow the bottom link, which is basically a set of YouTube videos we've prepared that give you a very gentle introduction to machine learning. There's also a Udacity class that uses TensorFlow behind the scenes but also teaches you the basics of machine learning. If you want a deeper study, you can look at Stanford's CS231n class, and Udacity also has a more advanced machine learning class as well. So with this, there should be enough resources to keep you busy for a while, and of course you can find many, many other resources on the web. So with that, I'm open for questions.

Audience: I have a question. Would you talk a little bit more about the status of the Keras API in master?

Yeah, so it wasn't quite ready in time for 1.0. I think there's basically a change being prepared for that, and it's probably coming in the next month or so, I think — but don't quote me on that.

Audience: Are these free, for the Udacity Nanodegree?

The Udacity Nanodegree — the course is free. If you want a certificate, I think you pay for the certificate, but the learning resources themselves are free. And I think there are numerous resources today if you want to get started on machine learning. Yeah.

Audience: For someone developing with the C++ API, right now there's a huge difference between what's accessible and what's not.

Yeah, so the question was, basically, is there any plan to make the C++ API more complete so you can do more stuff in C++? The plan, as I was alluding to, is for the C API to become more featureful.
Once that's more featureful, all the language bindings will be able to be more featureful; the idea is to put as much there as possible. I think one of the main things missing from the C++ API right now is gradients, and that's an important feature for building new models. So that is planned and under way, and yes, we definitely want to make the C++ API better. Does that answer your question?

Audience: What do you think are some more specific use cases?

I think your question is, what are some more specific use cases or applications of TensorFlow? Yeah. So I gave some examples. I think the sky is really the limit in terms of what you might be able to do, but a lot of the applications that are most popular right now are image-based applications, like vision, where you have convolutional networks, and that enables a lot of classification. But I think the combination of language and image models is really exciting and will allow more applications. I don't know if that answers it. The other way to look at it is the chart showing how many Google products use it — and they're literally all different use cases. You see it in email, you see it in photos, you see it in analytics. So it's really universally applicable, and that goes with the tagline of machine learning for everyone. Just like mobile phones and apps can be used in literally every domain, machine learning is nearly at that stage now: whatever business you might be in, whatever use cases or domain you are in, there is scope for machine learning to be applied there. I guess I can elaborate and just say that TensorFlow allows you to build arbitrary computation, so in a sense it can do anything. What machine learning applications tend to be good at are things where there is uncertainty, or where data will help you.
Things where you have a mathematical governing equation — an ordinary differential equation, say — are usually handled really well with traditional methods, as is ordinary arithmetic, where you can logically lay out exactly what the program should do. So anything outside that kind of problem would be a good application for machine learning, and thus for TensorFlow. Yeah?

Audience: Can we have a link to your slides?

Do we have a link to your slides? I don't have them up online; that's separate — we'll have to look into whether I can release them. But they will be part of the video; all of this is being recorded anyway. Yeah. All right, so thank you so much, Andrew, for sharing about TensorFlow. Given that we want to set up.