So thank you, and thanks very much for having me. It's kind of a long story with Philippe, but I'm really glad to be here, and thank you for being here as well. I was at an RFID conference last week, and that community kind of coined the term IoT. And I started by saying, hey, you know what? IoT is going to disappear. They kind of liked that. And the reality is, I think IoT will disappear, because we have to make it disappear. Because if you think about it, it's not about connecting things so much. We've been talking about billions of devices connected; I stopped showing slides about billions of devices being connected. It's really about what you can get out of IoT devices and IoT deployments, and really focusing on the outcome that you can get out of IoT. Think about your mobile phone. You don't really think about whether it's Bluetooth, Wi-Fi, or cellular; you don't really care. What you care about is that when you use Google Maps, you get to the place you want to go. When you look at your email, you get your email. When you chat, you get your chats. It's not so much about protocols. It's not about connectivity. It's about what you can get out of that. And that's really the important thing. An IoT journey for a customer is really about the outcome, not so much about how it's done. So if you think about different use cases, try to imagine a city with no congestion. That's something IoT can get you, through connected cars and better traffic management, et cetera. Think about airports with no delays. Some of you probably flew in today or yesterday. If you get no delays for planes, it's because the engines are working well, the baggage is flowing, the customers, the passengers are flowing. This is what IoT can deliver. Imagine energy consumption everywhere at an optimal level for buildings, some of the largest polluters in the world. How can they get better energy consumption so they don't pollute as much? And think about medicine.
How can you get medicine tailored really for you? That's also an IoT use case. Think about it: it's knowing yourself, and having samples of data across a population to really understand what fits you and your profile. So when we talk to enterprises and customers, it's really about three problems they're trying to solve. One is reducing risk. The second is optimizing cost, and the last one is growing their business. Reducing risk is really about better understanding the systems they're using, so they can predict any failure, have better visibility into what's going on in their systems, and avoid downtime or anything like that. The second one is optimizing assets. There's a finite amount of resources at their disposal, whether it's money or assets or people. They need to really optimize that and get the most out of it. And by understanding what's going on, they can really optimize those assets and optimize their costs. And growth is really about creating new business models, and we'll talk a little bit about insurance at some point. It's how do you change your business model through better understanding of the data and of everything that's happening in the ecosystem you're in. So those three are really what we talk about with customers and enterprises. It's not so much about, how am I gonna connect my things? How do I deal with security? It's all about outcomes for them. And it's through outcomes that we can actually sell those IoT projects and deploy them correctly, because the ROI becomes justified. So when we look at the three challenges, here you go, the three challenges of IoT, it's always kind of the same across every industry and across all customers. It's about security. How do you securely connect devices? How do you secure access to the devices? How do you secure communication and control of those devices? It's about scaling, and we're talking about big data.
It's fairly easy to connect a few hundred thousand devices; that's fine. When you start hitting a million, two million, five million, 10 million devices, or very high throughput, for example video streaming from cameras, that becomes really, really hard to do. And that's quickly becoming a problem for customers. And finally, why are we even connecting all those things? Why do we deploy scalable infrastructure? It's to get insights out of the data. And without insights, there's no IoT, really. There's no point in connecting things. You need the insights to justify the ROI and improve your products, improve your customer engagement, and improve your business overall. So I'll start with data, lots of data. What we see, and we're seeing it quite a lot at Google, is that across all industries, you're getting an amount of data that's unprecedented. We're shifting from human-generated data, where you were swiping on a phone, tapping, clicking, loading pages, generating logs out of that, or interacting with some connected device, a door that opens or a valve that you turn on. That's human-generated data, and that's fairly discrete and limited. Now we're shifting to a world where data is generated constantly, 24-7, by machines at very high scale. Look at different industries; manufacturing is an example. There's a customer of ours that has machines that make the paper used to create diapers. Those machines generate about four terabytes of data per year, and that's per machine. And they have a plant full of machines. That's pretty quickly a lot of data that they have to manage. In healthcare, we have a customer called Dexcom that's working with Verily, one of the Alphabet medical device companies. Those guys are doing glucose monitoring for diabetes. The amount of data they're generating is about a terabyte per month from those glucose monitors. It's a little bit of data per person, but there are so many people. And that's compressed data.
So when they uncompress it to analyze it, it's about 10 times that size. About 10 terabytes of data per month, generated just by glucose monitoring. That's a pretty staggering amount. In transportation, we think about autonomous driving, but there are trucks, flights, planes, boats, fleets. A lot of the assets that are moving generate a ton of data. And more and more, actually: every new car, every new truck has more data and more sensors than before. That's about 22 to 25 gigabytes per hour, potentially, of data generated. And that allows insurers to get more insights. For example: how do you drive? How do you brake? How do you accelerate? How do you turn? How long do you drive? Where do you drive? In what conditions are you driving? All of that can inform, for example, new business models for insurers. Insurers can also cover, for example, a fleet management system where you're transporting perishable goods. If the truck fails, the entire shipment can be thrown in the trash. So insurance could be adapted based on how you maintain the truck, not so much on the workloads. It's kind of an interesting model. In consumer products, we have a customer, a toy company. That's an interesting one, because toys, games for kids, now it's all on iPads and phones. My kids keep playing on their phones. It's kind of annoying. They've kind of lost the habit of playing with real, physical toys. So that customer is building connected toys that let kids play with physical toys, but with other players across the globe via the platform. Those product lines are going to generate about four gigabytes per day per product line. So again, it's a data challenge for them. And when you start collecting that amount of data and you're talking petabytes or exabytes of data, which is the scale that Google operates at, you're really starting to master big data. But more important than mastering big data, you become a data-driven company.
And that's really what changes the game in how you can deliver business. Google has done that for quite some time. We have about seven products today that have over a billion active users each: Search, YouTube, Gmail, Android, Maps, some of those products. So we've been doing big data for quite some time. Actually, that big data has allowed us to supercharge innovation. Think about the innovation that was generated by Google; we've been at it for about 15 years. Take the Google File System: I think we launched GFS in 2002. That was a time when people were deploying racks of NetApp hard drives, while Google was taking cheap hardware and running a distributed file system across all our data centers. So it was a complete shift in how you handle big data, already in 2002. That led to a paper, and then to a distributed processing system across that cheap hardware, which was MapReduce. A lot of papers were published. Yahoo took that over and created Hadoop. And then it took about 10 years, from 2002 to about 2010 or 2012, for a lot of enterprises to start using Hadoop at scale. That's a 10-year lead time for Google versus the industry in this kind of big data. Same thing for Bigtable in 2005, which became the basis for HBase and Cassandra, and so on and so forth. So we were using those technologies 10 years ahead of the industry. And in those 10 years, about 50% of the Dow Jones companies got taken off the list. You can think about the technology choices those companies made. That's a big, big shift. And what we've done, as we move from just Google.com and our own services to being an enterprise player with Google Cloud Platform, is really try to take those services and make them available to our customers, through services like BigQuery, which is a petabyte-scale data warehousing tool.
If you do a query on BigQuery, you actually use Dremel in the back end. Same for Dataflow, and Bigtable is the same: if you use Cloud Bigtable, you're using the actual Bigtable that we've had for so many years. So those services become available to our customers. Talking more about IoT, last February we launched in GA a new service called Cloud IoT Core, which is more relevant to this audience. Cloud IoT Core was announced in May, went beta in September, and is now GA. This is really the service that allows you to connect millions of devices to GCP. It's bidirectional: an MQTT bridge and an HTTP bridge that handle the authentication, the device management, and all the communication. What's very interesting with IoT Core is that it's one of those services, alongside Pub/Sub and others, that are built on top of the platform. And that's a very, very different thing you have to keep in mind when you think about Google services. A lot of cloud vendors build their main business on top of their cloud. We build our cloud on top of our main business. It's a completely reversed model, because the Google infrastructure was built to handle billions of users across the globe. Building GCP on top of it allows us to have our own owned-and-operated fiber network across the globe. What that means is that there are about 100 points of presence that all IoT Core devices can connect to. When you go to your web browser and load Google.com, no matter where you are in the world, you're going to land on the closest POP. For devices, it's the same. With IoT Core, you connect to mqtt.googleapis.com; across the globe, you'll always get to the closest POP and then reach the region you want through our fiber network. That allows you to reduce latency and reduce manufacturing costs. You don't have to think about where that device is going to be shipped.
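To make the device side concrete: every device, anywhere in the world, connects to the same bridge endpoint, and identifies itself with a long-form client ID naming its project, region, registry, and device. A minimal sketch; all resource names here are placeholders, not real deployments:

```python
# Sketch of the connection parameters a device would use with Cloud IoT Core.
# All resource names below (my-project, europe-west1, ...) are placeholders.

MQTT_BRIDGE_HOST = "mqtt.googleapis.com"  # one global endpoint; routed to the nearest POP
MQTT_BRIDGE_PORT = 8883                   # MQTT over TLS

def iot_core_client_id(project: str, region: str, registry: str, device: str) -> str:
    # Cloud IoT Core expects this long-form client ID on MQTT CONNECT
    return (f"projects/{project}/locations/{region}"
            f"/registries/{registry}/devices/{device}")

client_id = iot_core_client_id("my-project", "europe-west1", "my-registry", "device-42")
```

Because the endpoint is the same everywhere, the device firmware never needs to know which region it will end up talking to.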
Everything is global, and you don't have to worry about where things are; they're just going to connect to the best place. Let's talk about security. We talked about data; let's talk about security. Very, very important, and actually dear to my heart. With IoT Core, we chose to use TLS for connectivity and encryption, and public key infrastructure, so a public and private key for each device. But we tried not to bloat the devices, so we're not using the mutual authentication of the TLS stack. We're just using TLS to establish a connection with the Google front end. Then you create a JWT, a little JSON payload, and you sign that JWT with the private key that sits on the device, and pass that as your password in the MQTT CONNECT. There's no handshake. There's no hosting of a certificate on the device, no hosting of the public key on the device. The TLS stack is much smaller, and you're less dependent on the TLS stack. You can use any TPM you want. And we have a partnership with Microchip that allows you to use their crypto chip to sign those JWTs, secure the private key, et cetera. So what you're getting there is something that I find pretty fun. We've been able to create a reference design with an 8-bit MCU, a Wi-Fi module, and a crypto chip that can connect securely to IoT Core, bidirectionally, using the public-private key and TLS, with an 8-bit MCU. I don't think anybody's been able to do that. And you have to admit, this is pretty fun. This will actually allow you to do very, very wide-scale deployments of connected devices for command and control or sensing. And that's because the stack actually fits in about 10K, so it's pretty small. Some examples: smart cities. In smart cities, a customer of ours, a smart parking technology company, is building sensors that you put in the ground that can detect whether a car is present or not. They switched from their own backend to IoT Core and the rest of GCP.
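The JWT flow described above can be sketched roughly as follows. One caveat: IoT Core actually verifies RS256 or ES256 signatures made with the device's private key; this sketch signs with HS256 from the standard library only so it runs without a key file, and the project ID is a placeholder. The resulting token is what goes in the MQTT password field.

```python
import base64, hashlib, hmac, json, time

def b64url(raw: bytes) -> str:
    # JWTs use URL-safe base64 with the padding stripped
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def make_jwt(project_id: str, key: bytes, lifetime_s: int = 3600) -> str:
    # IoT Core requires RS256/ES256 signed with the device's private key;
    # HS256 is used here only so the sketch runs with no key material.
    now = int(time.time())
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps(
        {"iat": now, "exp": now + lifetime_s, "aud": project_id}).encode())
    signing_input = f"{header}.{claims}"
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

# The token becomes the MQTT password on CONNECT; no client certificate needed.
token = make_jwt("my-gcp-project", b"demo-secret")
```

The appeal for constrained devices is visible here: the device only ever signs a tiny JSON payload, and never has to store or present a certificate chain.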
They've been able to go from one to two weeks to deploy a parking lot down to two to three days, because of the simplicity of deployment. Before, they had to deploy and configure each of those little sensors in the ground; now they just configure them at manufacturing time and deploy them. That's at least a 50% reduction in time to market. That allows them to go into the UK, the Netherlands, Australia. They're deploying in different countries really, really fast right now thanks to that, and they've been pretty vocal about their deployment. That's a cool customer example. So: security, big data, insights. Those three pillars. Insights is all about data analytics, ML, and AI. AI has been identified as the new competitive edge. Among companies that use AI in their business, 46% of IT leaders say it is a competitive differentiator. That's pretty staggering as a number. And actually, 50% of the people who use AI can quantify the ROI. That's very important, because if you can't quantify your ROI, you're not making the investment. So it's faster decisions, faster execution, more data-driven decisions. Those are pretty impressive numbers on the benefits you can get from AI. Let's take an example with Airbus. Airbus had been working on a problem for about 20 years. When they look at a satellite picture like that, what they want to know is: is that snow, is that water, is that cloud? They were doing it manually at the start; they would get the picture from the satellite and classify it by hand. Then the number of images grew to the point where they had 10,000 images a day to process. That was almost impossible to manage. They created some algorithms and ML models and got to about a 13 to 14% error rate on those images, which actually was pretty good. It took them 20 years to get there. So they came to Google and said, hey, can you help us with this? What can you do?
What we did was apply our Cloud Machine Learning Engine to those images. Running those models, in about a month with one engineer, we got them down to about a 4% error rate. We got the training time from 50 hours to 30 minutes. That's much faster, which is pretty cool, because time really matters for AI. If you're building models and you have to test them, train them, refine them, you can't just spend hours and days at the desktop waiting for those models to train. It just doesn't work; the data scientists are twiddling their thumbs, waiting for the model to be trained. You need to go fast, and that's how you refine the models and make them more accurate. So in this case, those are the clouds up there. I don't know if you saw them, but this is what the algorithm is giving them. This has been very, very interesting: you see, in a month, the benefit they got out of AI and the Cloud ML Engine. The problem with machine learning today is that you need data scientists. And they're really nice people, but out of 21 million developers, there are only about a million data scientists. And if you talk about deep learning researchers, that's in the thousands of people. It's very, very few; it's super hard to get any data scientist or deep learning researcher. So the problem is, how do you get that power to the masses of developers? We started by doing AI building blocks. One of those building blocks is Cloud Translation: you have a text in English and you want it in French; you send it to an API and it returns the French. That's pretty easy. You have the Vision API: send an image, and it will tell you what's in the image. It's a dog. It's a cat. A cat we know really well now. You can get a lot of things out of the Cloud Vision API, and for those, you really don't need any machine learning knowledge. You just send what you have, and it returns a classification with probabilities.
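As an illustration of how little the caller needs to know, this is roughly the shape of a label-detection request for the Cloud Vision API (the body of a POST to the v1 `images:annotate` endpoint). The image bytes here are dummy data, and no credentials or network call are shown; this only sketches the request structure.

```python
import base64

def vision_label_request(image_bytes: bytes, max_results: int = 5) -> dict:
    # Request body for POST https://vision.googleapis.com/v1/images:annotate;
    # the service responds with label annotations and confidence scores.
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode()},
            "features": [{"type": "LABEL_DETECTION",
                          "maxResults": max_results}],
        }]
    }

body = vision_label_request(b"\x89PNG dummy image bytes")
```

No model, no training loop, no feature engineering: the caller sends pixels and gets back classifications with probabilities.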
So we have Natural Language, which is more about understanding what's being asked, Cloud Speech for voice to text, and Video Intelligence to know what's happening in a video, which is super important. We've been able to build those because, if you think about video intelligence, for example, we have YouTube: millions and millions of videos. So we have a very, very big training set that we can train on and really get those models super, super accurate. The problem is, when we show the Cloud Vision API to companies, they say, oh, it's great, you can find cats. But what I really want is to find out, on my manufacturing line, which elements are not produced correctly. Which ones do I have to discard or not? That's really what you need to figure out, so the Vision API alone is not enough. So we just released Cloud AutoML, which builds on the foundation of those APIs but allows you to create your own models on top of them. You can very easily build models through a graphical user interface and train your model with limited ML expertise. Our model in the back end will reprogram itself and provide you an API for the specific use case that you want. That gives people who are not data scientists access to all the power of the Google Cloud Platform in machine learning. The cool thing behind it: we realized that when you do those trainings, you can use CPUs, you can use GPUs, and that's great, but even with all the GPUs in the world, you can't get to where we need to be in terms of training models. So Google set out to make that better, and we created our own processors that are focused on TensorFlow. We call them Tensor Processing Units. Those are actually pretty big. They're about that big. They don't run on battery, I can tell you, and they need to be cooled pretty heavily. Those are specialized TPUs, just for training and doing inference on TensorFlow.
So you manage them from the cloud, you allocate them to your workload, and, I was talking about time, you can really accelerate how you process those models and make them more accurate through this kind of machine. We have new versions coming all the time. We're already using those internally, and they've become publicly available to our customers. That's really to improve the speed of getting those models out. One example of using machine learning models that I really like is a little company called Oden Technologies. It's kind of a small company, and in one of their talks, I really liked how they introduced themselves. They said, we're working with the 99% of manufacturing. Because not every manufacturer is Tesla or Toyota, with robots building stuff on just-in-time provisioning of parts, all automated. That's super cool, but really, most of manufacturing is not that. They have old machines, kind of greasy, 15- or 20-year-old machines. They do collect data locally on the machine; there's a little device there that you can see. But it's not connected, and they don't really know how to optimize those machines. So Oden went with a ruggedized Raspberry Pi in a little box. And they thought they would fail with this, because they thought customers would say, oh no, an industrial Raspberry Pi, I don't want it. Actually, it's been very successful. So they plug that into machines, get the data into their cloud, train models on Google, and then provide those insights to customers. In this example, one of their customers has a plastic molding machine. Little plastic pellets go in there; the plastic is melted, it's rolled, it's expanded, and then it's molded. There are several processes in that machine. And just by optimizing the changeover, when do you do the melting, when do you do the rolling, when do you do the molding, and how do you change the machine configuration?
Just by doing that, they saved 3,500 hours of production time per year, which is about $1 million of production that that customer gained, just through insights into changeover. That's a pretty staggering benefit, just from applying AI, going through Cloud IoT Core, Pub/Sub, Dataflow, and some of the AI engines. So, to conclude: if we want IoT to disappear, it's really by making the three pillars of IoT seamless. The IoT connectivity, where we talked about security and scale; the big data, for storing and analyzing all that data with all the tools you need; and the machine learning models, to really get the insights. It's when the three together are seamless for customers that IoT starts to disappear, and we only talk about insights and outcomes for customers. On that, thank you. And if you're interested in what we do at Google in IoT, it's cloud.google.com/iot. It's pretty easy to get to. So thank you very much.