OK, sounds like I'll get started, and I'm told other people will keep rolling in. Anyhow, I want to spend a little bit of time with you and chat about what Red Hat and Google have been doing, specifically in the OpenShift arena. What I want to do is give you a sense of the work we're doing, and perhaps a little detail on where we're headed. A couple of heads-ups first: if you have not yet visited, head on down to our booth down the way. You can sign up, do some test labs, and pick up an AI kit when you come down. You'll see a whole bunch of other stuff we're doing there, and a lot in the keynote tomorrow, which will talk a lot more about how you get started with machine learning. Today I'm going to set the foundation for that keynote, which will give you an idea of what we're doing. So: we look at the world in a very similar way to how Red Hat covers it, and that's very much open first, open always, open everywhere. We work with Red Hat in every one of the open source communities you hear them talk about, the big one of course being Kubernetes, so we'll spend a little time there. The question people usually ask is: you're here with Red Hat, and obviously you made that choice, but why would you work with Google and Red Hat together? Why these two, and how do they actually fit? First off, this is a long-established relationship, one that goes all the way back to the core Linux kernel. If you ask, "Hey, how did cgroups and containers show up?", that's a Google contribution into core Linux, and one we've been working on with Red Hat ever since.
This is a little bit of the history of where we came from, all the way back to the world of KVM, through all the work we've contributed into CloudForms and Ansible, and obviously all the Kubernetes work with OpenShift. The engineering is there, and I think that's quite drastically different from the engagement models you may see with other providers. With a lot of other providers, Red Hat does an awesome job of getting OpenShift to run on their platforms. What's interesting, if you look back historically, is that OpenShift Dedicated has been running on Google since 2016, and that didn't require much engineering on Red Hat's part because, surprise, surprise, we already do that together. If you want to see the two teams at work, jump into GitHub and you'll find 20 different projects we're working on right now just under the Kubernetes special interest groups that Google and Red Hat are co-leading. We do that engineering every single week. Every single week we have the tie-ins that make sure the products align and that we're headed in the right direction. That means when Red Hat shows up and wants to run anything on Google, functionally that's what the teams have already been doing; there's no separate team for it, it's just mainstream to how we execute. We also have a very similar vision of DevOps, with a lot of investment there, as I said, in Ansible and elsewhere. Here's something to give you a sense of the size of the open-source commitment. In Kubernetes, which is now the largest project on GitHub, the two companies together contribute 45% of the code; Google does about 38% right now, and Red Hat is by far the second-largest contributor. Look at the Linux kernel, which is huge: between the two companies we're doing 10% of that. And it cuts across the board.
Even look over at OpenStack: Google contributes into OpenStack. You might ask why. Again, we want to meet customers exactly where they are, and a lot of them are running OpenStack; a lot of you, obviously, are running OpenStack. So there's a lot of community-powered innovation, and it cuts across the big areas. We talked a bit about Cloud Native; it goes down to the Fedora Project, it's in the core Linux product, it's gRPC for networking, the big ones you're used to, and all the way out to machine learning. Those are the areas we'll spend some time on: the Apache projects like Apache Beam, TensorFlow, R. Those are big areas Google has long invested in, and two of those are primarily Google projects as well: TensorFlow, the second-largest project on GitHub, and Beam, the universal runner that lets you run on any Hadoop distribution. That said, let's jump into the fun stuff. The other question that comes up, now that I have a few people here I can engage: why Google? Why our platform? What does it do, why is it different, and what does this co-engineering actually get you? First off, just flat out the best performance. Flat out best performance. That comes from the co-engineering we do, but it's also benchmarked. If you're looking, there's an open-source benchmarking tool called PerfKit Benchmarker, on GitHub, if you want to look at the measurements. These are open-source measurements of all the important things you might want to do: anything from the boot time of your Linux kernel to the speed of your Mongo cluster to your network throughput.
These are all pieces that everyone cares about, and they're really important. That's something we've invested in and do very well, something that can be benchmarked with open-source tools anytime you want, and it's best in class. Two more pieces. Tomorrow's keynote will spend time on security, so I won't dig in here. Reliability: we own our own network, we have best-in-class operations, and the world that YouTube sits on is one you get to leverage as well. A couple of pictures. This first one is relatively easy to read; it looks like everyone else's diagram of how many zones and regions you have. The big blue dots with little numbers in them are regions that exist today. Functionally, about the only interesting gap is Africa coverage, in terms of core data centers. Blue ones are already built out and ready to go; white ones are a couple more that will land. We build about ten of these a year, so this will just keep multiplying. That's interesting, but pretty similar across providers. The more important piece to look at, particularly when you think about a global application or the best-in-class performance you're trying to deliver to your retail customer, is all those little small dots. Those small dots are points of presence, and all those points of presence come from an investment in YouTube, which means everywhere a customer exists, they're only one hop from getting onto Google's network. You're with an ISP; the customer is one hop from hitting the point of presence. From that point of presence, you're on dark fiber directly to the Google data center. Process the request, then Google data center back out to the point of presence, back to the customer. That's four hops. That does not exist anywhere else in the world, even close.
Couple on top of this the best CDN in the world. You have the best CDN, the fastest network, fully encrypted in transit. In transit; I didn't say at rest, in transit. You have the best security, best speed, best solution, the best place to run your OpenShift, bar none. That network takes huge investment, and it's required years and years of refining to put out there. The other topic that comes up a lot: I talked a bit about performance, but let's be honest, we also care a whole lot about price. A standard provider will give you a list price. With no work whatsoever on your part, and this isn't really obvious, so let me walk you through it a little, we have a whole bunch of mechanisms, from per-second billing to committed-use discounts to other pieces that don't exist elsewhere. On average, most customers are looking at about 60% better price performance running on Google. Sorry, I should say 60% better price, with better performance. And all of this is done in a way we believe is significantly easier. For example, if you run a VM for the whole month, we give you a 30% discount because you ran it for the month; you used the platform as efficiently as you could, that saves us money, so you should save money. You shouldn't have to do any magic RIs, buy unique pricing mechanisms, or have an economist on staff in order to do this well. That said, there are a couple of other pieces that are pretty important for OpenShift, and a lot of them also matter to your data tier. You can resize disks on the fly; that's a non-trivial thing to build, pretty hard actually, but really important. Boot times, second to none. Custom machine types. So, is anyone running anything in cloud? Just help me out a little, yes?
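The automatic full-month discount mentioned above can be sketched as a tiered calculation. This is a minimal illustration of how a sustained-use style discount arrives at roughly 30% off for a VM that runs the whole month; the tier rates below follow the historically published scheme, but treat the exact numbers as assumptions and check current pricing documentation.

```python
def sustained_use_multiplier(fraction_of_month):
    """Effective price multiplier for running a VM this fraction of a month.

    Each successive quarter of the month is billed at a steeper discount:
    100%, 80%, 60%, then 40% of the base rate.
    """
    tier_rates = [1.0, 0.8, 0.6, 0.4]   # rate charged per quarter-month tier
    billed = 0.0
    remaining = fraction_of_month
    for rate in tier_rates:
        used = min(remaining, 0.25)      # at most a quarter-month per tier
        billed += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return billed / fraction_of_month    # average multiplier actually paid

# A VM that runs the entire month pays 70% of list price: a 30% discount.
print(round(sustained_use_multiplier(1.0), 2))   # 0.7
# Run it only half the month and the discount is smaller, automatically.
print(round(sustained_use_multiplier(0.5), 2))   # 0.9
```

The point of the design is the one in the talk: the discount is applied automatically from observed usage, with no reservations to purchase up front.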
When you buy in the cloud, what do you buy? A 2-core, a 4-core, an 8-core? It's in powers of two, and then you can choose some combination of high CPU or high memory. A custom machine type is exactly what it says: if you want a 14-way machine, with any amount of memory and any disk attached to it, that's exactly what you get. Now, there are a couple of reasons we're able to do that, and it comes down to the way Google is architected. In the keynote yesterday we went through a lot of questions about what's called KubeVirt, which is a VM running in a container. Great idea, but if people choose that, how do you actually do it? Well, the funny thing is, everything in Google that's a VM runs in a container, and we did it for a really interesting reason. Go back, say, six years, when we first decided we were going to roll out VMs. We had a decision to make, easy or tough depending on how you look at it. Every other public cloud solves for VMs by laying down bare metal and throwing VMs on top of it, and you're off and running. The big problem was, no one in Google ran a VM; everything was already container-based. So we made a very fundamental decision, and it took a lot more engineering and a little more time to get to market: we took VMs and we put them in containers. Now, these aren't Docker containers, but you can think of them in relatively similar terms. Google had already orchestrated all of this; we have something that runs our containers on our platform, called Borg. Borg schedules between two and three billion containers a day, and as it schedules those, it figures out the very best host to run each one on. It does all that work.
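To make the "14-way machine with any amount of memory" concrete, here is a small sketch of the shape rules behind custom machine types, as opposed to a power-of-two menu. The constraints encoded below reflect the documented rules for first-generation (n1) custom types, but treat the exact bounds as assumptions and verify against current documentation.

```python
def valid_custom_machine(vcpus, memory_gb):
    """Return True if this vCPU/memory combination is a legal custom shape."""
    if vcpus != 1 and vcpus % 2 != 0:        # vCPU count: 1, or an even number
        return False
    per_vcpu = memory_gb / vcpus
    if not (0.9 <= per_vcpu <= 6.5):         # memory must stay in a band per vCPU
        return False
    return (memory_gb * 1024) % 256 == 0     # total memory in 256 MB increments

# The 14-way machine from the talk, sized with exactly the memory you want:
print(valid_custom_machine(14, 64))   # True: 14 vCPUs, ~4.6 GB per vCPU
print(valid_custom_machine(2, 1))     # False: under 0.9 GB per vCPU
```

The upshot is that instead of rounding a 14-core workload up to a 16-core instance, you pay for exactly the shape you declared, which is only feasible because the scheduler underneath packs arbitrary shapes onto hosts.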
Taking VMs and putting them in containers allowed us to do a whole lot of really important things around security and custom VMs, and to make sure you get best-in-class service. Let me ask: anyone running in a public cloud today? If you are, has anyone had what's called a noisy neighbor? If you don't know the term: in a virtualized environment, it's when someone like, say, Netflix happens to land on the same host you're on. That's not good news, because Netflix is going to use every last resource it can, particularly on the network side. What we see with customers is that when that happens, they end up with a noisy neighbor but they don't know why, and it's not easy for them to tell, because of course the resources are shared and the provider can't show what's going on. The work I mentioned, putting VMs into containers, allowed us to resize them and move them. We can take a VM streaming a 1080p service and seamlessly move it to a completely different host, in the same region of course, and the customer will not see a break in any part of that stream. And that means we can do other things too. No noisy neighbors is nice, but this next problem is slightly frightening. If you're not familiar with what you're looking at, this is Spectre and Meltdown. These are flaws in the CPU hardware itself that became public early this year, though in reality they were found over a year before. They sit below the operating system, they are really, really dangerous, and they're extremely difficult to patch. Interesting detail: the team that found these exploits is a team at Google called Project Zero. They found them almost a full year before the public disclosure, last March in fact, and you can follow all of that detail.
They found this problem last March. They went back and helped build every patch, which we then shared globally with all the other providers. And because we have VMs running in containers, we were able to patch seamlessly, way before anyone else: customers never knew, no downtime, no pain. For other providers, by the way, you generally saw a big hit in performance; because of the patching required here, customers pretty regularly saw something like a 30% hit. We were able to do the core infrastructure and core engineering work, patch it seamlessly for our customers, months before anyone even knew these issues existed. That's part of what that core infrastructure delivers. Now, a little bit about OpenShift on top of that. That's your base: why the engine is solid, what the transmission looks like. Once you have your engine and transmission, let's put this slide up. You've seen this a ton of times; you've heard about OpenShift all day, all week. So I'm going to spend my last couple of minutes on this slide, which may seem a little dense, so let me make sure I explain what you're seeing. At the core here is your core OpenShift environment. It runs on top of Compute Engine, so all the things we talked about earlier, the custom VMs, the boot times, the throughput, all the things that make sure these applications have a killer experience, sit on top of our infrastructure. It can leverage a global container registry, it sits on our persistent storage, and then of course there's the whole networking layer on top. You bring the same thing you've always run; it's all been abstracted. We've worked with Red Hat, and they do an awesome job; these are things you would never have to touch in your OpenShift deployment.
You just take advantage of everything you just heard, because it already runs there. That's powerful, but the power of open is really what's on the right: the services. Most of us are pretty familiar with what a service broker is, but while having a service broker is useful, actually getting access to services is the tricky part. So Red Hat and Google, along with others, built the Open Service Broker API, which means, and I'll break the news to you here, you get instantaneous access from your applications to every Google service you would want, whether you're on-prem or running on top of Google. The use case we're seeing with customers, and you'll see this again with Coles tomorrow, is that they love the data services. They love our machine learning, they love the APIs, they love all the things they want access to, and they get seamless access. There's nothing that company ever had to do, because Google, Red Hat, SAP, and IBM all worked together on this Open Service Broker that serves all of these, anytime you want, from anywhere you want. So when you want to supercharge an application, or you want to say, "Hey developer, would you like to offer that in another language?" Perfect: one API call to the best translation in the business. You want to analyze speech or text? Perfect: one API call and you've got it. Same thing if you're running on the platform. Now, I wouldn't suggest you run your data tier separately; don't run your SQL instance in Google with your application on-prem, try to keep those together. But if you decide you want it on Google, you also get access, without any work, to some very, very powerful tools like Spanner.
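What "one API call through the broker" looks like underneath can be sketched from the Open Service Broker v2 spec: the platform provisions a service instance with a single PUT to the broker. This is a minimal sketch of building that request; the service and plan IDs here are made up for illustration, since in a real cluster the service catalog discovers them from the broker's catalog endpoint.

```python
import json
import uuid

def provision_request(service_id, plan_id, parameters):
    """Build the path, headers, and body of an OSB v2 provision request."""
    instance_id = str(uuid.uuid4())            # caller-chosen instance identity
    path = f"/v2/service_instances/{instance_id}"
    headers = {
        "X-Broker-API-Version": "2.13",        # OSB spec version header
        "Content-Type": "application/json",
    }
    body = {
        "service_id": service_id,              # which catalog service to provision
        "plan_id": plan_id,                    # which plan of that service
        "parameters": parameters,              # service-specific settings
    }
    return path, headers, json.dumps(body)

# Hypothetical catalog entry for a managed-database style service:
path, headers, body = provision_request(
    service_id="example-database-service-id",
    plan_id="example-small-plan-id",
    parameters={"region": "us-central1"},
)
print(path.startswith("/v2/service_instances/"))  # True
```

The developer never sees this plumbing; OpenShift's service catalog issues the call and hands the application a binding with credentials, which is why access feels like "one click" from the console.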
If you're a SQL user, for reference, Spanner is what happens when you take a NoSQL-style database that can span the globe and give it SQL with real consistency. You always had to accept a trade-off with a database: I can be consistent, or I can be geographically distributed, but not both. Spanner is a global database. It's the one that sits underneath much of Google, the granddaddy of our databases. It lets you write plain SQL and commit, within milliseconds, on opposite sides of the globe at the same time. You don't have to manage it, you don't have to do any work; it's done and ready to go, and it's something you can surface to your users through Google. If they're running on Google, they have access to all of that right through the service broker. So you get what feels like a first-party service, on-prem or with GCP, and for that matter, if you're running on another cloud and you like the services we have, you're welcome to access them from anywhere. This is really powerful: what we're seeing is a ton of work with customers around taking great applications and making them much better, giving developers exactly what they want with no work. The other thing this solves is the RBAC controls, the role-based access controls: the consistency of who gets access to what. All of that is extremely important, and all of it is baked in because of the work we already did around the service broker. So, one last thing before I pause; we'll spend more time tomorrow, and I'm almost at the end of my time. If you want to play with these APIs or do anything specific, again, we have a booth. We'd love to give you a Google Home or a Raspberry Pi kit that you can win; all you have to do is a couple of coding exercises, and you're off and running.
We'll spend a bunch of time going through this with you at the keynote tomorrow. And since everyone here is a Red Hat customer, I'll be fully transparent: it's easy. If you're a Red Hat customer, it all runs on Google today. Best place to run your RHEL? Want per-second billing on RHEL? On Google. Want to bring your RHEL license to Google? Done. Ansible? Best place to run Ansible. Same thing with OpenShift. All of that is easy, ready to go, and has been for a long time. Last piece: someplace to go do this, because clearly you should come run it. Anyone new to OpenShift? Let me say that another way: who has used OpenShift? Who has not used OpenShift? OK, so this might be a fun place for you: Test Drive. This sets up an entire environment. One click, all set up, and you have an entire OpenShift environment in a test drive, all built for you, ready to go, build whatever you want. It's called Test Drive, and it costs nothing. Sign up, ready to go, off and running, with all this stuff integrated. Day one with OpenShift is awesome, but it takes some day-one work to set it up; Test Drive means you can go straight to day two, get to play with it, no problem whatsoever. So I'll stop and pause with the last couple of seconds: any questions anyone brought that they'd like answered? I'll stay here; I can answer them publicly, or you can come ask me. Questions going once, going twice. I'll be right here. Thank you so much; I really appreciate all your time.