Awesome, thanks James, thanks Hunter, and thanks to everyone involved in setting up this Kubernetes user community. Being at Google I'm obviously biased, but I think it's really fascinating that we have such a quickly growing community here in Singapore focused on Kubernetes, because to the geek inside me the power of Kubernetes is just really, really fascinating. It's awesome that all of you are showing up on a Thursday night after work, coming out to this part of town, and sitting down to really take in these concepts. And it's awesome that you're able to give back, sharing examples of your learnings and your passions around Kubernetes, and helping showcase the potential of what's possible with it.

The way I like to look at Kubernetes, and not to get too meta, is that it gets to the point where infrastructure becomes code, right? You can spin up compute, memory, disk, and applications as code, and through the power of Kubernetes, and one reason I think Google is involved, through code you can really build a more adaptable infrastructure. I think it's just really, really fun. Obviously it's a broad topic, quickly changing and quickly evolving, but I think the community both here in Singapore and worldwide is really something special.

With that, let me do a couple of slides here to talk a little bit about Google's perspective. Oh, it's black, okay, we'll give it one second here. Sometimes it's a little slow to power up. There we go, I can feel the hum, okay. So first off, my name is Devin Mitchum. I'm based here in Singapore; I sit just about 100 yards that way.
I've been in Singapore for about five years, and I'm on the Google Cloud team here. I threw a quick deck together to talk about Google Cloud, but more specifically about Kubernetes. I know Hunter and James have covered a lot of this, but I put it together partly in appreciation that Kubernetes is turning two.

First off, I just want to mention Google Cloud. So what is Google Cloud? Google has been around for close to 18 years now, and this number right here, $30 billion, is how much money we've put into our data centers and, more importantly, into our infrastructure. With Google Cloud we're now opening that infrastructure up for developers to use if they want to consider going to the cloud.

When it comes to cloud, and I know this was covered a little bit already, I think everyone in the room knows there are different places you can put compute. I won't go too much into this, but there are physical co-location facilities, there are virtualized private clouds, and then there's the fully managed public cloud. For those who are maybe new to cloud, those are the three broad concepts of where your computing can live.

A little bit about Google. Is anyone here familiar with Google's network? A little bit, okay. Funny enough, just about last month we announced a Google Cloud region here in Singapore. As the crow flies, it's actually just towards Jurong. It's the most recent addition to our global cloud footprint. Every dot you see up here, every city, is an actual physical data center location where we have Google Cloud capacity. So as you're developing your app, if you're trying to reach customers, you have these different choices here.
All the blue lines in between are actual fiber optics where Google either solely owns or shares capacity, a dedicated network you can use to transfer loads between regions. So there's a lot of really cool architecture you can do here. Any Australians in the crowd? A couple, okay. We just launched Sydney two weeks ago as our most recent cloud region. In terms of Google scale, I'll cover this really quickly, but we have a lot of locations where our network is present, and we connect to more or less every ISP. So if you're thinking about where to place your compute capacity, Google is pretty well covered.

Now let me jump into Kubernetes. This is a shot right here of our data centers. Anyone seen a photo of a Google data center before? A couple? Okay. Our data centers are pretty fascinating; I actually used to work in them and helped build them. You can see it's a typical raised floor, a lot of sheet metal, machines on the right, full of compute, and a lot of networking. But you can see that at this scale it's not about the server, right? It's about a bigger picture than the server: having CPU, having memory, having disk, and having it more as a utility. A lot of these key concepts matter when it comes to Kubernetes and your design process. You can almost think that a computer is no longer one piece of metal; it's more or less whatever you want it to be.

Okay, so we talked about containers. Here's a quick blurb on Google and containers. Back in the day you had Linux, you had Unix, you had different variants of BSD, and you had this concept of jailed operating systems: the idea of really containing an operating system so nothing could leak out.
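That containment idea still exists in plain Linux today. As a hedged sketch of the "nothing leaks out" concept: Linux namespaces provide the isolation half, while cgroups (mentioned next) handle the resource-limiting half. This requires root on a Linux machine; `unshare` ships with util-linux.

```shell
# Run ps in a fresh PID namespace with its own /proc mount.
# Inside, ps sees only its own process tree, not the host's:
# the process "knows nothing outside its own walls".
sudo unshare --fork --pid --mount-proc ps aux
```

Container runtimes combine several such namespaces (PID, network, mount, and so on) with cgroup resource limits to build the self-contained units discussed below.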
At Google we did a lot of research into this back in 2004, and in 2006 we worked with the Linux kernel community on a concept called cgroups, an early building block of what a container is. Again, the idea is that you have an operating system environment that knows nothing outside its own walls, its own self-contained unit. In 2013 we saw the development of Docker, and then Kubernetes launched in 2014.

I think this was covered before by Hunter, but the Kubernetes story is really fascinating. Internal to Google we have a system called Borg, which Kubernetes grew out of. If you ever want to Google it, we have a publicly available white paper, the Borg paper. We use Borg inside Google to support all of our applications: when we launch Gmail or Web Search, it all runs on this virtual set of machines. That's what you're really tapping into when you play with Kubernetes.

Another quick bit on Google Container Engine. There are a lot of big bullet points here, but we're running Container Engine in a number of regions; you can go online and check it out if you want to try it. It ties in pretty nicely with the other elements of Google Cloud, and you can take advantage of most of the pricing options we have for compute. I guess one of the big highlights is the boot time of these machines: if you go with Container Engine, it's around 30 seconds, give or take, and I'll throw up a little demo later.

Kubernetes adoption, I think this is interesting. This is what's happening in the industry, and Hunter had an awesome slide from Google Trends showing search volume for Kubernetes, just how many people are asking about it. We found roughly 2,000 projects based on Kubernetes. What's really cool about Kubernetes is that it runs on public cloud, obviously, but it can also run on-prem.
Or on private cloud, if you choose. Google's hosted, public version of Kubernetes is called Google Container Engine; we keep the K in the GKE abbreviation as a nod to Kubernetes. You can see all the brands here that really use Kubernetes and find value in it.

Just a quick one on Google Container Engine. Here's the general diagram of how Kubernetes more or less works on Google if you were to try it: you do a standard docker build, then a docker push, launch a Kubernetes cluster, and within that cluster you can launch your containers. We also have Google Container Registry, a public site at gcr.io, where you can store and pull your own images; that could be its own tech talk.

Maybe just covering one other broader bit: Kubernetes is obviously one part of a much bigger story. Once you build your Kubernetes cluster, if you need storage, or key-value pairs, or maybe a document-oriented NoSQL database, there are a lot of interesting options within the Google Cloud stack. With that, I just wanted to cover the high levels.

As a bonus, if the demo gods cooperate, this is what Container Engine looks like. Publicly, just go to cloud.google.com, sign in, and you can spin up your own clusters. If I have a second, I'll try a quick launch here. To launch a Kubernetes cluster on Google Cloud, you just click on create cluster, and I come up with a cluster name, hello. I can choose my region; we're in Singapore, so I'll go to asia-southeast1, that's just down the road. Then I can choose how many machines I want in the cluster. Let's just do three nodes.
Say I want auto-scaling: I go to more, turn on auto-scaling, it's beta, but why not? Minimum size three, maximum maybe five, and then create. So once I hit create, it launches three virtual machines on Google Cloud: it sends a command to our infrastructure to allocate three virtual machines, set them up, install all the packages, and get everything done. Once it's up, it'll look similar to what I have here with cluster one. When you hit the connect button, we give you some commands you can copy and paste into this cool little built-in SSH client we have on the web, and from there you can launch into your Kubernetes dashboard and really start learning.

So yeah, that's all I had. Any questions? We've got some stickers in the back there, you did a great job eating all the pizza, proud of you, and we even have some container stickers up front here, so we can do this later.

Question: when you hit that button to create a cluster, that three-VM construct, what's the hypervisor that's interacting with Google's hardware?

Sure, so the question is what hypervisor underpins launching those nodes, right? In my understanding, I typically associate hypervisors with VMs, and containers in Kubernetes aren't really built on top of that in the classic sense. The resource itself is built to be more efficient, so you don't have the CPU overhead of a surrounding VM or infrastructure to support those pods. We could chat later if you have a more specific question, because that gets pretty detailed, but the classic sense of hypervisors is more around VMs, right?
And one of the efficiencies of containers is that you don't really have that. In the classic model you have bare metal, a hypervisor on top of the bare metal, and within that hypervisor you drop in your guest OSes. With containers you instead get a much more optimized layer, in a way.

This may sound stupid, but today when I run Kubernetes myself, I still have to worry about the VMs and all that stuff, and then I do containers on top. Is it right that with Google Container Engine I just worry about the containers?

Yeah, so your question is more around who manages the actual capacity.

Then I just run kubectl and update containers, and I don't have to worry about it? Because in this scenario I'm still getting three virtual machines, right?

Yeah.

In theory I could do the same by going to Google Compute Engine, provisioning three machines, and starting Kubernetes myself. But then I have to install Kubernetes?

Yeah, of course, all that extra work. But here it's managed. And the cool thing is, with all of our compute pages, at the very bottom right next to Create, there's a link that says "command line". You can click on it and it will pop up a window with the exact gcloud command you can run from the command line to perform the same action as the page. From there you can copy it and maybe modify it.

Let me put it another way. In this scenario I pay for the virtual machines? Correct. As opposed to a scenario where I'd pay per pod. Yeah, you manage the virtual machines here, whereas there I'd manage the whole thing, of course.
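The console walkthrough from the demo maps to roughly one gcloud invocation, along the lines of what that "command line" popup would show. This is a sketch: the zone, cluster name, and the beta command group for autoscaling are assumptions based on gcloud of that era.

```shell
# Roughly equivalent to the console demo: a three-node cluster named
# "hello" in the Singapore region, with autoscaling up to five nodes.
gcloud beta container clusters create hello \
    --zone asia-southeast1-a \
    --num-nodes 3 \
    --enable-autoscaling --min-nodes 3 --max-nodes 5

# Fetch credentials so kubectl talks to the new cluster; this is
# essentially what the console's connect button hands you.
gcloud container clusters get-credentials hello --zone asia-southeast1-a
```

From there, the kubectl workflow the questioner describes (deploy and update containers, never touch the VMs directly) works against the managed cluster.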
Yeah, the billing here is based more on the VMs that you use as part of your cluster. Yeah. Last question.

Just a very short one, to specify what we're running with the three VMs, the three nodes: usually if you want to make it highly available you need at least three masters. So just to confirm, are these three basic workers? Is that the correct understanding?

Sure. So what does three mean: is it three masters, is it three workers? Great question. As we create the cluster, the number of nodes, the cluster size, is limited by the available compute quota, which may depend on the chosen zone. I could dig more into that; I'd imagine it's one master plus two workers. But if I look here, the dashboard runs on the master.

So Devin, when I've done this, the master is basically a managed service. You actually don't see it: Google brings it up, patches it, maintains it. The three that Devin is spinning up are worker nodes. You don't see the management server at all; it's a managed service as far as you're concerned. They'll patch it, they'll maintain it, they'll make it highly available. You only worry about your workloads, which go on the worker nodes.

So to your question: when you run a cluster, you always need worker nodes. He gave an initial count of three, but he also enabled the beta autoscaling, and that's to add worker nodes in case your workload starts bursting. They'll monitor it and actually start adding nodes to autoscale, but that's beta at the moment.

Yeah, exactly. So to repeat this back: we launched three nodes here, and each of those nodes is a worker node.
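You can see this split for yourself once kubectl is pointed at the cluster: listing nodes shows only the workers, because the master is managed and hidden. The node names below are illustrative, not real output.

```shell
# List the nodes in the cluster. On Container Engine only the worker
# nodes appear, since Google runs the master as a managed service.
kubectl get nodes
# NAME                                  STATUS    AGE
# gke-hello-default-pool-xxxx-node-1    Ready     1m
# gke-hello-default-pool-xxxx-node-2    Ready     1m
# gke-hello-default-pool-xxxx-node-3    Ready     1m
```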
This is worker node number one, number two, number three. There's a master node that orchestrates all of this behind the scenes; you don't see it here because it's the orchestrator.

So I have a question for you: can you talk more about the security between these nodes? How are they secured, and what about the traffic between them?

We may have to hold that one because we're running out of time and we've still got one more presentation. Can we take that and handle it afterwards? Sure. And any other... I'm sorry. No problem. Mark, could you please come up?