So for the user group to succeed, I have to say this before we start the talk, because I don't stay in Singapore: I'm going to be looking for people who pick this up and are super interested to take over the user group. Currently Vincent is running it with me as a co-founder, but we're going to be looking for a few more candidates. So if you think that you are interested and passionate about it, please let us know. The second thing is we need speakers. We need people who are working with Kubernetes, so I'm looking at all the guys that have done it before. Honestly, Devin, I don't know if Google wants to do anything, but if you're doing anything. Because this is a beginner type thing, it's completely fine to bring something that is maybe not in production, but something that you're trying to figure out, and then just come and talk about it, because this is a learning experience: what were the use cases, did you solve it in that particular case, which platform did you run it on, and how did you solve it? So basically, if you think you've got a topic, will you please let me know. We need sponsors to host these talks. So if your company is at all interested in hosting, like Honestbee just did, if you've got a venue and you've got an interest in giving back to the community, or you're in DevOps, can you also let me know. Speakers, sponsors, talks, anything else like that. You can reach out directly to me on the Meetup page. You can just contact me and let me know if you're interested, if you can contribute. Okay, so that was logistics. It's what I want from the community. Do you guys have any questions before we start? Because basically what I'm going to do is go through the cloud native landscape and try to break apart the pieces, so that you can understand why I got to Kubernetes at this point, and then we'll go through the hands-on. So we're all good to go. So who's heard of OpenStack? This is a lot more than I thought.
Okay, so in 2013, OpenStack was going to solve all our problems. If you'd met me in 2013, this would be an OpenStack user group. It was going to solve all the problems that were out there. OpenStack didn't solve them all, but it tried to do what Amazon did at scale, but privately. So if you know what OpenStack is, you're coming from an ops background. Around about the same time, 2013, Docker came out, and of course the first time somebody explained it to me, I said it's just a VM, because I didn't have enough context to understand that it's actually just some namespaces that the kernel is isolating. And only once I understood what containers were did I realize that the world was about to change. So people are running VMs today, and I'm not knocking VMs. I'm just saying that there's a quicker way of doing things through containers. So you go through the evolution of: I run my thing in a VM, then I learn about this great thing called a container, and it's much quicker to do things. So you're at that stage where I was, running Docker, and it's cool to run Docker on your laptop, but once you want to start running many of them together, you have to start solving the orchestration problem, running them at scale. Okay, it's the evolution: I'm running my app on a VM, I move it into a container, and now I want to run lots of it everywhere. So about two years ago, I went out and, as a technologist, I had to look through the stack and basically decide where. I've only got a finite amount of time that I can spend on technology, so where was I going to spend it? And at the time it's Mesos, Swarm and Kubernetes. The only thing I knew about Kubernetes is that it's a really weird word. So does anybody know the origin of the word Kubernetes? Is it the navigator, or the Greek name? It's the Greek name. It's the Greek word for the helmsman.
Now, once I tell you that it's the helmsman of the ship, you can see that the logo is actually the helm of a ship. So, keeping in the nautical theme, what Kubernetes wants to do, if Docker's whales are basically carrying shipping containers for code, is be the helmsman of the ship that is steering the containers. That's what it wants to be. So, to break apart the puzzle, the Cloud Native Computing Foundation has got this GitHub page where they've got these diagrams up. And the first problem is, if you're new to cloud native, there's a lot of stuff going on on this page. There's a lot of players, a lot of projects, a lot of stuff. If you're new to this world, just start at the bottom. It's infrastructure: everybody knows about Amazon, Azure, Google, all the infrastructure providers. The provisioning layer is this interesting, I'm going to call it infrastructure as code, this Ansible and Chef part over here, plus security. The runtime is where Docker made things really popular. So they're at that runtime engine over there on the left. But is everybody aware that Docker is not the only way to run containers? This is an important understanding: Docker made it super sexy and solved a lot of problems, but basically they took a tar file, figured out how to move it somewhere else, and then run it under the kernel's isolation so it could be contained. There are other players in town. Don't be completely enamored with just the one technology, you'll lock yourself in. There are good things about Docker, there are some negative things about Docker, but there are other players in the space, specifically Rocket (rkt) from CoreOS. It's another way. Yeah, but I'm talking about Rocket. Where did they go? Well, there's CoreOS, and there's rkt for the runtime. So let's assume now we've got our containers. The next thing, as I said, is I had to figure out how I was going to run this thing at scale.
So you run it on your laptop, you've got your nginx container spun up, it's all sexy, you've done the nightmare of NAT-ing with the port forwarding. And then after that, you want to figure out how to run the containers at scale. So now we're basically at this layer over here, and these are the options for you at this layer. It's a three-way race at this particular point in time, between Kube, Swarm and Mesos. The others are there, but they're not serious. Okay, this is a personal opinion; you are entitled to another opinion completely. But this is a Kubernetes user group, so the whole reason we're here is because we're talking about the orchestration of containers over here. Okay. The Cloud Native Computing Foundation, if these are all the players, has adopted these projects from different companies. So these are incubator projects within the CNCF. Kubernetes was the first that was there, and it was shortly followed by etcd, which is a distributed key-value store from CoreOS, and a key component for keeping state within Kubernetes. So I tend to follow the projects that join the CNCF quite closely, because they each tend to solve one interesting problem in distributed systems. To quickly call out a couple: Linkerd is an interesting one. gRPC is a protocol for microservices. rkt is an alternate runtime to Docker. Where's Fluentd? Fluentd over here is a logging solution. Prometheus is a monitoring solution. And there's a couple of others on this. So we're here specifically to learn about Kubernetes, but these other projects connect in and solve problems in the distributed landscape where you're going to be running applications in containers. Okay. So the outcome of tonight's session is: I can't always be around.
So what I'd like to do, instead of catching the fish and giving you the fish and you eat the fish and you go home and you never learn how to fish, is teach you how to fish by having you build your own cluster. So we're going to build a Kubernetes cluster on DigitalOcean, and long after this talk is finished, on the weekend, you're going to come back to it. And I want you to have your own Kubernetes cluster with a microservices application deployed on top of it, so that you can pull it apart, because the best way to learn is to actually have something running that you can pull apart. And because I've given you the instructions, you can tear it down and rebuild it. So if you break anything, don't worry. Just go back to the beginning, tear down the droplets and start again. When I started with Kubernetes, it was this much time to get the cluster up and this much time working with the application. It was an incredible pain to get a Kubernetes cluster up. It's actually gotten so good now that it's basically about this much time to get the cluster up, and I can spend all this time working on the application on this side. So what we're going to do tonight is this front part, where we're going to stand up the cluster and get it going. And this is what we're going to do on DigitalOcean after this. So I sometimes liken coming into one of these new projects to coming into a new country, where you need a guide that has to teach you the words that this particular country uses. Kubernetes has got its own set of words to describe things which you will know as other things. So you may know them as containers. Inside Kubernetes, we call them pods. They're slightly different, but all I'm trying to say is: in the one world we call them this, but inside this particular country, and you've all now got visas to enter Kubernetes, I'm going to teach you the terms and words for Kubernetes.
To do that, I'm going to tell you a children's story. So this is a website, a link from Deis, who, in case there were some people here that missed it earlier, Deis was acquired by Microsoft earlier this week, but they do some stunning stuff in the Kubernetes space, so it's quite important. And what this illustrated guide does, and I'll put this link into the meetup notes so you can go through it, is basically take you through the Kubernetes primitives to make it easy for you to understand what Kube is. If you don't understand the words that describe the primitives inside Kubernetes, come to this website. You've seen this slide before? It didn't? I don't actually read it. What? Okay. I drew it on a Microsoft Surface. You asked an earlier question, so I'm going to give you a sticker. I can't remember what it was. You can put that on the Apple logo if you want. Okay, so I'll give you the reason why. Has anyone else got a Surface? Okay. The top gets super, super hot. So when I put stickers on the top, they actually start bubbling. It gets too hot, so the stickers basically just slide off. So if you've got a... they put the cooling in the top, so the stickers that cover their logo slide off. Okay, so no stickers for him. He's giving me a hard time. Nothing for you. So this guy over here told me I'm not supposed to stand up, but I hate sitting down. So you'll notice there's a mic here, because I want to go to the whiteboard and use the whiteboard. Oh, you're right, you gave me the definition of Kubernetes, right? So there it is. Sorry? It's in the Wikipedia. Yes, it is. Okay. I'm going to go through this quickly. I'll give you the link and we'll get on to building the cluster. So I'll be talking about primitives that we use in Kubernetes. Now, Kubernetes is very much an ops-focused tool. People who do ops like Kubernetes because it gives... So who's using Ansible here? Infrastructure as code. One, two.
Okay. Infrastructure as code. Basically, Ansible allows you to use these files to describe how you want the application to be defined. In code, you say: I want it to look like X. Now, this is the first way I'll describe Kubernetes. Kubernetes is infrastructure as code for your containers, for your cloud-native apps. It is a way of using a manifest file that describes exactly how you want the app to run, but on a cloud-native infrastructure, on top of Docker. So if you're familiar with the whole infrastructure-as-code, Ansible approach to things, think of Kubernetes like that. I'm going to use three ways to describe Kubernetes. The first one is that it is infrastructure as code for your cloud-native applications. The second is that it's lifecycle management for your containers. Right? So for your businesses, whether you're students or in business, the secret sauce for your companies to make money is inside the containers. The developers have gone and written something, and it's sitting inside a container. To make money, that has to be put onto a platform so it can run and make money for the company. Kubernetes is that platform. It is the lifecycle that wraps that container: you get it out there, I'm going to expose it as a service, people are going to interact with it, and at some point it's going to die, and I'm going to replace it with another container that's got a patch or an updated feature. So: infrastructure as code for your cloud-native apps, but also lifecycle management around your containers. Sorry, I'm going to get up and use the whiteboard, because I can't do this sitting down. I'm going to quickly do the most important primitives for Kubernetes. You've gone and written a container that contains something that you've written, and I'm not talking about stateful applications now, I'm talking about stateless, okay?
You can launch into a whole discussion about stateful versus stateless, but the evening is not long enough and I haven't got that much energy left in me, so we're going with just stateless at the moment. So you've written a container, and it is a thing of beauty, and it makes money for you or your company, and you've checked it into a registry. Kubernetes wraps this with primitives to make it manageable. So this can be whatever it is. The first thing that you've got to know is that it wraps it in this concept of a pod, and what a pod basically does is describe: where did I get the container, and how do I want to run it? Characteristics about how I want to run this particular container. So you start to understand the manifest file that describes how I want this to run. The pod defines how I want to run the container, and things inside there like health checks, and Vincent, help me out if I skip anything. There are things like health checks. There are ports that have to be open for it to run. There's storage that it needs to have attached. So everything you need, compute, network and storage, like if you remember the VM world, the basics of compute, network and storage, the pod definition holds the basics for the container. So: the ports that need to be open, the storage that needs to be attached. Where it starts to get different is health checks. The developer who wrote this knows how to check the health of it. They have to work with a Kubernetes operator and tell them: how do I check the health, how do I know whether this is alive or not? Because I told you Kubernetes is going to do lifecycle management, and to do lifecycle management it must know whether it is healthy, or whether it is sick and needs to be replaced. So there are going to be health checks that it needs to know about the container as well. And that is the pod definition. The next thing I need to know about this is: how many copies do I need to run?
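To make that concrete, a pod definition of the kind just described might look something like this in a manifest. This is a minimal sketch, not anything from the talk: the names, image, port and health-check path are all made-up placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                # hypothetical name
spec:
  containers:
  - name: my-app
    image: registry.example.com/my-app:1.0   # where did I get the container
    ports:
    - containerPort: 8080     # the port that has to be open for it to run
    livenessProbe:            # the health check the developer defines
      httpGet:
        path: /healthz        # assumed health endpoint
        port: 8080
    volumeMounts:
    - name: data
      mountPath: /data        # the storage it needs attached
  volumes:
  - name: data
    emptyDir: {}              # simplest possible storage for a sketch
```

Compute, network and storage all live in this one definition, which is exactly the point: the pod holds the basics for the container.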
Okay? These are replication controllers. So you can have one copy running, or you can have multiple copies running, because you may want it to be highly available behind some sort of load balancer. So the pod definition is which container am I going to run and the lifecycle around it. The replication controller is how many copies I want to run of it. So I've used words like replication controller, I've used words like pod. The next one is a service, and quite frankly, it's the worst name ever, but once you get your head around what a service is actually trying to do, it becomes extremely beautiful and elegant how Kubernetes solves this problem. A service is a stable VIP front end for your containers. The first thing I had to get my head around is that these guys are going to live and die. They can live for minutes. They can live for weeks. They can live for months. But they're going to die, and they're going to be replaced by the next version. If the IP address changes every time this happens and you spin up a new one, the people interacting with your app are going to have to find it all over again. Kubernetes has this concept of a service, which is basically a stable anchor point. When you come into the app, you hit the service. The service will redirect you to the correct container, the correct pod. This is so important. It's the basis of microservices. Clients who want to use your app only know about the service, and this is a stable IP. Once it's defined, it stays there. So any time they want to use it, whether they're talking to version one, version two, version three... and now you start to understand how Kubernetes does rolling updates. Users are hitting the service and they're being redirected. So I'm running version one. There's a patch. I decommission version one, I spin up version two. They still hit the service, but they're going to version two of it. Okay?
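A minimal sketch of those two primitives side by side, with made-up names throughout. Note that the Service's selector below matches only the `app` label, not the version: that is what lets clients keep hitting the same stable anchor point while the pods behind it roll from v1 to v2.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app-v1
spec:
  replicas: 3                 # how many copies I want running
  selector:
    app: my-app
    version: v1
  template:                   # the pod definition nests in here
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app                # the stable anchor point clients know about
spec:
  selector:
    app: my-app               # matches v1 pods today, v2 pods tomorrow
  ports:
  - port: 80
    targetPort: 8080
```

Spinning up a second controller with `version: v2` labels and tearing down the v1 one leaves the Service, and its stable IP, untouched.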
So the service is one of those super, super important primitives that Kubernetes gives you. So we've spoken about pods, we've spoken about replication controllers, we've spoken about services, and that's all that you need to know for the rest of the workshop that we're going to do. One minor addition to that: within a pod, there can be more than one container. Yeah. So that can be used. It's not used for high availability, that's what you'd be doing with the replication controller, but it is used for support services. So pretty much what Hunter has gotten us into is best practices for pods: do you run one function within a pod and have it run cleanly? We're still talking about microservices, right? Because now we're basically saying that we should do one thing and do it well. We shouldn't clutter it with too many functions. So you have to decide for yourselves: do you want to put more than one in? And sometimes there'll be a use case where you want to run more than one container, containers that are somehow related. There can be a cache lookup and then a web server container, and you can put them together. So this one does a pull, and then it does a publish. The only other one that I've seen is when you put the main function alongside, and our friends at Linkerd call this a sidecar. The sidecar does some sort of complementary function to the main one. So Linkerd basically puts itself in here and does a service mesh. So pods can have more than one container, exactly as Hunter said, but you've got to figure out which approach works for you. I'm currently in the microservices camp: basically one function, plus whatever sidecars you need to support the main function, and let it do its thing, because this means that the guys who look after this are decoupled from the other guys. So now we've got microservices. I can upgrade this one to whatever version I want without impacting all the other guys around me.
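Here is a sketch of the sidecar pattern just described, assuming a hypothetical log-shipping sidecar next to an nginx main container. The image names are assumptions for illustration. The two containers share a volume so the sidecar can read what the main container writes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web                 # the main function: one thing, done well
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper         # the sidecar: a complementary support function
    image: fluent/fluentd     # assumed image; any support container fits here
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    emptyDir: {}              # shared between the two containers
```

Both containers are scheduled together on the same node and live and die together as one unit.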
And now we've just explained microservices: loosely coupled, but each team can chase whichever function they're working on. But there are cases where you put more in. Can you talk about... Sorry, I didn't know if you covered the fact that pods are the unit of scheduling. So if you want to scale, you cannot... I mean, if you put multiple containers within a pod, you cannot scale them individually. A pod is the unit that you scale. So if you have a certain application and you want to scale out that application, you can only do it at the level of a pod. Okay, that's another one. Oh, and that also brings me to the other super important point. I said that the replication controller is the number. So this is how Kubernetes does scaling, right? The replication controller dictates how many copies you want, and then it starts spinning them up for you across the nodes. That's how it does scaling out. And we forgot the very important aspect of introducing the master, which actually runs the control plane that looks after all of that. And it's only a single master at the moment. You can run it as HA? Yeah, but I mean, there is an election that happens when there is one. Yes. What about the thing called a deployment? Because when I was reading through... Yeah, I didn't want to confuse them with that this early on. Okay. What you've got to get your head around, once you figure out all these primitives and you write the manifest file, is that they actually include each other. So the pod definition is there, then the replication controller, and then the deployment. As long as you know that they are nested inside each other. Oh, and this is YAML. Who loves YAML? Hands up. Okay. Indentation, whitespace for the win.
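That nesting, the pod definition sitting inside the controller object, looks like this in a Deployment manifest. A minimal sketch with placeholder names; the speaker deliberately deferred Deployments, so treat this as a preview rather than part of the workshop. (The API group/version shown is the modern `apps/v1`; at the time of the talk it would have been a beta group.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                 # the replication knob lives at this level
  selector:
    matchLabels:
      app: my-app
  template:                   # ...and the whole pod definition nests inside
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0   # placeholder image
```

Each level of nesting is another step of indentation, which is why the YAML whitespace matters so much.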
The cool thing is, they're coming out with a lint for Kubernetes, so that you can take the manifest file, put it in, and it'll tell you roughly where your indentation is wrong. These primitives are indented at different levels within the manifest file. Yeah, let's not go there right now, because what I'm trying to do is keep it really simple, so that tomorrow it'll make sense, and when you get to the deeper stuff, you can just start absorbing it. Good question. At what number of containers do any of these primitives start to make sense? For example, if I have just one container I want to run, it doesn't make sense to run the entire orchestration for one container. So what is the number where it makes sense? Two machines? Three? What is the limit where it really starts making sense to have an orchestration engine? I'm going to answer that in two ways. There is a ratio of engineers to work, right? And so here is my third way of explaining what Kubernetes is. The more I worked with it, the more I realized it's an operating system for your data center. You guys are going to look at me like I'm insane, but think about it: you put Kubernetes onto your systems. I install Kubernetes, and it's done exactly like this: there is a master controller, and then all these nodes in your data center become worker nodes. As an admin, you interact only with the master, and you actually don't care where things are running. You just put the workload down, and Kubernetes will figure out where to run it. This is very much like an operating system today, where you work with a single machine and the operating system controls the memory, the disk, the networking, and I'm interacting with it and it is figuring out where things go.
Think of Kubernetes very much as an operating system for your data center. Once you install it, it creates a level of abstraction between yourself and the systems that you're going to manage, and force multiplication comes into effect, because you have lots of nodes in your data center. Now, when you run Kubernetes and you drop the workload on it, you actually won't be SSHing into boxes anymore, because you don't care about what's going on in the box underneath. You only care about the app. So part of it is scale: there's a certain scale where Kubernetes becomes relevant to start using. And quite frankly, part of it is manageability. For me, it makes containers manageable. Because it is infrastructure as code, I get the benefit of knowing that when I move it between clusters, because I've defined it in a manifest file, it's going to look exactly the same between dev, test, UAT and prod. So for me, it's not a question of what size makes it applicable. It becomes: when is my team ready to start? When are they becoming serious about running cloud-native apps? When do I have a limited number of engineers but multiple environments, and I want the exact same manifest to run between them? So I don't know if that was a good answer, but I'm going to go with that for now. Unless anyone else has got anything else. The question is about the use case scenario: do I run it as a service for the entire enterprise, where everybody is using Kubernetes as a service, or do I deploy it per project? If your project implies 10 containers as microservices, maybe it will make sense. But if your project requires only three containers to run on three separate machines, is it overhead to run it? I can do the same infrastructure as code in Ansible and deploy a single container. So where is the Kubernetes gain?
Does it make sense to implement a Kubernetes cluster for only three pods? No, I wouldn't say that. Exactly. That's probably something you need to define as a team or as an individual, because everyone has a different point where the amount of effort to manage Ansible, or Chef, or anything else, and have it handle the payload over multiple nodes, becomes more work than deploying Kubernetes and getting it running. And in some cases, running it on Google Cloud or on Azure, where they manage all of the infrastructure for you, becomes much easier. There's much less overhead to set up a GKE cluster. On the flip side, it may be a smaller environment, but it can make sense to spin nodes up on demand and swap workloads between them. So I'll also say, I've hinted at the fact that because it's manifest-file driven, once a cluster is running, you can take your manifest file, develop it in-house, and then move it anywhere else there's a cluster. So who are all the providers who provide these clusters? Google. Azure. Amazon? No, there you have to build your own cluster. So basically I'm slamming Amazon, but some of the cloud providers offer a Kubernetes API, a native API. So once you have your manifest file, you actually just sign up for the Kubernetes service, you hand over the manifest file, and it will spin it up. The native supporters are Azure, believe it or not, IBM Bluemix, and Google. On the Amazon side, at Amazon Summit, Amazon recommends CoreOS Tectonic as the official way of running Kubernetes; they actually have a comparison. Okay, but you saw that they recently had another Kubernetes guy join them. Yeah. It's not really a Kubernetes guy, it's more a Docker Swarm guy. Okay. That's for a reason. Behind the open source technology there's always competition.
There's always a vendor backing. So what tells me that Kubernetes is going to be mainstream is that you've got these main providers supporting it as an API, but they're supporting Docker Swarm at the same time as well. Google just doesn't make many big mistakes, apart from Google Wave and a few others. They actually have a number of significant great things, Bigtable and so on. So actually, believe it or not, a number of these cloud-native projects that make it into the CNCF have their origins in Google. gRPC is Stubby inside Google. Kubernetes is Borg slash Omega. Prometheus, the monitoring, was Borgmon; engineers who went to SoundCloud then put it out. Carry on. Carry on doing the good stuff. Okay. So we've spoken about the primitives that we wrap your app in for lifecycle management. So this is the infrastructure, and that board is in the way. And Kubernetes, and this is what we'll be building in DigitalOcean, is we'll be doing one master and two worker nodes. So I already explained to you what a service is, right? The service is a stable anchor point through which you enter into the containers that are running inside a pod. So exactly as Hunter said, you can have more than one container running in a pod. The microservices approach to programming says that you have one, do it well, and have some sidecars with it. This is exactly what we're going to be building. And all that you have to know is that there is the concept of a master and these worker nodes. When you sign up with Google or Azure, you actually are out of the infrastructure game. You actually don't even know about this stuff. There's just an API running on the master, and you're just giving the manifest file to the API, and it is handling this for you. The only things the cloud providers want to know from you are: how many of these nodes do you want, and do you want to do scaling out or not?
So if you want to get out of the infrastructure game, if you want to get higher up, you want to get to the point where you're actually just taking your manifest that has your secret sauce and giving it to the operating system controller, and it's putting it out there. Now, some of you will be running this in-house. Most clients are thinking of running something in-house and then splitting their workloads between other cloud providers. And that's where they're making the choice about whether there's a native API endpoint for them to connect to. Okay. So I've spoken a lot now. I'm about to kill you guys with CLI after this. We're going to just be doing cut and paste, cut and paste, cut and paste on DigitalOcean after this. So if you guys want to sleep, that's fine. It's all in the gist that you can follow. A lot of this stuff can actually be run at the same time. You can cut and paste these commands, but I've actually separated them out, because I want to allow everybody time to follow, to select the command and then paste it in. Okay. So those that have got DigitalOcean accounts and SSH keys ready to load, we're going to do the workshop part of it now. Do we need a short break, stretch legs, air? Do you have free coffee? I don't know about coffee, but soft drinks, sugar, yes. Do we want to do that now and then do the workshop part? I think there was a lot to take in. What do you want to do, like 10 minutes? Yes, let's just take 10 minutes for everybody to stretch legs, maybe get some caffeine or sugar, and then we'll come back and we'll hit it. We can get some coffee, like pre-mix if you want coffee, if you really need coffee. So we'll be back at basically 7.30. Right. I'll get one.