Hi everyone. My name is Amy. I just joined Heptio on Friday, so today is my fifth day of work, which is pretty awesome. Prior to this, I started in the Kubernetes and container space in March of this year, and I graduated college in January of this year. So it's been a crazy, crazy whirlwind, y'all. This talk is my path and my journey of understanding Kubernetes and containers and this whole crazy world, in which Helm was a major component, which I'll talk about at the end. I'll note that there are some Google-able keywords on the bottom left of the slides; I'll post them up on my Twitter at some point. They're great if you want to delve further into each topic. So again, this is geared toward beginners, but if you're interested in learning how to teach others about Kubernetes, this is also a great talk. So in infrastructure, your main goal is to always prepare for failure, right? There is no trust, in the sense that I don't really want to trust my app developers and I don't really need to trust my app developers, and everything in the system needs to be able to fail, right? So the question is: how do we do this? Kubernetes is one of those great tools that allows this, but at the base of it all is containers. And the question that a lot of web app developers start with is: how do you run your application in the cloud? The core concept of the cloud is that everything is ephemeral. Everything dies. Things are complicated. Networks go down. It's the whole crazy field of distributed systems, where there are tons and tons of papers, right? So what we want to be able to do is build trust in services that we always want to have available, right?
Like if you have a web app, for instance, which is the most tangible example I usually use: you always want to be able to access websites with low latency, and you just want them to work, right? Application developers usually don't really need to think about how the deployment process works, so this talk will guide you through all of that, from a web app onto Kubernetes. There are lots of issues we need to think about as infrastructure engineers or DevOps folks. We need to expect things like high and unexpected traffic, application failures, availability, and efficient resource utilization. For instance, if you work at a massive company like Google or Microsoft or Amazon, if you save even 1% of resource utilization, that's a bajillion dollars. It's a lot of money. Bajillion is a very accurate term. So usually, when I have an audience of mainly web application developers, I start with a base building block: what is a container? A flaw I find when people teach this is that they start off with "a container is a Linux process," and I feel like that's the worst way to approach it. Yes, it is, but you don't need to tell people that at the beginning, right? What is a Linux process? Can people really, truly explain that correctly when they're first beginning? So what I like to start off with is: a container is a baby computer inside another computer, where the baby computer is the container and the other computer is a server. So why would you want a baby computer inside another computer? The idea is that the baby computer is easy to transfer around, there's resource isolation, and it is also able to encompass your application environment, which I'll talk about next.
A common conceptual comparison is a virtual machine, but containers are very different in the sense that they're much more efficient; there are awesome papers describing the differences between the two. Ultimately, containers are a way to abstract away your infrastructure. So here is my pseudocode-ish example, which is another way to lead web application developers into deploying their applications onto a container. Let's focus on the image on the left: we have our server, then we have our container, and we have a bunch of configuration in our Dockerfile. On our container, I always want to deploy our web application on Alpine, I want it to be on Python version 3.7, I have a set of scripts, and I want my web application to consistently be in the same environment, right? The idea is that locally you have a bunch of configurations and things like that, and shit just goes wrong when you put it into production if your environment is not consistent. Containers provide that consistency for you. On the right here is our configuration file. You'll see that we have the Python Alpine image. In the RUN line, there are a bunch of setup scripts. We copy the local application into our container, build that into our image, and then expose port 80. After that, what we have is the container, but that's not enough, right? It's complicated. Our container is going through an existential crisis, poor little container. It's asking questions like: Where should I live? That's scheduling. How do I talk to other containers? That's networking. How do I talk to the world? That's routing. And what happens if I get sick?
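A minimal Dockerfile along the lines of the one on the slide might look like this. This is a sketch, not the demo's exact file: the script and file names (`requirements.txt`, `app.py`) are placeholders for whatever your app actually uses.

```dockerfile
# Start from the Python 3.7 image built on Alpine, so the
# environment is identical everywhere the image runs.
FROM python:3.7-alpine

# Run setup: install the Python dependencies the app needs.
# (requirements.txt is a placeholder for your own setup scripts.)
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Copy the local application code into the image.
COPY . /app
WORKDIR /app

# The web app listens on port 80 inside the container.
EXPOSE 80
CMD ["python", "app.py"]
```

Anyone who builds from this image gets the same Alpine-plus-Python-3.7 environment, which is the consistency point being made here.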
So let's talk about failure recovery, and there's a whole bunch of topics this can expand into, right? And no one really wants to manage all this on their own. So what happens? Hey, we're at KubeCon. So: Kubernetes, of course. Who would have thought? Kubernetes is a container management platform; that's why we're all here, right? The way I like to describe it is that we have a bunch of super fancy abstractions to organize baby containers, and this is the answer to all the questions we had before. So these are the basics of everything. I'll try to run through them super quickly so that we can get to the Helm demo, since this is a Kubernetes conference, and maybe most of you probably know about this already. Some containers are tightly coupled together, so we need an abstraction to schedule them on the same server together: they want to be co-located. We need more ways to organize containers, and that's where pods come in. Pods are essentially the scheduling unit; they're the basic unit of Kubernetes. A pod contains one or more containers. Each pod is given an IP address within the cluster so it can be reached; note that you cannot reach this IP address from outside your cluster. Pods can be confusing, but what you want to know is that you rarely ever have more than one container within a pod, due to scheduling issues: if something happens, you can't really control which container gets killed, and that's the main reason you don't want more than one container within a pod. Next, what we have are deployments. A deployment is a group of pods, and here is where we get the concept of actual state and desired state. Deployments help your pods reach your desired state. For instance, you can specify the number of pods running in your cluster.
So let's say, for instance, we want three of the same pod running in our cluster at all times, right? That's what we call replicas. Say one dies or you kill it: the deployment controller will make sure another pod is spun up so that you always have three pods. That's the idea of desired state. No matter what's happening, your deployment controller will always work toward the state the deployment is defined as. So again, say you have three replicas: if you kill one, the deployment controller will spin up another pod. Now we have another organizational concept called services. A service refers to a group of pods or deployments; it has nothing to do with desired state. So let's pose this problem. Recall that each pod gets an IP. When pods die or get killed, their IP can change, so it's not a reliable IP to always hit. Say you have one group of pods, a front end, and another group of pods, a back end. They need to talk to each other and keep track of each other, and you don't want to do that by relying on any single IP address, because again, those aren't dependable. So what services do is define a group of pods and a dependable endpoint for communicating with them. For instance, your front-end service can hit your back-end service, and they can transfer information back and forth between these groups of pods that work together. And the last concept is how we route traffic from the world onto our cluster. This is called an ingress controller, and it's how traffic from outside the cluster is routed inside the cluster. What it does is route traffic to internal services based on things like your host and your path.
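The deployment and service described above can be sketched as two manifests. This is a hedged illustration in the shape the talk describes, not the demo's exact files; names and versions follow the talk's later example (`nginx:1.13`, label `app: nginx`).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3              # desired state: three pods, always
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx         # every pod in this deployment carries this label
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx             # one dependable endpoint for all pods with this label
  ports:
  - port: 80
```

If a pod dies, the deployment controller replaces it; the service keeps routing to whatever pods currently match the selector, so callers never depend on an individual pod IP.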
In my demo, I'll actually be routing to a port rather than an endpoint, but I'll have another diagram to show you that. So let's do a summary run-through, and then it's demo time. Here we have the internet, and what we want to reach is our ingress controller. We'll hit an IP or a domain or whatever, and then say we hit endpoint foo: it goes to service A, which serves back whatever service A wants to give you. Same for endpoints bar and baz: they hit services B and C respectively. Within our services, again, we have deployments and pods, and the deployments are in charge of making sure there's always a desired state being worked toward. So the next thing we can talk about is: how do we version control a deployment? How do we keep track of each deployment that we have? Say we have a specific deployment with, say, nginx 1.12, and then the other thing we want to add is a new ingress endpoint. We want to be able to couple those resources and changes together. And this is where Helm comes in. This is actually the tool I used for all my deployments when I first started, because I thought the existing tools were a bit confusing. It can help you manage each of your deployments and also the cluster state. So: Helm, yay. I'll do a cursory overview. This is what's called a chart, and a chart basically defines a group of manifest files. That is the state of each deployment that you have, and you can define a lot of metadata there, and it's super awesome. And this is how we structure a chart; this is information you can look up in the documentation.
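For reference, the chart layout being described follows the standard Helm structure. The directory name here is taken from the demo; the rest matches the Helm documentation's conventions.

```
part-one-helm/
  Chart.yaml        # chart metadata: name, description, version
  values.yaml       # configuration values plugged into the templates
  templates/        # the manifest files that make up one release
    deployment.yaml
    service.yaml
    ingress.yaml
```

Everything under `templates/` gets released together, which is what gives you the coupling of resources the talk is after.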
But we have the chart and — sorry, that should say "templates" — then we have all of our templates under there, with all of our manifest files. And then all I do is helm install, and I can show the different deployments I have out with helm list. We can do things like rollbacks, and we can do linting, all of which I'll show you in a second. So this is our demo preview. I wanted to show you a picture of what's going on before we actually look at any YAML, because I'm a very visual person: I need to be able to see what's going on. In my demo today, I have a Minikube setup for my cluster. What we're going to do is hit that endpoint — I think it starts with 192 — and that's essentially the IP address where the internet comes in, right? That will hit our ingress controller. On our ingress controller, I want to expose port 443, and our ingress controller will be called nginx-ingress. That will then route traffic to our nginx service, named nginx-service — pretty imaginative name. Port 443 routes traffic to the exposed port on the service, port 80. And I'll explain the selector in a second. Inside our service, we have a deployment of three pods right now, and on our container we're exposing port 80. And on the pods, we have what's called a label — that's the squiggly-line thing. Our label is app: nginx. Essentially, what happens with the service is that it uses the selector to select everything with the label app: nginx and routes traffic to it. So that's the relationship between selectors and labels: selectors select things, and labels label things.
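The routing picture above could be written as an Ingress manifest roughly like this. This is a sketch against the Kubernetes API versions current at the time of the talk (Helm 2 era); the resource names follow the demo, and TLS on port 443 is assumed to be handled by the nginx ingress controller itself.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  # No host or path rules in the demo: all incoming traffic
  # goes to a single default backend.
  backend:
    serviceName: nginx-service
    servicePort: 80        # the port the service exposes
```

Traffic arriving at the ingress controller on 443 is forwarded to `nginx-service` on port 80, and the service's selector fans it out across the labeled pods.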
And when you have the selector, it'll route traffic to the things the label corresponds to. Pretty simple, right? So that is the image I want you to keep in mind as we go through the manifest files. Something I found interesting was that every time I deployed a deployment, I just used helm install, and the reason was that kubectl commands were kind of confusing to me when I was first starting. I just wanted to hit something, have it work, and not have to worry about the order in which I deploy my resources. Let me go ahead and mirror my screen right now — one second so we can go into demo mode. Okay, mirror displays. Cool. I don't know how to exit this part out; I'll just hide it there. Okay, let's take a second and hope the demo gods are with me today. So I'm going to do an ls. We have part one, where I'll do a quick Helm run-through of all my manifest files. So let's go into there: part one. Here is the layout of what's going on. At the very top, under our part-one Helm directory, we have our Chart.yaml, so let's go ahead and cat that: "Amy's awesome demo," description "version control your clusters." This is the metadata describing each deployment, right? We're going to ignore values.yaml for now, because I'll cover that in part two. So let's cd into templates, and this is where all my manifest files live. Let me clear here. Oops. So here we can see that, as you remember, we have our deployment, we have our ingress controller, and we have our service. Let's cat out the deployment, and I'll show you that it's exactly like the picture I had before. Let's do a quick breakdown. What we can see here is that we have our Deployment; that's our resource.
It is called nginx. Then we want three replicas, which means we want three pods up and running at all times. We're labeling this deployment app: nginx, and we have our container with the image nginx:1.13, exposing port 80 on our nginx container. So that's exactly what I described before in the image. Let's cat our service. This is the next layer: we have our deployment, and now we want our service to group these deployments and pods together. Cool. So again, our resource is a Service, and we name it nginx — I probably should have just called it nginx-service, sorry about that; that'll happen in part two. We have our selector, and the label it selects on is app: nginx. What we're doing is routing the traffic to port 80. So that's our service. Let's cat our ingress. Cool, once again, same deal. We have an ingress, and we're naming this resource nginx — I'd like to add "-ingress" to it, but again, that's in part two. What we're doing is routing traffic to our nginx service, routing it to port 80, and exposing port 443 on our ingress. So let's go ahead — fingers crossed, this will work, I believe. So: helm install part — oops, wrong one — part-one-helm. Yay, cool. So all I had to do was helm install part-one-helm. In part three, I'll show you the alternative: the pains of doing this manually with kubectl. And what this shows — our release got a random name, "filled-a-packet" or something. I always like to see the release names, because Helm randomizes them and sometimes they're really funny. Cool. So that's there, and let me see if we can actually hit this endpoint. I think... okay, cool. Let's do HTTPS so it doesn't yell at me.
And then the port — yeah, 443. Ah, something happened. Oh, wrong port. Yeah, 443. Yay, "Welcome to nginx!" So we're able to access this via the internet. Okay, cool. Now, the other cool thing I want you to notice is just how fast this all is, right? So let's do helm — I think the command is delete, I hope. I'm going to delete this deployment, and it's going to tear everything down for me in a second. Cool, yeah. Let's do kubectl get pods, and we can see things terminating. You can see everything in Terminating status right now, and we know Helm is working: it terminates all of our pods for us with one command. So let's cd into part two, and this is where I want to talk about the values file. Let's cat templates/deployment.yaml. Okay. What this gives us is configuration variables, right? You don't want to edit the actual manifest file itself every time; you want to just plug in these configuration values. This is a really simple example of changing each deployment. So whatever the value for scale is within our values.yaml file — let me highlight that — is what gets filled in here, and whatever value we have for tag in values.yaml will be placed there. So let's cat values.yaml. Cool. Here we can see we're setting the scale to three, and we want the tag for the nginx version to be 1.13. Let's clear. So let's do helm install part-two-helm. Cool. Let's double-check this and do kubectl get pods. We can see that, yep, we have three pods up and running. And then what I want to do is kubectl exec with our pod name, because I want to double-check that this is the right version — I think I said 1.13, right?
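The templating being described looks roughly like this: the deployment template references values, and values.yaml supplies them. The value names `scale` and `tag` are the ones from the talk; the surrounding fields are a trimmed-down sketch, not the full demo file.

```yaml
# templates/deployment.yaml (excerpt)
spec:
  replicas: {{ .Values.scale }}         # filled in from values.yaml
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:{{ .Values.tag }}  # nginx version comes from values.yaml
---
# values.yaml
scale: 3
tag: "1.13"
```

Changing a release then means changing values (or overriding them with `--set`), not editing the manifest itself.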
So: -v. All this is doing is exec'ing into that pod and running that command. Cool. What we have is the latest 1.13 version, which is nginx 1.13.7. Now let's do some upgrades and rollbacks and things like that, to show you how easy it all is. Once again, note that all this stuff is coupled together, right? One version is one set of manifest files. Okay, I'm going to clear. I have a little cheat sheet here in case I forget. What I want to do is a helm upgrade with --set, and I want to set the values you saw before: scale and the image tag. So let me do --set scale= — let's go crazy and say nine. Okay. And then the tag — let's say 1.12. Oops, what happened? Okay, tag=1.12. Let's, I don't know, revert back for some reason, because of reasons. Actually, let me do helm list, because we also need to specify the release. Okay. So: tag 1.12, our release name, and then we want to upgrade using part two, right? So, part two. Oh, thank you. Sorry about that, good call. Okay, release. Oh, okay — thanks y'all for the help, that's awesome. Okay, cool. Now let's do kubectl get pods again. We have some terminating. Let's see which one's the youngest — nine seconds. What I want to see is that this is version 1.12, right? So let's do kubectl exec with the pod name — this is the nine-second one, so it's a new one — then: -- nginx -v. Let's see. Oh, I forgot — let me see what I did. Hold on a second. So: kubectl exec nginx --. Okay, let's do some live troubleshooting here. Let's see, get pods... oh, thank you. Live coding. kubectl exec, then our pod name, then -- nginx -v. Okay, let's see. Thank you, everyone. No. Okay.
Sorry about that. Okay — there you go. So now we see that we've upgraded to 1.12, and we should also see nine pods up and running. kubectl get pods — cool, nine pods up and running. So now let's do a rollback. What I want is helm rollback, back to revision one. Let's do helm list again to see the revisions we've had. So: helm rollback, then our release, then revision one. Cool, the rollback was a success. So hopefully now we can check that we're back on version 1.13 with three pods in our deployment; this rolls back to our previous version. Cool. Let's do kubectl get pods again. We can see a lot of pods terminating — this is, again, getting back to our desired state. Let's pull up this pod name and exec into it, and I'm going to make sure the command is correct this time. Cool, we're back on 1.13, and we should have, again, only three pods up and running. Cool. So that was our demo of Helm. Let's go see the kubectl alternative. I'm going to do helm list again, delete this deployment, and show you the pain of releasing each resource on its own. So: helm delete. Give it a second. Cool. Let's go into part three, kubectl. I wanted to show you the alternative, where we deploy each resource on its own. Let's do kubectl apply -f, and actually, let's do something out of order and see what happens. We want to deploy the service: kubectl apply -f service.yaml, then the deployment, and then we also want to deploy the ingress. If I had created a secret, for instance, there are certain orders you need to do everything in.
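Putting the upgrade-and-rollback sequence from the demo together, it looks roughly like this. These commands need a running cluster; `<release-name>` and `<pod-name>` are placeholders for whatever `helm list` and `kubectl get pods` show, and the flags follow the Helm 2 CLI of the time.

```shell
# Override values at upgrade time: nine replicas, older nginx tag.
helm upgrade <release-name> ./part-two-helm --set scale=9 --set tag=1.12

# Verify that the rollout picked up the new desired state.
kubectl get pods
kubectl exec <pod-name> -- nginx -v   # should now report an nginx/1.12.x build

# Roll the whole release back to its first revision in one step.
helm rollback <release-name> 1
kubectl get pods                      # back to three pods on 1.13
```

The key point is that the rollback reverts the deployment, service, and ingress together as one revision, rather than resource by resource.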
So you can see already that we need three commands to do the same thing Helm did in one. The next thing I want to do is quickly set the image to 1.13. Let's see how to do this: kubectl set image deployment/nginx-deployment nginx=nginx:1.13. Oh, I think it's just nginx. Sorry about that. Okay. And then the next thing I want to do is scale my deployment to nine replicas — currently we only have three. So: kubectl scale deployment nginx --replicas=9. Okay. So you can see already — let me scroll up — that I've done, let's see: one, two, three, four, five commands. And the other thing is versioning: how are we able to version this? All we did was release each resource on its own; there's no coupling of resources, no releasing the deployment together. And that's the value-add and benefit that Helm provides for you. So hopefully, if you're a beginner, you learned a lot about how to get from a web app to a cluster, and about version handling. And if you're advanced, I hope this gives you a good way to teach maybe junior engineers, or people newer to your team, about Kubernetes, Helm, and how to get everything onto the cloud. So thank you so much for your time. I think I have three minutes for questions if anyone has any. Yeah. So the question was about deciding between kubectl and Helm. I think kubectl is great for the development process, but on a beginner level, when I was first learning, I just did helm install consistently, over and over again. In terms of using Helm, I think it's great when you're doing structured releases and that sort of process.
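For comparison, the kubectl-only path just walked through is roughly the following. These commands assume a live cluster; the file and deployment names are reconstructed from the demo, so treat them as illustrative rather than exact.

```shell
# Each resource is applied on its own -- you manage the ordering yourself.
kubectl apply -f service.yaml
kubectl apply -f deployment.yaml
kubectl apply -f ingress.yaml

# Imperative changes are separate commands, with no coupled "release":
kubectl set image deployment/nginx nginx=nginx:1.13
kubectl scale deployment nginx --replicas=9
```

Five commands, no single revision to roll back to — which is exactly the coupling gap that Helm's one-command install, upgrade, and rollback fills.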
And that's the ultimate way I think it's useful, along with packaged charts and things like that, which are super useful. Yeah — over there. So I think the question was — I'm going to try to see if I got everything — Helm was useful for managing those relationships, but you're having issues with database migrations. Yeah. I'm not sure I'm the best person to answer that question, but I think there are a lot of problems being worked on in the storage world, and I'm not well versed in it. StorageOS is actually here, and I think they have a talk tomorrow as well about storage solutions for Kubernetes. Persistent storage — having state for your databases — is a huge problem, and I feel like one of the top ones in the Kubernetes world; it's really interesting. That's a great question; I'm sorry I couldn't answer it. Definitely chat with the StorageOS folks; they seem cool. Yeah. Thank you. I think that's all for my time. So thank you so much, everyone.