Good morning. Oh, hell no. You're not going to have me come up like that. Hold on. A few house rules. When speakers come to the stage, we do standing ovations before and standing ovations after. Anyone know what a standing ovation is? You get to practice. Sit down.

Good morning. Welcome to KubeCon 2017. Looks like all of the Kubernetes users are actually here, globally. One thing that amazes me is that this still seems to be the community conference that it started out as. I want to give a big shout-out to Patrick Riley and Joseph Jackson. They were here when they started this conference. They focused on community, and it's remained that way since. So let's give them a big round of applause.

So I'm going to get right into your first speaker. It's me. And I'm going to allow me to reintroduce myself. Is that okay? "Please welcome to the stage, staff developer advocate at Google, Kelsey Hightower."

So I was supposed to give a project update. I love you too, probably. Don't quite know you yet. So we're not quite there yet. Twitter friends are not your real friends. So I was supposed to give a project update, and going through the changelog, I see that we hit a major milestone: the changes were so boring, I have no updates for you. And this was the goal the whole time, to get Kubernetes to a place where you can actually build things on top of it, extend it, grow the community and the ecosystem from there, and keep the core boring. If you're a contributor to Kubernetes, raise your hand, because you've helped us finally get there. Big round of applause for getting Kubernetes close to boring.

If you're new to Kubernetes, it's going to set you free. But first it's going to piss you off. While it's boring to many of us, there are still a lot of challenges. But one thing I want you to understand is that Kubernetes is not the endgame. I see people do this all the time: they get Kubernetes installed everywhere, and then they go out and give everyone kubectl. Who is that?
Is that you in the crowd? "I'm just going to give everyone RBAC roles and everyone can just use kubectl." kubectl is the new SSH. Do you remember virtualization? If you had to SSH into the server to do deployments, or to do the majority of your job, you knew something was wrong. I think kubectl has now reached that point. If you're using kubectl to deploy from your laptop to production, you're missing a few steps. We're going to go through now and talk about how you should be thinking about all of these tools.

Throughout the day, you're going to see lots of tracks. You're going to see things like Istio, serverless, Kubernetes, tons of moving parts. But the thing is, they're at their best when they're hidden from everyone in the organization. If you're doing it right, no one should probably even know that you're using Kubernetes. So we're going to try to walk through that today. That's my only slide, by the way. So we're going to be doing live demos. All right, let's do this. I'm going to switch to the slides.

So we all know what Kubernetes is. If not, you'll learn. We have a nice 101 track today that will teach you a lot of Kubernetes from the ground up. What I'm going to do is try to talk about Kubernetes and the way you should be thinking about it. I wrote this guide called Kubernetes the Hard Way so that everyone could learn how to do Kubernetes. There is a big difference between installing Kubernetes and using Kubernetes. Chances are, you're going to be on both sides, and we've come a long way. Raise your hand if you believe installing Kubernetes is easy. That's like seven hands.

Here's my first demo. I'm hoping it works. I think it's gotten to the point where it's so easy that you can just ask for a cluster. So we're going to try to see if we can just ask for a cluster and see if it will appear. Fingers crossed. Say one for the demo gods. So we're going to try this. No one sent me a DM or anything to interrupt the flow. Let's try this. "Okay, Google, talk to Kubernetes Engine."
"Sure. Here's the test version of Kubernetes Engine. Good day." "Create a Kubernetes cluster." "How many nodes shall I provision?" "Creating an eight-node Kubernetes cluster in the us-west1 region." That's what I call Kubernetes the easy way. What do you mean? All right. So after today, it's not hard anymore. What's hard is that collection of hardware that you have on-prem. That's not our fault. You've got to figure that one out.

So the next thing we want to talk about is, once you have a Kubernetes cluster, again, our goal is not to use kubectl for everything that we want to do. People ask me all the time, "Kelsey, what is the best practice for running Kubernetes in production?" The truth is, there are some known patterns, but I'm not quite sure everything is super mature and set in stone where we can go off and just have a complete list of best practices: you do this and you're good. What I like to think about is separate clusters based on your organization's structure. If you work in the enterprise and you have to open a ticket to get a deployment, you probably need to have separate clusters for each of your environments. Trust me, it will be easier that way. Do not go down the RBAC rabbit hole if you don't have to.

But as a developer, how many people are developers in the crowd? Are you looking forward to getting your nice kubectl installed on your laptop? No. You see that, ops people? No one cares. As a developer, you have probably one flow in mind. Everything else is noise. So I have this repository, and it has this app, it's called Hello World. Complex, I know. And as a developer, this is how you envision the world, in all of its perfection. You want to clone a repo. Installing Kubernetes is not the first step that comes to mind. I'm going to clone the repo that I'll be developing against. We'll do this here. We'll go to my laptop. And I'm going to do a git clone. Amazing. The internet works. All right?
And then as a developer, you want your actions to trigger something else to happen. Ideally, if I make a code change, all I want is a URL to tell me where it's running. You get bonus points if you give me metrics to tell me how well it's running.

So the next thing we want to do here is create a branch. So we'll say git checkout and create a branch called change-message. We're going to change the message from Hello World to something else. So I'm going to come here and we're just going to edit this app. Go is the best programming language in the world. I'm not biased at all. Yes, you can clap for that. Hell, yeah. All the PHP people are like, "No, it's not." It's okay.

All right. So we're going to change the message. And we'll look at the diff here. All right? Looks good to me. Now, the contract from the developer is how to build this application and what the dependencies are. That's what the developer owns, and that's okay. That's the contract they provide. And that's where the Dockerfile comes in. Sweet. Those are the only things I care about: my source code and my Dockerfile. Everything else is implementation detail, or should be.

So once we have this, we'll commit it. So git commit, and always use helpful messages, like "change message." Oh, yeah. Who does this one? Do you hate when people do this? It's like, what did you add? So you've got your message, and now we're going to push it. Remember, we have this branch. git push origin, and then we push this branch. Okay?

So at this point, what do you expect to happen? You know, switch to kubectl, write some YAML? Lots of people like that. Sweet YAML. Nobody wants to write YAML. What you expect is that when this gets checked into a branch, it should be somewhere ready for me to test in a staging environment. So I'm going to go here so we can see some of the implementation details. Let's look at our build pipeline. How many people have end-to-end continuous delivery build pipelines? Damn.
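The branch, edit, commit flow described above can be sketched as a shell session. The repo name, file contents, and branch name here are stand-ins for the demo's real repository, and a local scratch repo replaces the real remote:

```shell
# Hypothetical developer flow: repo name, file contents, and branch name are
# illustrative, and a local scratch repo stands in for the real remote.
set -e
cd "$(mktemp -d)"
git init -q hello-world && cd hello-world
git config user.email "dev@example.com" && git config user.name "Dev"

echo 'message = "Hello World"' > app.go          # stand-in for the real source
git add app.go && git commit -qm "initial commit"

git checkout -qb change-message                  # the feature branch
echo 'message = "Hello KubeCon"' > app.go        # the one-line change
git add app.go && git commit -qm "change message"

git log --oneline                                # two commits, on change-message
```

In the real flow, `git push origin change-message` would follow, and that push is what the build pipeline reacts to.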
We should have had a continuous delivery track. Because I think once you get to Kubernetes, this is the next holy grail that you have to chase. At some point, this is going to be table stakes. I think it already is. This is where you really should be focusing your time, big time.

So here's this build pipeline. I have a few rules based on a developer's intentions. One is, if I push to any branch except master, based on this regular expression, I want to deploy to the staging environment. If I tag it, then we want to go to QA: we'll build a container image based on that tag and propagate it to production from there. That is the goal. Now some of you are like, where are the YAML files? Where's the deployment descriptor? Implementation detail, folks. No one cares, or shouldn't.

So now we go here and we look at our build history. I pushed to this branch, and we see just now a build completed and succeeded. It built an image with this particular commit, the commit of the repository, so the developer can chase it back to see how it was built. And then if we go to our pods and I hit refresh, we see we have our app in the staging environment. Now, you don't actually have to show this to anyone. Ideally, you should have a URL that maps to this running application. There's no need to use kubectl to figure out what the IP is. Anyone ever heard of this thing called DNS? Oh, it's sweet, if you've never used it. People reinvent DNS all the time. "What are you doing?" "That's DNS." "No, it's not. It's distributed service discovery." DNS is what you invented. Again. This week.

So we see our change is live. So what happened here? Let's not worry about it quite yet. Once it's in staging and it looks right to me, the next thing I want to do is merge this and tag a release. Hopefully you're tagging releases and not building Docker containers on your laptop. If you are, be in shame, and then change your behavior. All right.
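The trigger rules described here can be sketched as a small shell function that maps a Git ref to a deploy target. The patterns and environment names are assumptions modeled on the talk, not the demo's actual build configuration:

```shell
# Sketch of the trigger rules: pushes to a non-master branch deploy to
# staging, tags promote to QA, and a push to master by itself triggers
# nothing. Patterns and environment names are assumptions, not the real config.
deploy_target() {
  case "$1" in
    refs/tags/*)       echo "qa" ;;       # tagged release: build image, deploy to QA
    refs/heads/master) echo "none" ;;     # pushing master alone does nothing
    refs/heads/*)      echo "staging" ;;  # any other branch goes to staging
  esac
}

deploy_target "refs/heads/change-message"   # prints: staging
deploy_target "refs/tags/1.0"               # prints: qa
```

The point of keeping this logic in the pipeline, rather than on laptops, is that the developer only ever expresses intent through git.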
So the next thing to do as a developer is, I'm going to check out master, and then I'm going to merge in the change-message branch. Everything looks good here. And now I'm going to git push origin master. This is the experience you should be after when you adopt all these tools. Do not leak your implementation details into the developer workflow. You don't have to.

Once I push this change, nothing should happen, because I haven't done the thing that triggers. Everyone at once: what do I need to do to trigger the next step? Tag a release. So we're going to tag this release. git tag 1.0. git push origin --tags. Now that I've tagged this release, my workflow assumes that at some point, if the build passes and the tests pass, it should end up wherever you tell me QA is. I don't care where it is. Give me a URL.

If you give me a URL, we'll come here. And we know we need to go through our build pipeline. So we'll take a peek at that. If we go to images really quickly, we'll see down here in hello-world that right now our build is probably still going. And when it's done, what we should see is 1.0 tagged here. And if that's successful, we should see it deployed to the next environment. I'm impatient, so I'm going to hit refresh as if it's going to go faster. It probably won't.

Let's look at the build history. So let's see what it's doing. We'll take a peek into the build pipeline. What I'm doing now is checking out the repository, grabbing the tag, and then checking out my YAML configs from a separate Git repository that holds my Kubernetes deployment config for that entire environment. Now, the nice thing about this is that you can actually have multiple teams work on those Kubernetes configs independently of the pipeline.
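The merge-and-tag step looks roughly like this against a local scratch repo. Identities and messages are illustrative; in the real flow, `git push origin master` and `git push origin --tags` follow, and the tag is what triggers the QA build:

```shell
# The merge-and-tag release step, sketched against a local scratch repo.
set -e
cd "$(mktemp -d)"
git init -q app && cd app
git config user.email "dev@example.com" && git config user.name "Dev"

echo 'Hello World' > app.go && git add . && git commit -qm "initial"
default=$(git rev-parse --abbrev-ref HEAD)    # master or main, depending on git version

git checkout -qb change-message
echo 'Hello KubeCon' > app.go && git add . && git commit -qm "change message"

git checkout -q "$default"
git merge -q change-message                   # fast-forward merge into the mainline
git tag 1.0                                   # the release tag that triggers QA
git tag -l                                    # prints: 1.0
```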
And what I'm doing in this case is, I'm checking out the infrastructure repository, and I'm using the kubectl patch command to patch the one container image that I'm concerned with, and then I issue a commit to the infrastructure repository. I want a history for all of these things. So if this works out well, I can go to my QA infrastructure repository and see that now we have a new commit. If I look at that new commit, what we see here is the change that I expect. And again, no one needs to see this repository, but it's a good place to track all of the changes that you make, whether it's a human doing the change or a CI/CD system doing the change.

So once this is in place, where do you now expect the deployment to be? Let's try with just the URL, without looking at anything else. So we come here, and we'll go back to DNS, and we'll just hit QA. So it looks like it's working, and we'll verify the deployment here, hit refresh, and it's there.

Now, there's a missing piece here. If you don't want people asking for kubectl again, then you've got to give them visibility. They can't just be flying blind. That's when people start asking for access again. So you've got to give them more tools. Now, anyone notice anything hidden in my deployments? I'm going to show you something here. kubectl get pods. Actually, I'm going to show you the functionality that I get first. So I'm going to go here, and I have Grafana running over a proxy, and I'm going to show you what value looks like. We get so caught up in the tools that sometimes we forget about the actual value that they bring. What you want to do is give people visibility. One way of doing that is to give them a curated dashboard that gives them insight into what's actually happening. So this is my QA dashboard, and ideally, if someone were to hit the QA URL, maybe doing some testing, we'd see it here. Let's send some traffic. Oh, we need curl, actually. Okay. So now we're hitting the actual application.
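The promotion step can be sketched as follows. The demo used `kubectl patch` against the checked-out config; this illustration does the equivalent image-tag bump with `sed` so the sketch runs without a cluster, and the repo layout and image names are made up:

```shell
# Sketch of the promotion commit: check out the (hypothetical) infrastructure
# repo, bump the one container image tag, and commit so there's a history of
# every change, human or CI/CD.
set -e
cd "$(mktemp -d)"
git init -q infra-qa && cd infra-qa
git config user.email "ci@example.com" && git config user.name "CI Bot"

cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  template:
    spec:
      containers:
      - name: hello-world
        image: gcr.io/example/hello-world:0.9
EOF
git add . && git commit -qm "initial config"

NEW_TAG=1.0   # the same tag that was built and tested in QA (never rebuilt)
sed -i.bak "s|hello-world:[0-9.]*|hello-world:${NEW_TAG}|" deployment.yaml
rm deployment.yaml.bak
git add deployment.yaml && git commit -qm "bump hello-world to ${NEW_TAG}"

grep image: deployment.yaml    # shows the updated image line
```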
Maybe you're running some integration tests. Ideally, if you give people visibility, they will stop asking for tools like kubectl to do their job, because now they can just observe what's actually happening in the cluster. In addition to metrics, you probably want logs and HTTP traces. The more visibility and tools that you give to people, the less they're going to want to touch the infrastructure, right? And that frees you up to upgrade it, move it around, without people getting attached to where it's actually running.

So once everything looks good in QA, what do we expect to happen now? Not everyone can deploy straight to production. This is why I don't like to show these end-to-end pipelines where you go straight to production. Anyone ever had an outage in production? What does the developer say? "It was a small change." Yet everything is down. "Not my fault, man." So you might want to put that manual check in there just to see what's actually happening.

So let's take a look and see what's going on. In my case, I need to trust but verify. So in the last stage of my pipeline, I'm just going to issue a pull request from the build system. The build system will propose changes to the infrastructure so that the team can understand what's actually happening, until we get a little more comfortable with the system. In this case, the system is proposing a commit that changes this particular container image to this tag, the same one that's used in QA. A lot of people get confused about this one: do not rebuild the container image between environments. That defeats the whole point of testing in QA. Maybe you rebuild between dev and staging as you're prototyping and figuring things out, but not after that.

Review the change. Looks good: 1.0. And now what I want to do here is review it, looks good to me, rebase and merge, confirm, and, you guessed it, if I do that, where do you expect it to be?
You expect it to be in production. So we'll look at the last part of the build pipeline. We'll come back here. As soon as I merge into the master branch, whether it was me as a human (maybe I want to change the entire structure of the deployment config, maybe I want to update the service type underneath there), again, all of that is abstracted away from the developer, because what I'm doing now is taking all of my infrastructure stuff and putting it in this one directory structure. And what you can do is a kubectl apply, recursively, and everything underneath the Kubernetes directory just gets applied again, including the changes from the CI/CD system that concern just the app. That means your services, your ingress, all your deployments, they're all combined, and that way you can stop messing with this thing once you get to a stable point where you're not changing your configs as much anymore.

Now, what tool should you use to manage your configs? Go to some sessions. There are tons of them. There's Helm, there's ksonnet. There are many of these tools that will make it much easier than kubectl patch to manage these things.

So once you have all this up and running, the last thing to think about is, how do I make other changes without going through the pipeline? You need an easy way to break the glass. There may be emergencies where you need to sidestep the process at some point. You'll notice one thing here as I wrap up: in our infrastructure directory, when you look at my deployment config, you'll notice one thing missing. I don't include the replica count. If I don't hard-code the replica count here, I'm free to use an autoscaler side by side, or some other system within the infrastructure, to manage it. So we're going to see what this looks like when we integrate one last system on top, once we get everything working.
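A minimal sketch of the replica-count point: keep `replicas` out of the committed manifest so an autoscaler can own it, and have the pipeline apply the whole directory. The manifest below is illustrative, and the `kubectl apply -R -f kubernetes/` step would need a live cluster, so here only the config check runs:

```shell
# Illustrative manifest with the replica count deliberately left out, plus a
# check that nothing in the config directory hard-codes one. The real apply
# step would be: kubectl apply -R -f kubernetes/
set -e
cd "$(mktemp -d)"
mkdir -p kubernetes
cat > kubernetes/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:                      # no replicas field: an autoscaler owns the count
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: gcr.io/example/hello-world:1.0
EOF

if grep -rq 'replicas:' kubernetes/; then
  echo "replica count is hard-coded; an autoscaler would fight the pipeline"
else
  echo "replicas unset: safe to run an autoscaler side by side"
fi
```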
So what we're going to do is look at the number of pods in production and see what it looks like when another system comes in on top. Here we'll say: while true, kubectl get pods, with the context set to production. So if you're new to Kubernetes, you can actually use kubectl when you have to, and if you do, it does support multiple clusters. Go away, Siri.

All right, we see we have one pod running. And since the replica count is not included in my config, we'll do our last bit to see if we can have multiple systems interact with the cluster. Because Kubernetes maintains the cluster state, we can have multiple actors interacting with these configs at the same time. So here we go. We're going to try this last thing. The demo gods have been on my side so far. "Okay, Google, talk to Kubernetes Engine." "Okay, getting the test version of Kubernetes Engine. Greetings." "Scale the Hello World deployment." "How many replicas would you like?" You crazy. You're going to break everything. "Ten." "Scaling the Hello World deployment to ten replicas." Thank you. I've got to admit, that was pretty dope. And with that, I would like to end the presentation. Thank you.