Okay. Hello. Welcome. Come on in. We've got a couple of people still coming through the door. It's just about 2:45, so I'm going to get moving with my talk about local development using Kubernetes. There's a good number of seats down here in the front right; feel free to point people in this direction if they come in late. All right. So: accelerating local development with Kubernetes. I am Ryan Jay. You can find me online on Twitter, GitHub, and IRC; this is usually what my avatar looks like. I am a developer advocate at Red Hat. When I first submitted this talk, there were terrible things happening, and I believe there still are, in the way that we (and by "we" I might mean you) describe Kubernetes, especially when we're talking to developers. How many people in this room feel like they are primarily a developer, more than an ops person? Awesome. All right, another question: how many of you have used Kubernetes for less than six months? Excellent. Well, this is a very introductory kind of talk. I'm going to try to use just the upstream tooling and show you as much as I can from there, and then we'll talk about what additional tooling you might want to layer on as well. So here's my take on why we have trouble pitching Kubernetes: I think it's because it was primarily a tool built by and for site reliability engineers and operations folks, really folks looking at use cases around system reliability and high availability. There's a lot that people package into their description of what Kubernetes is, and it rarely has any value or resonance with developers. And often it's loaded with boatloads of terminology that just adds insult to injury: load balancing, scaling, delivery automation. These are all things I couldn't care less about when I'm doing local development, usually.
So this has kind of been a problem for the Kubernetes community, and they've just been focusing on their core competencies, which they should: they've been keeping their eye on making things faster, better, more performant. Yet at the same time, if you ask a Kubernetes core maintainer "what is an app?", they will very likely struggle to come up with a clear definition, because there isn't a clear upstream definition in the Kubernetes community of what an app really is. A lot of them will say, well, it depends: are you talking about workloads? There's so much in there. People started off with Docker, which was going to solve it all, containerize all the things, and then as soon as people realized we need to scale these, decompose these monolithic applications, reflect what's happening in the microservices community, and allow these pieces to be scaled independently, you see that model start to fall down. People go from Docker images to maybe Swarm files to Kubernetes spec files, maybe on to charts eventually. There's also a proposal around using label selectors, where you could do a command-line query, kubectl get all -l, to match on label selectors and look for a label with whatever your app name is; whatever comes back from the API, that's what your app is. But that's not a very clear, standards-compliant definition that's developer-friendly in any way, especially if people don't already have a lot of Kubernetes access to play with. So I've linked to a proposal around label recommendations. If you're an app developer, definitely look into whether this proposal fits the bill for you: does it meet your needs? Does it make sense as far as how we model apps? I think there's still a lot more work to do in helping define this term.
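To make the label-selector idea concrete, here is a hedged sketch of what such a query looks like. The label value "guestbook" is just a placeholder, and the namespaced key in the second command is the style the recommended-labels proposal suggests:

```shell
# Ask the API for every resource carrying a given app label; under the
# label-selector proposal, whatever comes back "is" your app.
kubectl get all -l app=guestbook

# The recommended-labels proposal favors namespaced label keys, e.g.:
kubectl get all -l app.kubernetes.io/name=guestbook
```

Both commands need a running cluster and a deployed workload that actually carries those labels.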
But what you can do in the meantime is start talking about Kubernetes in a way that hopefully makes a little more sense to developers, and I'll try to show that today. So this is where I want to get us. I don't know if we can get there today, but what I want everyone to confidently say eventually is: why Kubernetes? Development velocity. That's why. But that's not the agenda of the conference today. So I'm going to do a quick case study to help illustrate how I would pitch Kubernetes locally to my team, and to guide us through it, since we're in Austin, a music-based analogy, if you will. This is a theoretical company, Enterprise Records Incorporated. Here's the picture of the band. I'll point out a couple of folks: up at the mic is our architect, or perhaps a lead developer, but someone who is definitely running the show. He's our frontman. Just behind him we've got Will Ferrell; that's our front-end web developer. And the product team is over on the far end in the nice jacket. So generally, operations teams will come in and realize that the dev team has been working nonstop and accumulating technical debt. They want to help increase your overall velocity as a company, but they know they need to fix some architectural things in the process. And so you may come home from KubeCon and start blathering to everyone in your company: hey, look at StatefulSets, look at this, throwing out term after term, and if you're not careful, people are going to say: what is this guy talking about? Why is he so obsessed with things that have nothing to do with what we're responsible for? We need to ship. We need to ship products, we need to ship features, and we need to ship them faster than our competitors. We're not shipping Kubernetes. We're not shipping developer tools.
"Go find the best in the industry, use those, and let's get back to business" is generally the conversation, and the inertia, that you need to overcome. They're always going to want more, and the web team is honestly going to want to know: how do I maintain my velocity while using containers? It's a fair question, and the Kubernetes community needs to have a real answer. So don't screw it up for us, Gene. We need to get to container town. We need to roll out Kubernetes. Here's my approach to breaking through to the rest of your team. First, minimal onboarding: I think your number one message should be that getting started is easy, and we'll cover getting started briefly. Second, share what you know, and model your I/O. As application developers, one of the most critical things you'll have to be aware of when developing container-based solutions is your I/O: if you have random writes going to some part of your cache or image uploads, you need to model all of that I/O and mark certain folders as read-write and certain folders as read-only. Use read-only whenever possible, and if you expect data to stay around, use a volume or some other persistence mechanism. Make sure, as developers, that it's clearly in your scope to know where your data is as you're operating on it. The third one is to evaluate what's available as far as toolchain goes and choose the best of the day. So here's the easy part: minikube start. That's it. It's one command. Well, there's one other curl: you could do two commands if you want to curl the binary and then move it into /usr/local/bin. A one-line install and then a one-line boot-up, essentially. But minikube start is the easiest way to get going. I'm going to open up a terminal and just paste it in so you can see how easy it is. This is using a virtualization mechanism to spin up a VM.
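The one-line install plus one-line boot-up he mentions looks roughly like this (the download URL is the one the minikube docs publish for Linux; macOS and Windows builds use different artifact names):

```shell
# One-line install: fetch the minikube binary and put it on the PATH.
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
  && chmod +x minikube \
  && sudo mv minikube /usr/local/bin/

# One-line boot-up: spins up a single-node cluster inside a local VM.
minikube start

# Sanity check: the node should eventually report Ready.
kubectl get nodes
```

This assumes a supported hypervisor (VirtualBox, KVM, xhyve, etc.) is already installed.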
They also have a new way: you can pass --vm-driver=none and it'll skip the virtualization and spin up Kubernetes using only containers. You need a Docker engine or something similar for that. But it looks like I'm up and booted. I didn't need to sign up. I didn't need to wait for the ops team to provision servers. I'm now unblocked. So this is the easy part. Make sure everyone knows that not having a Kubernetes environment ready is not an excuse. Staging's down? Ops isn't ready? No. Everyone gets a Kubernetes. Look under your seats: you all get a Kubernetes. minikube is the best way to get started on that: minikube start. I have a link to the official minikube docs if you want setup help. I also have a set of slides I put together. I do a lot of presentations, so I've put together a "getting started with Kubernetes" series in half-hour to one-hour chunks. If you end up at the bit.ly k8s-minikube link, I've got about four different half-hour chunks: one is just the setup, the next is getting familiar with kubectl. So if you want to learn more, I've got a whole series on that. Share what you know is the next big tip, and like I said, model your I/O as developers; that's going to be a major part of what you need to be aware of. Here's one way I like to share what I'm working on. Say I'm starting from scratch and all I have is a repo and maybe a Dockerfile (I'm not going to treat you like you don't know what containers are; I'm assuming you're here at a Kubernetes conference). I'm just going to assume you have a Dockerfile or a Docker image or some kind of container image that you could kubectl run. So if I do this kubectl run command, I can name my application, if you will, and load this image. While I'm at it, I can also add a load balancer that is compatible with minikube, and then add this --dry-run flag.
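A command along the lines he describes might look like the sketch below. The app name "metrics" and the image are placeholders, and the flags are the ones kubectl had around this era, when kubectl run still created a Deployment:

```shell
# Hypothetical kubectl run invocation: name the app, load an image,
# generate a Service alongside it (--expose), and with --dry-run write
# the specs to stdout instead of sending them to the API. The NodePort
# override stands in for "a load balancer compatible with minikube"
# (minikube has no cloud load balancers, but `minikube service` can
# open NodePort services in a browser).
kubectl run metrics \
  --image=docker.io/example/metrics-app \
  --port=8080 \
  --expose \
  --service-overrides='{"spec":{"type":"NodePort"}}' \
  --dry-run -o yaml
```

Drop the --dry-run flag and the same command provisions the Deployment and Service for real.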
This is kind of a complicated example, but --dry-run lets me do something particularly special that allows me to share what I know with other folks. Normally, if I ran this command without the dry run, it would immediately provision the new container using a Deployment, and it would set up a Service, a load balancer, so I can contact this web service. If I add the --dry-run flag, what it does instead is prep all the JSON or YAML, depending on my -o output flag, and write it all to standard out instead of to the API. This allows me to quickly generate manifests, or spec files, that I can share with my team. So if you're having trouble getting started, or if you want to generate your starting point and then share it with another web dev, try using the --dry-run flag as a way to generate your specs. I'll run an example of this so we can see what the output is. Let's see, no output? Oh, I piped it all to this metrics file. Here we go: metrics-review. It looks like the first data element I have is a Service, and then I have a Deployment. So this is now something I can take and run kubectl create -f on, or hand to someone else and let them run kubectl create -f, and they have their local development now staged up with whatever I wanted to deliver. I was able to deploy Hello World; I haven't iterated on Hello World yet, but I've deployed it locally, and that's one significant step in your onboarding process for new developers. Next, they're going to want to test that. minikube service allows you to open up that newly deployed resource in your browser. It looks like this is still downloading the container, but it should pop open in the browser as soon as the download has completed. Next up, I have an example of how I would use this I/O modeling, or one aspect of it, to enable real-time iteration and development against this container I've provisioned.
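The replay step he describes is simple enough to sketch. The filename follows the demo ("metrics-review"); the service name "metrics" is an assumption carried over from the run command:

```shell
# A teammate only needs the generated spec file to stage the same setup.
kubectl create -f metrics-review.yaml

# Confirm the Service and Deployment landed.
kubectl get svc,deploy

# Open the newly deployed service in a browser (works for NodePort
# services on minikube).
minikube service metrics
```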
So I'm going to make a local clone of this repo so I have something to work with here, and, if the wifi holds up, what we'll be doing is checking out a copy of this repo, then using the minikube mount command to mount the repo folder inside the virtual machine. It's then essentially exposed to the node, the host machine inside the minikube environment, and I'm going to expose it inside the VM at /var/html. Looks like I've got a little bit of a wait here. Let's see if there's a network cable; they told me there would be Ethernet. Okay, our deployment has finished. It looks like part of our download happened, and hopefully that cleared up bandwidth for the rest of my network connection, but this is what I was deploying: Kubernetes contributor graphs. I was tweaking some data for contributor graphs, and this was an example project I wanted to iterate on. So it looks like we've created the new browser session, and we've got our local clone of the repo. Let me get out of full-screen mode here. Oops, too far back. Dry run; modeling your I/O; we did the clone. Now I can mount this folder inside the VM, and I'll need to leave this window open to keep that connection available to minikube. Next, I'm going to make a copy of metrics-review and modify it. This is something you'll have to do by hand, unfortunately, but now that you have a starting point, it's easier to go in, look up what a Pod is and what a Service is, then look up the actual spec, find out what fields and values you might want to tweak, and go from there with a valid starting point. I actually have a completed copy of all of these modifications in the repo, stored in this file, metrics-dev. All it does is take the earlier file we generated with dry run and add a volume mount to the pod spec. I'll take a quick look at that from the command line.
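Sketched out, the clone-and-mount step and the volume additions in a file like metrics-dev might look as follows. The repo URL, container name, and paths are all illustrative:

```shell
# Clone the project, then expose the working copy inside the minikube VM.
# minikube mount runs in the foreground, so leave this terminal open.
git clone https://github.com/example/metrics.git
cd metrics
minikube mount "$(pwd)":/var/html
```

The hand-edited spec then references that in-VM folder with a hostPath volume, roughly like this fragment of the pod spec (only the volume-related fields shown):

```yaml
# Illustrative fragment: overlay the live repo on top of the copy that
# was baked into the image, so edits show up on browser reload.
containers:
- name: metrics
  image: docker.io/example/metrics-app
  volumeMounts:
  - name: src
    mountPath: /usr/share/nginx/html   # wherever the image serves from
volumes:
- name: src
  hostPath:
    path: /var/html                    # the folder exposed by minikube mount
```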
It's metrics-dev, so we can see: it looks like there are some volume specifications, and it's going to do a hostPath mount from this folder on the node host into this folder inside the container. If you look at the Dockerfile for this project, the entire repo is basically sitting inside this folder, so I'm going to mount a raw repo inside a running container that's basically serving this repo as a static website. It's a trivial example, but what you ought to see is that I'll have real-time access to editing the HTML and CSS, and then I can just reload in my browser and use a fully container-based toolchain. So let's go back. Now that I have the resulting file, I can kubectl create -f our metrics-dev example, and then, if I really wanted to get fancy... let's see if that gets us anything... let's go into minikube service... not running yet, I think it's still downloading, or something's happened. 404 Not Found. I think it's having a problem with the mount, and I appear to be having a failed demo, but what this should do is basically give you the same view as this page. The only difference, and I'll fix the demo, sorry about this, the only difference is that I would be able to flip to the command line, edit the index file, look for my title here, "Contributor Metrics", and change it to "Kubernetes Contributor Metrics", something like that. Then, as soon as I reload the page on that dev version, I should immediately see the changes reflected in the browser. Those are basically the steps you need to know; sorry I'm having this 404 error on my metrics-dev. Basically, what we've done in the last five or so minutes is set up one deployment to be used for real-time dev and another deployment that I can actually roll deployments to. I can do real-time development and testing, and once I'm happy with those commits and ready to ship, I can do a docker build (still haven't committed yet) and then use this minikube docker-env command,
which will basically bind to the Docker daemon inside the VM, so when I run docker build, the result of the build goes directly into the VM and I don't have to go out to Docker Hub and back down to my machine. So I could be developing, fully containerized, fully offline in airplane mode while I'm flying over the ocean, and potentially have a full multi-service application deployed and developed, complete with Kubernetes, all on my laptop. This is going to drive up your memory requirements, of course, but if you're developing with small or scaled-down solutions, this gives you a much more production-like representation of what you're going to be working on, and hopefully fewer surprises when you do promote your code to production. Any questions about that? That's the easy part. Any questions on the first half? I'll take a quick question. So the question was: what are the virtualization options available for minikube? There's a pretty solid list. There's a libvirt/KVM one; I think you need one extra download to get that to work. There's also support for using the Apple native virtualization, xhyve, and VMware Fusion. A solid list, and all of that's available in the minikube docs. You will sometimes see mixed results with different virtualization providers: sometimes mounting external files into the VM works great with VirtualBox but not with KVM, or vice versa. So with some of these you may have to do some experimentation to figure out the best fit for your development. I'm running on Linux here, but if you're on Apple hardware, maybe you want to use xhyve, as long as this mount operation works appropriately. Another thing that's going to be a big boost in the future (I'm having problems with this external mount thing): with the latest 1.9 release, I know there's support for using local disk for persistent volume claims, so hopefully you can just lay out a larger virtual machine disk and then use
that disk for persistent volume claims and other kinds of persistence mechanisms. That should really help with some of the modeling you're doing with Kubernetes. So now we get to the hard part. There's a popular quote: in some ways the future is already here, but man, it sure isn't evenly distributed. You're going to have different teams with different areas of experience and different requirements, and other people will be at different stages of adoption, some of them fully containerized, some of them stuck with a lot of legacy workloads. So here's the typical adoption path I've seen. People generally start by playing with Docker. Eventually they need to start modeling their I/O: creating volumes and persistent volumes and mapping those into the containers. Hopefully they pick up minikube at some point and have an easy way to model Kubernetes abstractions right on their laptop; having that as just a modeling language on your laptop, I find incredibly useful. Next, after that, if you're looking to share what you know, look into charts. Kubernetes charts are the official way of packaging up these solutions. They're not really labeled as the app definition, but they are the packaging mechanism for Kubernetes, or at least that's kind of how they're described. OpenShift templates are another option, or you can lean on your web developers: I'm sure you know how to do templating. You could use Jinja templates, really any kind of templating system you're familiar with, and hand-roll your own specs once you know what you're dealing with. So maybe start with dry run, generate a couple of manifests, figure out which values you need to modify frequently, and develop workflows for assisting with that, whether it's hand-rolled manifests, charts, or, eventually, things like the new service catalog,
or even leveraging a full PaaS solution; really, the more you add, the closer you get to a PaaS. So from here I've got a variety of solutions that I wanted to give the audience a look at, but my theme for this is: keep it simple. Try to describe these in a way that's easy for developers to understand. From the audience, who has heard of Draft? Okay. Anyone want to offer a description, hopefully without using the word "container" or "Kubernetes"? Anyone? Tom, go for it. Actually, I've got a mic as well. Tom... I'm sorry, Joe. Okay. "So, in one sense, I guess I would say that Draft is, and this is almost a direct quote from the official documentation, an event-driven scripting system for Kubernetes." Oh, close, close. That is close. Anyone else have a guess what I think of Draft? "Event-driven scripting system" is actually the next one on my list, thank you, Joe. Draft is from the Azure team. I think it's one of the best ways to get yourself a starting point: if you're not familiar with the dry-run command, or you want a good starting point, use draft in your repo. It will attempt to produce a Dockerfile for you, and your Kubernetes manifests, so you can deploy. That's my lowest, simplest tool: make it easy to get started, with Draft. Charts is my next one, and my simple explanation is just: share what you know. Have folks heard of Helm and Tiller? More folks than that have heard of charts. You use charts; Helm and Tiller are what deploy your charts. Okay, all right, here we go. Joe, would you like... no, I'll quit picking on Joe. Brigade is their event-driven system for producing workflows. If you would like to see a demo, head over to the Azure booth; I'm sure they'd be happy to give you one. It's a new project, so I don't have any demos prepared, but definitely give it a look if
you're interested in delivery workflows or event-driven systems. I think they've produced a really good collection of developer tools in the past. How many folks have heard of Telepresence? Not too many. Okay, this one I think is particularly important if you're using Kubernetes on your laptop. If you're spinning up large numbers of microservices and you're on a MacBook Air, how far do you think you're going to get? You can get a couple of containers going, but with only an 8 GB memory max on your hardware, you're going to run into limitations real quickly. You can customize minikube: give it more or less memory, give it more CPU cores; there's a lot of customization you can do there. But Telepresence really helps by allowing you to leverage a hosted Kubernetes environment and make those hosted services appear as if they're running locally in your local minikube environment. It uses proxies, so you could have, say, a big Oracle database somewhere else, but still leverage production-quality services (hopefully not production data) during your local development. Telepresence is a good tool to look into: access more, with Telepresence. My last ones are Minishift and oc.
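Before moving on, rough sketches of the tools just surveyed, using the CLI syntax they had around this era; the deployment name, chart, and local command are all placeholders:

```shell
# Draft: scaffold a Dockerfile plus chart for the code in the current
# repo, then build and deploy it to the connected cluster.
draft create
draft up

# Helm (v2, with Tiller installed in the cluster): deploy a packaged
# chart, i.e. "share what you know".
helm install stable/redis

# Telepresence (1.x): swap a deployment in a remote, hosted cluster for
# a process running on your laptop; cluster services resolve as if the
# local process were running inside the cluster.
telepresence --swap-deployment metrics --run python3 app.py
```

Each of these assumes the corresponding tool is installed and a cluster is reachable; treat them as starting points, not exact invocations.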
I've got demos of Minishift down in the Red Hat booth. When I submitted this talk I was in between jobs, so I don't want to put too much vendor content in here, but man, the more of this tooling you layer on top, the more complicated you get, and the more you run into that problem of drowning people in terminology and getting distracted from your core mission of delivering product and features. If all the talking and all the communication goes to accelerate your velocity and help bridge the communication gap between the dev teams and the ops teams, then that's exactly what it should be doing, but make sure you don't overwhelm people with a boatload of new terms fresh out of KubeCon. Try to give them just what they need; give them a solid starting point. Minishift and oc are basically just like minikube, except they spin up an OpenShift environment instead of a plain Kubernetes environment. OpenShift is fully a Kubernetes environment, currently at Kubernetes 1.7 if you're using the OpenShift 3.7 release. These basically give you a more secure, multi-tenant-safe version of Kubernetes. One of the unique things about it is that it won't let you run processes as root. Folks at Red Hat don't consider running every process as root a best practice; it's usually not a good practice in production, and I would argue that if you're not doing it later down your pipeline, why do it in local development? If you don't need to run things as root in local dev, why do it there either? But docker run and kubectl run are generally going to run things as the root user. So if you have PCI compliance, HIPAA compliance, any kind of aspirations toward security at all, take a look at OpenShift, and I'd be happy to give you a demo of what we offer down in the Red Hat booth. So hopefully I've given you a simple overview, just a small couple of pieces of what you can learn and what you can share. Here's a collection of other learning opportunities: the
kubernetes.io tutorials are pretty solid, and there's also a good collection of developer-focused education in the Katacoda interactive examples. I have a lot of developer-focused workshops packaged up using these reveal.js slides, and those are easy to fork and share, so if you want to fork any of my talks, feel free: steal them, and let me know if you get some use out of them. We also have an interactive learning portal at learn.openshift.com, and a challenge to developers in the Red Hat booth: if you complete one section of our learning, we've got a free hoodie for you, so definitely stop by while we still have them in stock. So: developers who want to get ahead, definitely model your I/O and share what you know. Architects, figure out who owns manifest creation; you need a solid way to maintain and distribute that. It's either going to be your dev leads or your architects, but it might not be your web developers, initially at least. QA folks, look forward to saying "sorry, works on my Kubernetes." Operations folks: hopefully it's getting pretty boring for operations, or it should be. I know there's still a lot of work to do with Kubernetes, but primarily you want to focus on keeping the environment running and upgrading it, and not get involved in too much else. Also, you want to loop in security. Security is a big thing to keep in mind: organize your work so you don't end up in a situation where your front-end web developers are maintaining your Docker images with old versions of libssl or outdated core libraries. You want a solid mechanism for patching your workloads, monitoring your workloads, and doing security analysis and usability analysis on your workloads. So, my send-off for you folks: definitely join the community on Slack, in kubernetes-users and in SIG Apps. Most of the developer-focused content happens in SIG Apps. Come join the conversation; share what you know, share your use cases
and experiences, and help us develop a range of solutions that expose or hide Kubernetes with an appropriate amount of visibility for developers. There's a lot of great videos on the SIG Apps YouTube channel as well, so definitely check that out. Hopefully that will help you learn to deliver consistently with containers. Choose the right tools for the job: if you are building a PaaS, recognize it, and maybe use that tool instead; if you don't need a PaaS, you have a good collection of tools to leverage. And then hopefully you all get back to making gold records. That's about it for me. Thank you all for sticking around.