All right, I'm gonna go ahead and get started. Looks like we have most of the people who wanna be here, here, except that person in the back who's coming in. You didn't miss anything, you're good, come on in. Welcome to my talk, "Using Minikube: Kubernetes for Node.js Development." A little about who I am. My name is Troy Connor. I'm a software engineer at Emerging Technology Advisors; we usually just say ETA because it's really long. What I do is tinker with robots in JavaScript, so it's been a lot of IoT stuff and I've been at all of those talks. I help maintain an npm module called n, for Node version management. I think something came out today, Node 8.1 or so. And I'm a US Navy veteran. That's how you get a hold of me on the internet: troys0820, on all the things.

All right, we have a productivity problem. Wherever you work, you're being told to push out apps faster. But how do you push out apps faster? You can only develop at a certain pace, and whatever tooling you're using only gets it out there as fast as it can. I'm not talking about needing to code faster; you can buy a keyboard that might help with that. Actually getting the app to users is the problem we face now, not the tooling we're using. So how do we change that?

For me, I used to come up with a whole bunch of excuses so I wouldn't have to deal with it. Here are some of them. "My code is compiling." I don't know if anybody ever used that one. "That's a feature." I love that one. But my favorite, the one I used to always use, and I can't use it now because what I'm gonna show you takes it away, is "it works on my machine." We never knew what the problem was, because if it worked on my machine, it had to be DevOps's fault, right? There's only one problem: we're not shipping my machine. And I'm glad, because I get to keep it.

What fixed that is Docker, which is the underlying thing that helps containerize applications. In a lot of the talks you've been to today, you've been hearing about containerizing, pushing stuff up into deployments, things like that. That fixes the "it works on my machine" problem, so you can't use that excuse anymore.

This is what my workflow used to look like: I'd just make the app, leave it alone, and get paid. It's DevOps's problem now, not mine. Then Docker came out, and that was supposed to solve the problem, right? But now there's a new problem: your workflow changed, and you have to adapt so you can actually help DevOps and collaborate. If you don't have a DevOps team, that person might be you. And if you do, you don't wanna see them on a Friday when you push to production, do you? So now you build your app, you run your local integration tests, and then your DevOps team uses a tool like Jenkins to build it, push your Docker image, and do all of that automatically. But did we solve our problem? It sounds like there are more steps now. The first slide just had "build the app," and now there are four bullets. So how do we control it? We control our containerization projects with an open source thing called Kubernetes, an open source system for automating deployment, scaling, and management of containerized applications.
So now we have this tool that's gonna help us: it restarts containers when they fail and gives us the status of what's going on with our applications. But there's one problem. If you're trying to set this up in production, on, say, one of the cloud services listed at the bottom, it's not really easy. That's the problem a lot of people face coming into Kubernetes: if I can do this with Docker, I'm fine, but once we get to Kubernetes, I don't wanna do any setup that's more than pushing a button, because I'm gonna blame it on DevOps. And that's the goal of this talk, right?

But, you know, I use Docker locally. I learned how to make images, so DevOps kind of handed that part of the responsibility to me. So how do I work with Kubernetes locally? And should that even be my responsibility? You do wanna help DevOps, because they're gonna keep passing stuff down. It seems like the more I talk about this, the less DevOps has to do, right? You might work yourself into a job, or work them out of one, but we'll get to that later.

So how do I do this locally? Wait for it. Of course you can do this locally; that's why we're here. We're here to discuss Minikube and how we're gonna work the magic so that DevOps will not be needed in your company anymore. So what is Minikube? Minikube is Kubernetes on a single-node cluster. You pretty much have a VM on your laptop, running on one of several drivers (VirtualBox, xhyve, which is the one I'm using, or KVM), and it makes a single-node cluster with all the tooling of Kubernetes that would be in the cloud, right on your laptop. So if you were gonna go to your company and say, "Hey, please pay for a provisioned environment so I can use Kubernetes to make our application better," they're gonna say, "No, just look at Troy's talk and use Minikube."

Here's what you're actually gonna need to run Minikube. You're gonna need kubectl (I call it "cube cuddle," but it's kubectl), and you can get that there. On a Mac, which is what I'm using, you can use any of the three drivers I named earlier, and for Linux and Windows it's the same idea. And you're gonna need an internet connection on the first run, because minikube start, which I won't demonstrate live since we're on conference Wi-Fi and I don't want that to be a problem, has to download the ISO image and put it on the VM it just provisioned. Here's how to install it, and that's honestly the easiest way. I'm not telling you to go buy a MacBook, but those few words will get you up and running. And here's how you get it for Linux and for Windows as well.

So like I said, Minikube is gonna act like your Kubernetes cluster in the cloud, but there are certain things it can't do. With Kubernetes in the cloud, you can expose your application on a certain port, and if you use type LoadBalancer, it's gonna give you an IP address and put it right there: public facing, everything's a win. You can't do that locally, because your laptop is not a cloud provider, unless you're not telling me something. So you have to expose it with a NodePort instead, which is not quite gonna convince your boss you deserve a raise for your new DevOps skills. One more thing about Minikube: you use the Docker daemon that runs inside the VM, not the upgraded version on your laptop.
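For reference, the install-and-start flow being pointed at on the slides looks roughly like this, assuming Homebrew on macOS and the xhyve driver (Linux and Windows have their own installers on the release pages):

```sh
# macOS install via Homebrew (kubectl ships in the kubernetes-cli formula).
brew install kubernetes-cli
brew cask install minikube

# First run needs internet: it downloads the Minikube ISO and boots a
# single-node cluster in a local VM (xhyve here; VirtualBox and KVM are
# the other drivers mentioned in the talk).
minikube start --vm-driver=xhyve

# Sanity check: one node, acting as the whole cluster.
kubectl get nodes
```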
Right now Docker is at, I think, 17.06 (we're in June, right, so .06), but inside Minikube you're gonna get something like Docker 1.11. I mention that because a lot changed between 1.11 and 17.06, so some things you're used to won't work the same way. The functionality still exists; it's just a matter of how we go about it. And that's actually great, because this carries anywhere: if I use Kubernetes on GKE, for example, I know what I can do to make it work, and the same on AWS or any other cloud platform that lets you run Kubernetes.

People ask me how this is any different from Docker Compose, because Docker Compose will also spin up your dev environment locally. I don't need the other person who wrote the microservice I depend on, because with Docker Compose I can just spin it up and everything works. But if you wanna get to the point where you can do this in production, blue-green deployments and all that, you're not gonna run Docker Compose on a cluster, or you shouldn't try to. You wanna use the real commands Kubernetes gives you and start doing some of the stuff I'm gonna demonstrate.

The project I'm working with is called Fix California, and I know that sounds like I'm complaining, but I'm not. All this app does is hit the SeeClickFix API, bring back data about the problems that have been reported through it, and put something nice and pretty on a map. I thought it would be more interesting to do it here in California, since we're here, than back home in Hampton, Virginia, because y'all don't know where that is.

Here's what I'm gonna show you. I have Docker images on my Minikube virtual machine, and I'm gonna deploy them onto my cluster. I'm gonna make replica sets, show health checks, auto scaling, the dashboard, and Minikube services, and I'm gonna show you how to use an ingress to expose multiple applications behind one load balancer. The problem before was that you'd have a whole bunch of microservices, and you don't wanna expose each of them on its own load balancer, because then your Google bill, or whoever you're using, is tremendous, and you probably won't be on the DevOps team anymore. You'll probably be back working with Minikube until you figure it out. Instead, you keep one load balancer and use an ingress controller to route different endpoints to the different services.

This is what I did before you got here, before I got on the plane. I used Minikube version 0.18 and ran minikube start; that provisions the cluster I need, and if I stop it and start it again, it's the same cluster, so I don't lose any of the data. minikube docker-env gives me the Minikube Docker environment so I can interact with the virtual machine that's hosting all the images; when I eval its output, my Docker client is talking to that environment. And I already pulled all the images I needed, so I'll show you what I'm talking about. Is this too small? Can you guys see it? A little bigger? Or is that thumbs up for bigger, or thumbs up for cool? Okay.
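For reference, the docker-env step described here works roughly like this; the image name is borrowed from the demo rather than from his exact build command:

```sh
# Print the variables that point a local Docker client at the Docker
# daemon running inside the Minikube VM.
minikube docker-env

# Apply them to this shell; from here on, `docker` talks to the VM.
eval $(minikube docker-env)

# Anything built now lands directly in the VM's image store, so the
# cluster can run it without a push to any registry.
docker build -t troys0820/fix-cali:v1 .
docker images
```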
Okay, I'll show you later. So if I do docker images, this is not on my local machine; this is on the Minikube virtual machine. Those gcr.io images are actually what makes Kubernetes run. Those are the containers that manage the replica sets, listen for pods to destroy, and everything like that. If I'm going too fast, just let me know and I'll slow down; I'll explain more any time you have a question. And if you notice, there's troys0820/fix-cali in versions one, two, and three, and then there's an nginx proxy.

So what I'm gonna demonstrate now is how I would, okay, I saw people I knew, I'll say it again. What I'm gonna demonstrate now is how I'd deploy this application to my Minikube cluster. I aliased kubectl, the thing I told you you needed to download, to kb, because I really don't like typing that much, and typing in front of y'all and misspelling words is not a good thing. So I'm gonna do kb run, I'm gonna call this fixit, and I'm gonna press up, because I typed this already. If you look at the last part, where it says image pull policy IfNotPresent: it's gonna look locally first, and only if the image isn't there will it try to pull from Docker Hub or the Google Container Registry. The whole point of this is that you make these Docker images and use what you already have on the cluster. So I'm gonna use version one, and... there it is, deployment created. That's not the party trick, so just hang on.

So if I look, it shows that I made a pod. A pod is a collection of containers in the same namespace that are gonna interact with each other. What I'm gonna show you now is that I'm gonna put an nginx container in front of it and expose that, so you use the nginx container to get to the application. I'm not exposing the deployment I just made directly, because even if I tried to go straight there, I couldn't get to the application. Instead, I expose it as a ClusterIP. It'll all make sense in a minute, I promise.

Okay. I know I didn't say anything while I was typing, because if I'd talked, I would have spelled something wrong. What I did was make an nginx container sitting in front of the fixit app. What I'm demonstrating by putting nginx here is that Kubernetes has DNS, so nginx doesn't need to know the IP of what I just deployed. As long as you point it at "fixit," the name of the service, they hook up to each other automatically. Then, see that service at the bottom that says nginx, 80 colon three-one-oh-six-something? That NodePort is how I'm gonna get to the application. So if I do minikube service, not fixit, because that's not exposed to anything, but minikube service nginx, it should bring up my local browser. Come on, internet, don't fail me now... there it is.

So that's the application. At the bottom there are a lot of reported issues, some of which we won't tell Santa Clara about until after we leave. That's the first version of this app, v1. Now, if I wanted to change v1 to v2, I don't have to shut this version of the app down. I can just edit the deployment and watch it fix itself automatically. So I do kb edit deployment fixit, and it opens up my Vim editor. Can you guys see that?
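Before the edit-deployment step, for reference: the deploy-and-expose flow just demonstrated looks roughly like this. The image names (including the nginx proxy image) and the app's port 3000 are assumptions, not his exact commands:

```sh
# He aliased kubectl to save typing; the commands are the same either way.
alias kb=kubectl

# Run the app from an image already on the Minikube VM.
# IfNotPresent: check the VM's local images before Docker Hub / GCR.
kb run fixit --image=troys0820/fix-cali:v1 --image-pull-policy=IfNotPresent

# Expose it inside the cluster only; nginx finds it through cluster
# DNS by the service name "fixit". Port 3000 is an assumed app port.
kb expose deployment fixit --port=3000 --type=ClusterIP

# Put the nginx proxy in front (hypothetical image name) and expose
# it to the outside world via a NodePort.
kb run nginx --image=troys0820/nginx-proxy --image-pull-policy=IfNotPresent
kb expose deployment nginx --port=80 --type=NodePort

# Open the NodePort service in the local browser.
minikube service nginx
```

Because the fixit service is ClusterIP only, the single way in from outside is through nginx's NodePort, which is the point of the demo.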
I'm gonna scroll down carefully, because I don't wanna mess this up. Change it to version two, and it says deployment edited. So if you look right now, hopefully I'm fast enough, it's gonna start terminating pods. Well, I only had one pod, so it terminates that one and then creates another. And if I do kb get pods, there's a new one up there. The one that used to be there, the one ending in wvx9l, is gone. The new pod is up, and the app just transitioned to this version, which has zip codes. That was the new feature; I didn't know what else to add, so I put zip codes up there, because people wanna see what's wrong in certain zip codes. So the zip codes are now on this version of the app, and it rolled over without me shutting anything down.

Now say more people in Santa Clara are starting to understand what's going on, so we wanna scale. If you were here for the earlier talks, you saw that we scale horizontally. Let's get this to five replicas. If I scale the deployment, it starts making more pods, and they just start working. Once those are created, you have five replicas of the app I deployed earlier. Where this helps is when you start receiving a lot of traffic. That's why I put the load balancer in front: it doesn't matter where a request goes, it gets routed to an instance that doesn't have anything on it.

Where we can see all of this is something called the Minikube dashboard, an add-on that's automatically hooked up when you run minikube start. The dashboard shows you everything you wanna know about your deployments, and it looks real pretty. You can actually do stuff from here, which I'm not gonna do, but here you go: here are your pods, you have five of them. If you look, you can see the memory and the bytes; some of them haven't even been touched. You can view the logs of a container, and it says: okay, cool, we ran node app.js, we went to one page, and we went to the next page. That's what happened. I won't do the curl demo, because I feel like I'm running out of time, but I'm gonna demonstrate the next part, where the health checks and the auto scaling come in.

All right, I'm gonna clear this out and delete the two deployments so I can start from scratch for the next demonstration. kb delete deployment fixit nginx. And let's delete it. When it deletes a deployment, it also deletes the replica set, so nothing recreates the pods. If I were to delete just the pods, they'd start right back up again. Delete the services too... okay. I don't have anything now.

So now I'm gonna use a different command, kb create. Let me show you the file first, because that'll actually help you understand what I'm doing. This is the deployment I was running the whole time before, but this one's different, because now it has a liveness probe and a readiness probe. What that defines on the deployment is that when you launch it, Kubernetes keeps hitting that /health endpoint to see if your pod is alive and ready, and as long as it's there, it says: okay, it's good.
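A rough sketch of that whole sequence: the rolling update, the scale-out, the teardown, and a probe-equipped deployment. The image tags, the port, the /health path, and the probe timings are all assumptions, not his actual manifest:

```sh
# Rolling update: change the image tag in the deployment and let the
# replica set terminate the old pod and create a new one.
kb edit deployment fixit        # change image ...fix-cali:v1 -> :v2 in Vim
kb get pods                     # old pod Terminating, new pod Running

# Scale out to five replicas behind the service.
kb scale deployment fixit --replicas=5

# Tear everything down. Deleting a deployment removes its replica set
# and pods; deleting only a pod would just get it recreated.
kb delete deployment fixit nginx
kb delete service fixit nginx

# Recreate the app declaratively, this time with liveness and
# readiness probes against an assumed /health endpoint.
cat <<EOF | kb create -f -
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: fixit
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: fixit
    spec:
      containers:
      - name: fixit
        image: troys0820/fix-cali:v3    # hypothetical image name
        ports:
        - containerPort: 3000           # assumes the Node app's port
        livenessProbe:                  # relaunch the container if this fails
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 15
          periodSeconds: 10
        readinessProbe:                 # only route traffic once this passes
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
EOF
```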
It keeps doing that over and over and over again, and once the probe fails, it says: oh, we need to relaunch another one. I'm defining HTTP endpoints here, but you can do TCP probes for your application as well. So I'm gonna create this... okay. Now I've created that, and I still have to expose it on a ClusterIP, and I'm also gonna do the same thing with nginx, so I should have two pods running. Okay, expose nginx, but this time, instead of doing a NodePort on nginx, I'm gonna do ClusterIP. So now I can't access either of these at all. The reason I'm doing that is that when I use the ingress controller, the nginx one bundled with Minikube, you're gonna be able to put these services at certain endpoints. Then when I go to the Minikube IP slash nginx, it routes back to the right place. Right here, you see the services: there's no colon, no NodePort, on any of them.

To wire that up, I have to create an ingress resource, and an ingress resource is something simple. It looks like this. All it says is: on path /nginx, I want the service named nginx, and on path /fixit, I want the fixit service. Now, if you remember, nginx is sitting in front of fixit, so I don't actually need the /fixit path. I'm just showing you that when you use ClusterIP, you can put a service there and use your ingress controller to hang however many services behind it. So let me demonstrate. I create it first... all right. If I get my ingress, it's not ready yet; it should give me an address soon. Once it gives me the address, it'll be available... all right, cool. So now I go to that address, slash nginx, and it works. That was the party trick.

The one caveat is that it's not secure right now, because SSL is not terminated at the nginx ingress controller, and that's another tutorial for another day. But basically, what this means is that you can put as many microservices as you want behind this ingress controller and just give each one an endpoint. They're all in the same namespace, so inside your Kubernetes cluster, your Node app can just see everything that's in there. That's gonna help you develop a lot faster, because once you start putting things behind each other like this, you have all the resources you need, instead of spinning up a Docker container that crashes and then saying, oh, I don't know how to replace it, it's not set to restart, anything like that.

What I do wanna show is Grafana. It's an add-on that's used with Heapster, and it gives you all the data you want from your cluster. You can go here after you launch it and see everything about it. And one thing I'll mention, but can't demonstrate too well, because I can't get this thing to go over 50% CPU usage, is auto scaling. If you want to auto scale your deployment, you do autoscale deployment fixit, minimum, let's say minimum two, max five. Then if you get your HPA, your horizontal pod autoscaler, it says: okay, I always need two pods, but once it starts going above 50%, give me five. And it just does the horizontal scaling in and out for you, so you don't have to worry about it. Then you can go home, blame DevOps, and try to get that raise you wanted.

So that's all for my talk. If anybody has any questions, I'm here for you. Questions?
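And, roughly, what that ingress resource plus the autoscale command look like; the paths, service names, and ports here are reconstructed from the demo rather than copied from it:

```sh
# The nginx ingress controller ships as a Minikube addon.
minikube addons enable ingress

# One ingress, many services: route by path instead of paying for a
# load balancer per microservice.
cat <<EOF | kb create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fixit-ingress
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        backend:
          serviceName: nginx
          servicePort: 80
      - path: /fixit
        backend:
          serviceName: fixit
          servicePort: 3000
EOF

kb get ingress                  # wait until an address shows up
minikube ip                     # then browse to http://<that ip>/nginx

# Horizontal pod autoscaler: keep at least 2 pods, burst to 5 when
# average CPU goes above 50%.
kb autoscale deployment fixit --min=2 --max=5 --cpu-percent=50
kb get hpa
```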
So I'm just wondering: all the commands you showed us running locally in Minikube, are they the same commands you would use when you deploy to the cloud?

Same exact commands. That's the beauty of it, because this translates directly into the skills you're gonna try to get that raise for, and you don't have to learn anything new. A lot of times, when you're trying to provision environments, you have to learn a new command-line tool, and then it's, oh, I don't really know what I'm doing. So you don't have to be afraid of this, because when you're developing locally, you're figuring out the best way to architect it. What I pretty much just did here was architect it, if that's a word. I figured out: hey, what do I need to do with this endpoint? How am I gonna get these services to talk to each other instead of using something else? If I wanna use an endpoint with serverless technology, I can try that here before we even think about deploying it. So it's gonna save you money, which should make your company pay you more, because now you have these new skills.

Okay, thanks. No problem. Yes?

Great talk. I'm wondering, what is the connection between Cloud Foundry, since we're at the Cloud Foundry Summit, and Minikube?

Oh, yeah. Being in the Node.js track, I was trying to show how we can work with this, because you can actually deploy Kubernetes on Cloud Foundry. And this was one of the things being discussed just before the talk, whether these are competing technologies. I don't know the exact words you were using, and I don't wanna speak for you...

Some have observed there's some overlap between the function of Kubernetes and the function of Cloud Foundry, and people are figuring out what that means.

Yeah, and the same thing is happening with Docker Swarm mode and Kubernetes. The fact that these technologies overlap in functionality is good for the community, because we can all learn from each other. If we're not implementing something here, or we're implementing it differently in Cloud Foundry, or even putting Kubernetes in Cloud Foundry, we can all learn how to deploy faster. That's the basis for what we're here for. Node.js was originally used to build microservices fast. The last speaker said it: we build stuff fast and iteratively. So you can use this to build your stuff iteratively, faster, and even use Cloud Foundry as a platform to make it all work for you.

All the scripts and code that you showed, will they be available on GitHub?

The GitHub repo is there, but I realized I didn't write up what I ended up showing. So what I'm gonna do is put everything in the readme of the repo. I'll even open an issue for myself to redo it, and then I'll list everything I did. That way, if you wanna try this at home, you can just pull it down and start deploying stuff. No problem.

Any other questions for Troy? Okay, thanks very much, Troy. Thank you.