This presentation, and all the presentations in this hour block, are introductory presentations. What you're going to find is that we start with the basics and we move to harder topics as we go throughout the day. So over next door, this is your introduction to Quarkus next door, and here's your introduction to Kubernetes and basic Kubernetes components. Then as we graduate to the next section, we get into more details, and the next section, more details, and more details. So we drill down throughout the day. But I have to warn you, and we figured this out, that we probably put too much into a single day. These are all the topics that our team focuses on here at Red Hat: Edson, myself, and Kamesh, who will be on the stage after me. This is where we spend all our time. We think about Java and Kubernetes, and how to optimize Java-based applications for an OpenShift or Kubernetes environment, how to build next-generation cloud-native applications with microservices and serverless architecture. We spend a lot of time in this world, and so we decided to bring you all of our topics. So you're going to see things like really advanced Tekton pipelines, as an example. You'll see blue-green, you'll see canary, and you'll see things like basic Kubernetes and basic Quarkus, all throughout the day. So just be prepared for that. I know we're going to have your brains bleeding and melting and falling out of your ears before too long. But this is for people who have not had a chance to really be hands-on with Kubernetes. Or if you have been hands-on, maybe you didn't understand certain basic components of it, and we're going to show you an introductory tutorial, if you will. As a matter of fact, all the presentation materials are at this GitHub repo, so make sure you have this link. It's all documented, and you can do it self-service, meaning you can run these materials on your own. So in other
words, what would I do on my Windows laptop or my Mac? How do I download the right things, and how do I get started? Hopefully we've documented that nicely there, and the slide deck itself is available at this link down here, the bit.ly "nine steps awesome" link. I can even tell if you join that link on your phone, because I can see that, let's say, there are 17 people already joined to the deck right there. It is a live Google document, and you can join right in, in real time. So please do, because we want you to have these materials. Everything we do on our team is open source, because we work for Red Hat. So the slide decks are open source; the materials, the documentation, open source. You can give us a pull request or a GitHub issue if you have a question, or if you want to actually help us fix something, because these technologies are changing all the time. So let's get into it. This "Nine Steps to Awesome" is a Kubernetes basics class that I typically run over three hours; we're going to do it today in less than an hour. Oh, we have Edson coming in, talking to us over here now. There's also a follow-on three-hour session on Istio; we'll do a one-hour Istio today. There's also a three-hour Knative; we'll do a one-hour Knative today. So you're actually going to get both of those things. We're going to unplug the other speaker. They can flip the switch... flip the switch, there we go.
Look at that. It would be kind of entertaining if you could listen to Edson and myself at the same time, but boy, would that be confusing. Okay, so just be aware that I do run these classes all the time; the next one for the service mesh is August 27th. If you're an O'Reilly subscriber, then you have access to all this material. This presentation was also recorded back at Devoxx Belgium in November of last year. So if you want to watch it at your own speed, it's three hours where you can hit pause, or better yet, since I talk so fast, you can actually make the video go slower. You don't ever want to take me and go faster, in most cases. But it is all three hours in that recorded session; you definitely should check that out. And of course, this is part of my overall Bruce Lee-themed presentation. This is the first movie by Bruce Lee, Fist of Fury. It's totally awesome. This scene right here with the nunchucks inside the studio, you've got to check that out. This is back before kung fu people could fly through the air and have magic powers, back when they actually had to be real people punching people. If you remember the bit we did earlier with Bruce Lee, the same kind of thing applies. I show this evolutionary path in all of my presentations, so we won't repeat it now, since we did it during the keynote. But this is going from the amoeba phase to the slug phase, through the fish phase, to the four-legged-critter phase, to a monkey that looks like Homer Simpson, so that you too can be a unicorn and do microservices like the cool cloud-native unicorns on the west coast of the United States, or in the startup ecosystem that exists here in Bangalore, as an example. You want to be a unicorn too?
But this is where we're going to focus our attention over the next couple of hours, because this is where Kubernetes, and OpenShift, which is Red Hat's distribution of Kubernetes, and something like Istio and Knative and Tekton and Quarkus, this is where it all helps you. Okay, so that's what we're focused on. The nine steps include all of these components; this is what the nine look like. We won't have time to get through all of them today, because this is normally a three-hour session, but we will cover the key basics, the key tips and tricks to get you started on Kubernetes, the key components you have to get your head around so that you'll have access to the Kubernetes capabilities. You're going to go through these, and you're going to see some of them. One I really want to make sure you think about is the liveness and readiness probe. I want to make sure we get far enough into the session to show you that, because that is the secret sauce. The liveness and readiness probes are the key elements of magic that make Kubernetes do what it does well, and most people mess them up. People forget to set them, which is a problem, or they set them incorrectly, which is an even bigger problem. You want to make sure you understand at least how they work at a basic level. We're going to talk about that, because if you get those set correctly, then you get the magic of rolling updates, you get the magic of blue-green deployments, and things like that.
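Since the probes are the part people get wrong, here is a minimal sketch of what they look like inside a Deployment's pod template. This is an illustrative fragment, not the talk's actual demo: the image name and the /health/live and /health/ready endpoints are made-up examples, and your application has to actually serve those paths.

```yaml
# Fragment of a Deployment's pod template -- image and endpoints are hypothetical
containers:
- name: myapp
  image: quay.io/example/myapp:v1
  ports:
  - containerPort: 8080
  livenessProbe:            # "is the process still alive?" -- a failure restarts the container
    httpGet:
      path: /health/live
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5
  readinessProbe:           # "can it take traffic?" -- a failure removes the pod from the Service
    httpGet:
      path: /health/ready
      port: 8080
    periodSeconds: 5
```

Get these right and a rolling update only shifts traffic onto pods that report ready, which is exactly the magic being described here.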
Okay, so a quick introduction. This is the problem space we used to have inside enterprise IT. We used to think about this as old-school Java folks; any old-school Java people here in the room? Man, you guys are all Python people. Well, I'll tell you, so now you can make fun of us old-school Java people. Back in the day, we would have to build this big old EAR file. We would have a big old EAR file, which is just a zip, a tar, right? It's just a big old EAR file, and in there we would put multiple WAR files, web application archives, and we'd put multiple JAR files in there. We would even throw in our GIF files and our JPEGs and our PDFs and our movies and our music files in there, because we weren't very smart. But that's what we did, and we would take that 10-gigabyte EAR file and put it out on a machine which had WebSphere, WebLogic, or JBoss. We would basically FTP it over to that machine, then restart the app server and load it up, and five minutes later you'd have a server that's online serving requests. That was the world we used to live in.
That was the world we used to live in But because I developed on a windows machine and I had a Linux server over there It got a little bit weird and actually let's say we had a salarra server hpux server AI x server So Java was right once test everywhere for these kinds of issues because so the forward slash or backslash I'm on windows works great over here doesn't work on Linux or UNIX as an example or better yet The JDBC driver was a different version in that web sphere web logic versus the one I have on my local Tomcat I'm my local jboss right the jmsq's data source names all this configuration had to occur In order for our application to successfully run on the other side of that and so we would then send an email Some of you probably are still sending emails or how many people on the ops side of the house Normally get a lot ops folks right so you ops people receive this email Where the developers like hey, here's what works on my machine and you're thinking your little windows laptop It's not what we have in production. So you have Java 8. We don't have Java 8 production You have wild fly which is the jboss application server. We don't have that. We have web logic web sphere, right? You have this cool my sequel we don't have that we have Oracle we have red enterprise Linux in production So the email you send actually is not even that applicable now I know some of you evolved from the email to more sophisticated capabilities. 
Now you have a wiki page, and my point is, all of that is still out of date and wrong. Okay, so the magic, back in 2013, 2014, of this thing called the Dockerfile taught us a different way to identify our dependencies and our configuration. We could put it all in a single file, and therefore we could check that file into a source code repository. We have version control of it, and we could then have a consistent environment, both on your laptop as well as in production. Not identical, but at least you could get Linux and Linux, you could probably get the app server and the app server, you could get the basics kind of, sort of working the same, even if you're running on Windows over here and Linux over there. So that was a beautiful thing by itself, but this is where the magic really took off with our Linux containers. Then we had this other problem, though: we don't just run one container in production, we run many containers in production. So how do you deal with that? How do you deal with the fact that we want to scale to more than one? We want to scale where there are multiple port conflicts, where all these things run on 8080, if you're familiar with Tomcat or JBoss and servers like that. As a matter of fact, on my one laptop now, using Minikube, using Minishift, I can run 15 things on 8080, because Kubernetes deals with the port conflicts for me, as an example. So there are certain things you win immediately: you don't have to SSH in and docker run, docker run, docker run; it takes care of that for you. You don't have to deal with the fact that everybody runs on 8080; there's no problem with that either. You can run all of that across 8080 in the cluster. It also deals with the fact that if things fail, it restarts them. It keeps it all running, it keeps it all updated, it helps you manage those images. It basically helps you deal with running these things for real, in production.
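Before the orchestration story, here is a minimal sketch of the Dockerfile idea mentioned above: dependencies and configuration captured in one version-controlled file. The base image, JAR path, and settings are made-up examples standing in for whatever your build actually produces.

```dockerfile
# Base image pins the OS and runtime -- the same on a laptop as in production
FROM registry.access.redhat.com/ubi8/openjdk-11

# The application archive, built the same way everywhere
COPY target/my-app-runner.jar /deployments/app.jar

# Configuration lives in the file, versioned in source control alongside the code
ENV JAVA_OPTS="-Xmx256m"
EXPOSE 8080
CMD ["java", "-jar", "/deployments/app.jar"]
```

A `docker build` of this file gives the same image on Windows, Mac, or Linux, which is the consistency the email and the wiki page never gave you.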
Okay, so Kubernetes is that piece of magic. It came out, as I mentioned, in 2014 as open source. We also productized it and made it available to the enterprise in 2015, and now we've been seeing numerous customers running tens of thousands of applications on Kubernetes on premise, or on OpenShift, even in the public cloud. So we're seeing a lot of success with this, and it's nice for someone like myself, because we got in early on this technology. Sometimes you get in early on a technology and it doesn't play out well. This is a big win for us, because we got in early, and it turns out the world has decided that the way we want to build software going forward is based on a Kubernetes-based paradigm. Not Cloud Foundry, not Mesosphere, not Docker Swarm; Kubernetes has won, both in the public cloud and the private cloud. Kubernetes is a Greek term, like many things in this world; it means helmsman, or governor, and the whole goal of Kubernetes is to manage a fleet of those Docker containers. Think of it as a bunch of ships in the water, and we've got to manage all of them. So it's a container orchestrator. It works across multiple clouds, pretty much any cloud; it runs everywhere at this point. And it's inspired by Google's experience with containers. If you actually go back in time, back to 2014, 2015, when Google was bringing this technology to market, they would introduce themselves and this technology as something inspired by the last ten years of working within Google data centers. As a matter of fact, they would say, "we launch two billion containers a week." And so we used to say that too, back in 2015 with OpenShift: hey, customer, are you interested in this highly scalable architecture? Google launches two billion containers a week!
And of course the customer in that case was like, "well, I launch about 20." So it was a really, incredibly scalable architecture, open source, written in Go. And of course you use it to manage applications, not machines. It abstracts the machine away, and you basically now have these things called a deployment, a pod, a replica set to deal with. I think we may have just lost power for a second. Are we good? Good thing we have batteries. Okay, someone was playing with the lights. So we have self-healing, we have horizontal manual and auto-scaling, automatic restarting, scheduling across multiple hosts in the cluster, a built-in load balancer, rolling updates. That's what really got people excited, because in the old days we used to have to buy this thing called a BIG-IP from F5, and it cost a lot of money. We'd have to hire special consultants to learn how to configure it, and if our topology changed at all, we'd have to reconfigure it again, get the consultant back. And now you get all that for free here in this architecture: the rolling updates, the service load balancing, all of that is part of the architecture, and you don't have to worry about it anymore. It's pretty amazing. Now, we're going to deal with this thing called a pod. You're going to hear "pod" a lot. The pod is your server running. So, your Python running on Flask, which I would not recommend; your Python running on Gunicorn, right?
Your Node.js, your Quarkus app, your Spring Boot app, your WebSphere, your WebLogic, your COBOL, your Fortran, whatever it is that you have to run, that runs in a pod. All right, so that's the term we use: pod. A pod is a group of whales, a family of whales, and if you notice the Docker logo, it's a whale. So it's not the Invasion of the Body Snatchers reference that people like to think of. But some people do refer to this as multiple peas in a pod, a pea pod, right, because you can have multiple containers in a pod, and that's a key thing to understand. So if you look at your pods, they can have more than one container. Now, be smart about this: just because you can put more than one thing in there, that doesn't mean you should. I've had to deal with some people who are like, "oh, my whole application is this Tomcat instance with this MySQL; since I can put both of them in a pod, I'm going to put them both in a pod." That's a really bad idea. Not only do they share the same IP address, and you might have port conflicts because they are in the same namespace, the same IP address space, but they have the same shared storage, which is ephemeral, not even persistent; basically, if you restart that pod, the storage goes away. They have the same shared resources, and more importantly, they share the same lifecycle. Do you want to restart your database every time you restart your app server? Almost never, right? So you want to separate the different components, your different containers, into different pods, just like you would if they were virtual machines, just like you would if they were real hardware. And you have a network in between; we called this client-server back in the day. So you still have that same architectural pattern, right?
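That one-container-per-pod advice looks like this in YAML; the names and image here are hypothetical stand-ins, not anything from the demo repo.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-appserver           # hypothetical name
  labels:
    app: my-appserver
spec:
  containers:
  - name: appserver            # exactly one container in this pod
    image: quay.io/example/appserver:v1
    ports:
    - containerPort: 8080
# The database would be its own pod (normally managed by its own deployment),
# reached over the network, so the two lifecycles stay independent.
```

Restarting this pod never touches the database, which is the whole point of keeping them separate.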
My app server is separate from my database server, which might be separate from some other microservice app server and some other component within our architecture. So you're always going to have multiple pods running, but you can put more than one container in. In this early introduction we're going to see today, we're only going to see one-for-one: one container, one pod. But as we get to Istio, you're going to see two-for-two, because there's a sidecar container. When we get to Knative, you're going to see three-for-three, because there are two sidecar containers. So there are very clever things you can do by adding additional containers to a pod, but in the normal state, just do one-for-one, right? One container, one pod. Now, we have this concept of the replica set and the deployment. The deployment, think of it as your template. That is the document you'll create that defines how you want your application to run: I want this container image, right, this Docker image that we did a docker build to create, I want this image to run.
I want it to run two times, I want it to have this liveness probe and readiness probe, and these constraints, and these labels. You identify that document as a YAML file, you can check it into a source code repository, and then you can throw it at the cluster when you're ready to run it. So that deployment is incredibly important to understand, and the deployment is the actual artifact you deal with. Actually, you're not supposed to deal directly with pods. You can, and you'll see numerous examples of that on the internet, but by default, the right way to do this is to deal with the deployment, which creates the replica set, which creates the pods, because the deployment is the declarative state of what you want the universe to be. Then you're also going to see these things called services, and this is another core piece of magic that makes everything in Kubernetes so much more scalable, so much more awesome. The service is the stable IP address and the DNS entry for your thing: your customer service running as a Tomcat-based application, your order service running as a Node.js-based application, your pricing service running as a Python application. Those names, pricing, order, customer, are associated with the service, not the pod. Therefore the pods can be swapped out from under it at will, and that is huge. This is why you get the rolling updates, this is why you get the zero-downtime blue-green deployment, this is why you get all the magic that is Kubernetes.
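A deployment-plus-service pair of the kind just described might look like this; the `customer` name and the image are made-up stand-ins for your own application.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customer
spec:
  replicas: 2                  # "I want it to run two times"
  selector:
    matchLabels:
      app: customer
  template:
    metadata:
      labels:
        app: customer
        version: v1            # swap to v2 later to drive a rolling update
    spec:
      containers:
      - name: customer
        image: quay.io/example/customer:v1   # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: customer               # the stable DNS name other pods call
spec:
  selector:
    app: customer              # any pod carrying this label receives traffic
  ports:
  - port: 8080
    targetPort: 8080
```

Because callers resolve `customer` (the service) and never a pod IP, v1 pods can be replaced by v2 pods underneath it without anyone noticing.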
This is why you get all the magic that is Kubernetes So we'll show you an example of that and it's documented again in our documentation You also have this concept of the persistent volume So what happens if you need to put something on disk and keep it like a database as an example or a Kafka broker Or something of that nature use the PV the persistent volume And so therefore you as a developer declare your need for persistence You basically have a PVC persistive volume claim and then underneath the infrastructure will provision the appropriate storage that Is represented on that cloud provider like I would do it on Amazon You'll see a new GP to storage show up as an example So you basically say what kind of storage you want and then the storage gets provisioned for you underneath cluster now If you're a cluster administrator you do have to figure that out But as a developer you can ignore it you simply say you need persistence and then labels are key elements of this architecture You have to get your head around labels We'll spend a little bit more time on it because this is how all the magic happens in this universe The key value pair and that's all it is. This is a equal be Burr equal Sutter, you know Bangalore equal awesome It's just a simple name value pair, but this is how all everything works in this architecture Okay, so you have the Kubernetes cluster that we mentioned earlier You have dev and ops both of who use the various command line tools or user interfaces graphical tools to interact with the API And then of course API interacts with the database known as that CD and you have your schedules or your controllers Notice there's always three of these because you need a quorum for it CD You want high availability should anything fail. 
You don't want to lose your cluster, so you have high availability: always three. And then you have one to N worker nodes. You can have one, and I would never recommend one, but you would probably at least have two, and you can have as many as you want. Some people really run a thousand of these things, literally a thousand worker nodes, as an example, and you can put as many cores and as much memory underneath those worker nodes as you want. Therefore you now might have a thousand cores available to work with, you might have a few terabytes of memory to work with, and that's where your cluster resources are. The cool thing about that thousand cores and two terabytes of RAM is that it's now available to all the pods, and that's the whole point: the cluster now looks like one big old computer, if you will, from the point of view of those pods. They're just going to be scheduled wherever they can fit, and the scheduler is clever. If it sees one of these nodes is overloaded, it won't add more load to it. It'll basically say, "hey, do you have any more space available?" and the kubelet says, "no, no more space available here," so it goes someplace else. It tries to be clever about that, and there are a lot of different rules and algorithms related to the scheduling, but this is the intro presentation, so we're not going to worry about that; just assume it's magic for now. You have this concept of labels, though, that I mentioned earlier. You can see we have app=cool, env=prod, version=1. But look at it from this perspective: app=cool applies to all those WildFly pods in the entire cluster, so we have four pods that match the label app=cool. Four pods. So you can actually say kubectl get pods -l app=cool, and you're going to get four responses back. Okay, and then you can look here.
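In YAML terms, those labels are just metadata on each pod. This fragment mirrors the slide's made-up scheme, with the matching selector queries shown as comments.

```yaml
# Label metadata on a pod -- the values mirror the slide's example scheme
metadata:
  labels:
    app: cool          # kubectl get pods -l app=cool           -> all four pods
    env: prod          # kubectl get pods -l app=cool,env=prod  -> just the two prod pods
    version: "1"       # kubectl get pods -l version=1          -> the version-one pods only
```

The selectors combine with commas as a logical AND, which is how one flat set of pods gets sliced into prod versus dev, v1 versus v2.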
Well, if it's app=cool and env=dev, then I get two responses back. So that's my dev environment, but notice it's still version one and version two. And those labels, by the way, are just arbitrary strings you create, so don't assume this really means production, or really means development, or really means one or two. It's whatever you decide. But that's the way for you to segregate these workloads across the cluster. Here's env=prod. You just have to be aware of this, because you often will segregate your application workloads with different labels. Often you'll see me use a little version one, version two, version three, just for fun, because then I can do my blue-green deployment testing or my canary deployment testing. You'll see that when we do the Istio presentation, and other things, as an example. And you can see, here's version one versus version two. So it's easy to get these labels wrong, and I'll try to show you an example of how to have fun with them. Okay, we won't go into great detail here, but it's really straightforward. You're going to use the API server to create a deployment, to create your object: here's my document. It's going to write it to etcd, and of course it's going to let you know: yep, we got your declaration, we know what you want. And then of course the scheduler is going to talk to the kubelets, and it's going to do that docker run on your behalf. So you don't have to do a docker run anymore; it takes care of it for you, and that's kind of really the whole point of this. As a matter of fact, it's not even a docker run any longer, depending on which version of Kubernetes you have; it's another way to start a container. So keep that in mind. These are the key commands you're going to be dealing with: kubectl get namespaces, kubectl get pods. You can do a kubectl run; I'll show you that, though.
It's now deprecated. You can expose the deployment; in other words, you can do this to create the pod, and then you do this to expose it, to create the service, and now you have a service endpoint to talk to the pod, as an example. And then of course you can scale the deployment, and you can change the image out from underneath the deployment, so now I can go from version one to version two, and I'll do the rolling update with just that one command. But these are the declarative, sorry, the imperative ways to use the command-line tool, and often they're not recommended, but it's a good way to get started, a good way to just see if you can make anything run. So that's why we show you these things up front, and then we'll show you the more advanced ways. So that's all good. So let's dive right into the nine steps, and we'll show you some of these things. In this presentation and in the documentation, we actually tell you to use Minikube or Minishift, and here's why. You can certainly spin up a cluster on Amazon or Google or IBM, or wherever you have your cloud provider. It could be that inside your data center someone has given you access to spin up clusters. But it's also awesome to be able to set one up on your laptop. Now, I'll tell you right up front, if you only have an eight-gigabyte-of-RAM, two-core machine, yeah, that might not cut it. Okay, so if you have a laptop that's that old or that underpowered, you know, that's the kind of laptop you'd give to a child. You want to get a real laptop, okay? A laptop with 16 gigabytes of RAM, a laptop with four cores, and then you can do some interesting stuff. It should be noted you can make Minikube or Minishift, at least Minikube, which is smaller than Minishift, run on
you know, an eight-gigabyte, two-core machine, but it's going to be sluggish and you're not going to be able to do much with it. As a matter of fact, you're going to want to deploy that Spring Boot application into it, and Spring Boot takes so much memory that you're not going to be able to run many pods inside it. And you certainly can't deploy Istio, which runs as another set of pods, or Knative, which runs as another set of pods and also needs more cores, and make it all happen. So if you have a 16-gig, four-core machine, you can dedicate two of those cores to the VM, you can dedicate eight gigabytes of RAM to the VM, and now you've got some stuff there, right? You can have some fun. So just bear that in mind. But I do love Minikube and Minishift, and they're very, very powerful. Again, you can also find a hosted solution for this as well, and there are many, many options, right? You can set up your own raw VMs at your favorite cloud provider and use kubeadm and just configure your own cluster wholly manually. There are lots of ways to get one at this point in time. So let's... (audience: "You're talking about Docker?") Yes, so here you can see, the point was, you can see Docker Desktop, which I do have running here. It also has a Kubernetes inside it as well. I never spend any time on this one, though; this is one I don't use, so I don't really know if it works or doesn't work. And I always use Minikube, because it is the purer way of doing things. So I actually have Minikube here. So, minikube: let's see what version I have. And you can see it says minikube version 1.1, even though I put it in a directory called 120. I've got to figure that out.
I must have set the path wrong, but it should work. And then I have a Minikube profile, and I use profiles a lot; in this case it's set for the next presentation, so it should be knative. And you'll do things like minikube ip, so what is the IP address of that, and you'll do minikube ssh, and you can SSH into the VM and do things like ps -ef, right? It should be noted that when you're in the virtual machine, which in this case is VirtualBox, right here, this is the guy that's running; I did an SSH into that virtual machine right now, and now I can actually ask the virtual machine what it knows. So it actually has Kubernetes components running, right? Let's see here: you've got that, you've got that. And then, once you start launching pods, you'll see those processes here too. So that's the important thing to get your head around: a pod, which can be multi-container, but its containers are nothing more than Linux processes that you can see at the host level. So you can get into a single node and look around, and you can kill -9 things if you want, right? But you shouldn't. You shouldn't do it that way, but you can do things like ps, you can check how much memory I have, let me make sure I'm not running out of RAM, let me make sure I'm not running out of disk in here, you know, see what's performing. And you can see what I've done here: it looks like I've given this thing four CPUs, and, because I have enough CPUs, I've given it, what, 18 gigabytes of memory? I don't remember doing that, but I've given it a lot of memory. And then you have access to that virtual machine. So it is a virtual machine running a little mini cloud on your laptop.
That's what Minikube and Minishift are doing for you. So in our documentation, as I mentioned, we talk about setup and installation, getting started. You can see we say you do need Docker for Mac or Docker for Windows at a minimum, to get docker build; you do need to use the docker build tool to get an image created. That's something Red Hat is actually working on: we're working on a thing called Podman to give you a separate tool for doing image builds, but it's not really available for Mac and Windows; it's really just for Linux at this point. A bash shell is what we always show all our documents in, so for those of you on Windows, I recognize you might be using cmd.exe or PowerShell. We just have not had the time to document those shell commands, so you'll have to translate bash to whatever your shell is, or get a bash shell for Windows, of which there are several available now. So we always document everything in bash. We always use git, right? You need to be able to git clone things; you'll git clone this repository to your local hard drive so you have access to all this. We also love Homebrew for Mac, so you'll install this thing called Stern. Stern is a great logging tool for Kubernetes, so you get additional logging capability. We also tell you to install this thing called kubectx, which allows you to change your context from one cluster to another more easily: I'm going to point at cluster A versus cluster B versus cluster C. It also has a tool in there called kubens, for pointing at a different namespace within a cluster: I want to switch to namespace A, namespace B, namespace C. It's really nice. And of course Minikube and Minishift, and you will have to have your command-line tool: kubectl, "kube control", "kube cuddle", lots of different names for this thing. So some people call it kube control.
Some people call it kube-C-T-L, some people call it kube cuddle, like cuddle, you know, we're going to cuddle each other. "Kube control" is the proper name for it, but everybody kind of messes that up, so you'll hear it different ways. And there's also oc: if you're dealing with Minishift, you'll use the oc command-line tool, at least for login, because by default, one difference between OpenShift and Kubernetes is that OpenShift is secure by default, and therefore you have to log into it, right? And therefore you need an oc login command. We at Red Hat, by the way, are the second largest contributor to Kubernetes overall, right behind Google themselves, and we also work on these command-line tools; kubectl is a tool that we have engineers on, as an example. So it's funny that we have the two different command-line tools available, but they work almost identically; you can interchange them in most cases. Here's how you get the kubectl client tool, different ways to install it: you can brew install it, or you can curl it and get it downloaded. Get a Minikube cluster, get a Minishift cluster; it's all here. If we were doing the three-hour version of this, people would be doing it along with me, but we don't have time for that now. Get your environment variables set up correctly, so you have the right Minikube. You can do a minikube version; I just showed you the SSH. But look right here: this is how you get the profile configured, and the right memory and CPUs dedicated to it. So two cores and six gigs of RAM for this elementary stuff gives you plenty of headroom to run a number of pods. And then of course you would set your VM driver; see, I set it to --vm-driver virtualbox. I'm only using VirtualBox because it's available on every platform: it's available on Windows, it's available on Linux, it's available on Mac.
So that's what I like to use. A lot of people love HyperKit on Mac — they swear by it — and a lot of other people love VMware because that's what they have; you might have Hyper-V on your Windows machine. You just have to look at the various options that make sense for your platform, but I use VirtualBox everywhere. You can also specify which Kubernetes version you really want to use. You'll notice I set it to 1.12 here, because that is the most common Kubernetes available in production environments around the globe. In other words, that's what you'll find at Amazon, you'll find at Google, you'll find with Red Hat — you'll find it based on your different vendors. Yes, there is a Kubernetes 1.14, and there might be a 1.15 — I haven't looked in a couple of weeks — but no one actually supports the latest and greatest. Nobody. So you have to decide which version you want to be on based on what you have from your vendor. If you're an Amazon person, it's 1.12 or 1.13, as an example; if you're on Google, I think it's still 1.12 or 1.13. No one really has 1.14 at this point. Okay, so you want to set that. And then of course, if you're on minishift, you want to enable the admin-user addon — you want that admin user capability. And then you can see it says start, so that's how you start it up, and that's what I've done: I have it running. And here's my VirtualBox, and you can see I can run minikube and minishift simultaneously, side by side. I've done that on many occasions; not a problem there. The only thing you'll want to do is switch your KUBECONFIG environment variable, so you can have multiple clusters running simultaneously on one machine — you don't really want the kubeconfigs to overlap each other. So you'll actually see that in my setup here. Let's see here... okay, let me open up — this is the directory you'll clone, nine steps
awesome — 9stepsawesome. You'll clone it, and you'll see that I have this thing here... often not set... context... where is it... start-minikube. So there's the start command we just saw, which I have here as a shell script. Oh, it looks like I don't have it set here. In a lot of my other examples you'll see that I actually have a thing I put out there — let's look at the Knative one, which we'll talk about in the next section. Let's see here... you'll see this setenv script. This is a very common thing you'll see in a lot of my directories. So you should be aware of how to configure your PATH and, more importantly, how to configure the KUBECONFIG path, so you can keep your worlds separate. I keep the kubeconfig over here for this profile, a kubeconfig over there for that profile, a kubeconfig for this cluster, for that cluster only — because I like keeping the worlds separate. And that's a file, by the way, that you can blow away, and it'll get recreated. Often the system will get confused — it can't find the cluster you're talking about — because your kubeconfigs have gotten confused. So this is a fairly advanced technique, but be aware that it is very nice. Well — maybe I should say, if you're new to this, maybe you shouldn't set it; just let it go to the default location on your hard drive. That's documented, by the way, in our materials. Also, the KUBE_EDITOR trick is very, very powerful; I use that a lot. So right now you can see it's mapped to VS Code right there. And so if I go back to my tool: echo KUBE_EDITOR — you can see I have it set here, but... dude, I don't have it set here. All right, so it's not set here. So what this allows you to do — let's see: kubectl get deployments --all-namespaces.
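Backing up for a second, the kubeconfig-separation idea being described can be sketched in one line — the file path here is purely illustrative:

```shell
# Give each cluster/profile its own kubeconfig file so their
# contexts never overwrite each other.
export KUBECONFIG="$HOME/.kube/knative-minikube.config"
echo "$KUBECONFIG"
# If things get confused or stale, you can delete this file;
# minikube will recreate it on the next start.
```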
I'm just looking for something that's in this cluster, and I don't have much here — I have Knative installed and Istio installed. But let's just play with something: kubectl edit deployment, and then we're going to go with autoscaler... in knative-serving, so that's the namespace when I do the edit here — knative-serving. There we go. You notice it puts me in a vi-like tool. That's what happens if you don't have that environment variable set, and so you have to know how to escape vi, which is incredibly hard, right? It's one of the most common questions on Stack Overflow: how do you save and exit vi? But if you do this same command over here, with that environment variable set, it throws you into Visual Studio Code, and now you can edit it with an editor you might be more comfortable with. And for those of you on Windows, you might want to put it in Notepad — I'm not sure that would work for Notepad, I've never tried it, but you could. And then you just have to save it and close it — don't forget to close it — and you'll notice the command is blocked until you actually close it. Okay, and then it persists the document back to the API server and back into etcd. So what I just showed you is a way to live-edit in-memory documents, in-memory declarations. You do that for experimentation's sake, but you don't really do that in a real production environment, obviously — you would update the actual file on disk instead. All right, so these are all the kinds of tools you'd have. You can see different options here: get pods --all-namespaces. Do remember this --all-namespaces trick, because you will have things running that you don't know are running. In this case, since I've installed Istio and Knative into this minikube — by default you don't have these, okay?
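The editor trick just demonstrated comes down to one environment variable — a minimal sketch, assuming VS Code's `code` command is on your PATH:

```shell
# Make `kubectl edit` open Visual Studio Code instead of vi.
# The -w flag makes `code` wait (block) until you close the file,
# which is when kubectl sends the change back to the API server.
export KUBE_EDITOR="code -w"
echo "$KUBE_EDITOR"
```

With that set, `kubectl edit deployment autoscaler -n knative-serving` opens in VS Code; unset the variable and you're back in vi.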
I'm also prepared for the next presentation — you can see there's a ton of pods using memory and CPU on this virtual machine already. That's why I have to give it so much memory and so much CPU: there's a ton of things running inside that system already. In a regular minikube you'd only see a handful of things running. But it should be noted that Kubernetes deploys itself as Kubernetes: in other words, all the things that run the platform are themselves running as pods and as deployments within Kubernetes. That's something you have to get your head around a little bit — like Inception — the fact that all of this is in here. So just be aware of that. I showed you the SSH command already. And if you're on minishift, you will need to log in; the default user and password is admin/admin. And then you can use these tools like kubens and kubectx to switch contexts and switch namespaces. So let me show you this. Let's go here, run this command — let me see what namespace I'm in. I don't have a default namespace, meaning one that is sticky. Let's make it sticky, and let's just use default here: kubens. All right, so it's now highlighted in yellow. That's why I like that tool: if you don't use kubens, you have to specify the namespace everywhere you go. This makes it sticky — the namespace I want to work within. Let me run this command now. So you can see it's kubectl — that's the tool you're always using — run. hello-minikube is the name of the deployment; it's going to create it and pull this image from the Google registry. gcr.io is the Google registry. You might see docker.io, which is Docker Hub; you might see the Red Hat registry here; you might see some other registry. quay.io is also very popular now.
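I can't see the slide, but the classic hello-minikube example uses Google's echoserver image (an nginx-based echo server listening on 8080), so the run command being demonstrated is presumably along these lines:

```shell
# One imperative command that creates a deployment, which creates
# a replica set, which creates a pod
kubectl run hello-minikube \
  --image=gcr.io/google_containers/echoserver:1.4 \
  --port=8080

kubectl get pods   # watch it go from ContainerCreating to Running
```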
You'll see tons of that. And you'll notice that when I hit return here and run it, it does say "this command is deprecated and may be removed." Just be aware of that, because you're not really supposed to use `kubectl run` this way anymore. But look what happens. Okay: kubectl get pods — there's now a hello-minikube pod in ContainerCreating status. In other words, it's bringing that image up and making it run; in other words, it's doing its `docker run`. Okay, if I do kubectl get all, there's actually a bunch of things here. Some of these you can ignore because they're part of Knative, which again is a more advanced concept. But there's the pod — you can see that it's now 1/1: one container in that pod is ready. If you see 0/1, you're not ready; 1/1 is critical, and you want to see it Running here. You have a service — this is the standard service you see in a minikube; it has nothing to do with what we're dealing with right now. And you have a deployment, and you have a replica set. So that one run command created my deployment, which created my replica set, which created my pod. Okay, so know that those three things were created by that one run command. And if you want to tear it down, you have to clean those things up appropriately. Okay, so with that run command we now have a deployment and we have a pod, and we can even do things like this. We can say kubectl get pods again — all right, there's my pod. I can say kubectl exec -it into that pod, bin/bash, so I can essentially SSH into the pod. And let's see here: curl localhost — I forget what port it's running on; did they say 8080? Yeah. So that's an nginx running inside that pod — that's all that's in there, and it's running on 8080. You can see I can interact with it, but I can't interact with it from the outside, because all I have right now is the replica set — kubectl get rs —
I have a deployment, and I have a pod; those are the three things that were created. So if I want to interact with it for real, I have to create a service. So here, let me create a service. On minikube or minishift you often use this thing called a NodePort, so we'll basically expose it with a NodePort. If I now say kubectl get services, there's a service: hello-minikube, type NodePort. And look at this number right here — that's the magic number. That is the external port mapped to the internal port: 8080 is the internal one, but what's visible to the outside world is this thing, 32731. And I can say minikube -p knative ip — knative is what I call the profile — and the IP is 192.168.99.101. So I can say curl 192.168.99.101:32731, and now I'm interacting with that application server, that nginx. Okay — and that's from the Mac, the host (or the Windows machine), into the VM, into that specific pod that's running. You just have to remember these techniques, because it can get confusing: people can't talk to the thing they just launched because they forgot to create the service, or they forgot to make it a NodePort service. Because if you just use a regular LoadBalancer, then you've got to go configure a load balancer — and that's very doable; you can configure the load balancer there. Let's see if this screen comes back online... it's thinking about it... there we go. And you would do that if you're dealing with AKS, IKS, GKE, EKS — anybody who gives you a cluster has their own way of doing what's called ingress: a load balancer, that inbound path. You know, how do you get from the outside world to the inside world? So just bear in mind that that is unique per cluster. Okay, OpenShift does it a certain way using HAProxy — we call that a Route. All you do is simply say expose a route.
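The poke-around and expose steps just walked through can be sketched in one place — the pod suffix and the NodePort (32731 here) are assigned by the cluster, so yours will differ:

```shell
# "SSH" into the pod and prove the server answers internally
kubectl exec -it hello-minikube-<pod-suffix> -- /bin/bash   # suffix from `kubectl get pods`
#   inside the pod:  curl localhost:8080

# Expose it to the outside world with a NodePort service
kubectl expose deployment hello-minikube --type=NodePort --port=8080
kubectl get services   # note the mapping, e.g. 8080:32731/TCP

# Talk to it from the host, via the VM's IP and the external port
curl $(minikube -p knative ip):32731
```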
You now have a routable endpoint. On minikube you would use a NodePort, like I'm using here, or you would add ingress gateway capability to it — you'll see some of that later. Or in the case of GKE or EKS, they'll have their own solution for that. So you just have to keep that in mind, right? The way you get an external load balancer or route can be tricky. You can also use something like this — it'll tell you the service URL as well — and OpenShift and minishift have something similar. And then you can describe your deployment. Let's do that: kubectl describe deployment hello-minikube. So you can see a description. It's important to know this command. So you have kubectl apply, create, replace, run — these are commands to create things and get things running; replace is to update, delete to remove. Okay, so you have your typical create/read/update/delete commands. But you also have describe — "tell me about that thing" — which is very powerful. This is essentially pulling the document out of etcd, the description of that thing; in other words, it's pulled the YAML out of the database. Okay, you can look at it here; you can basically look at it in various ways — JSON form, YAML form; this is what describe shows you. You can kind of see it's set up for a rolling update, and here's the image that we're going to be using — where's that image declaration? Yeah, there's the image, and the port that is exposed, 8080. You'll also see, depending on whether some errors occurred, error messages here — say it couldn't deploy the image, something like that. So you might see that also. But that describe command is very, very powerful. And then of course there's the edit command that I showed you earlier — you can actually edit the document. And so now I want two replicas: hit save, close, watch kubectl get pods, and you'll see a second one come online. So that declarative capability is very, very powerful.
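The inspect-and-scale moves just described, gathered into one sketch; `kubectl scale` is the non-interactive alternative to hand-editing `replicas` in the document:

```shell
# Read back what the cluster knows about the deployment
kubectl describe deployment hello-minikube
kubectl get deployment hello-minikube -o yaml   # the raw document, as stored in etcd
kubectl get deployment hello-minikube -o json

# Declare a new desired replica count without opening an editor
kubectl scale deployment hello-minikube --replicas=3
kubectl get pods   # watch pods appear (or terminate) to match the declaration
kubectl scale deployment hello-minikube --replicas=1
```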
So I could do the same thing over here: kubectl edit deployment hello-minikube — and now I'm in vi mode. So let's look for replicas again. There it is. And then let me go talk to it — oh yeah, there we are, we found it. Here we go... whoop, I didn't want to do that. All right, there — and we want three; save, :wq!, and then you'll see a third one come to life up top there. So that declarative concept is really what makes Kubernetes so awesome. You say "I want three of those," or "now I want to change the image information," and it'll just keep trying to make the world look the way you've asked it to look. Okay, so I can come in here now and change that again — replicas — and let's go here... I keep typing the wrong command... all right, there we go: one. Let's make it one, and then you'll see it take two of those away, because I said I don't want three of those anymore. So you see they're terminating, and it does take a little while to clean up a pod, because it is a real, living, breathing thing — it's trying to shut it down gracefully at this point. So just keep that in mind. Very powerful stuff, and that's all here in step one. Okay, so let's talk about the next step — the next most important step besides installation and getting started — and that is how to build an image. The way you build an image is also very important. So some key tips to take away from this session, beyond what we're showing you in the tutorial part of it: your base image. You need to be very specific about where you get that thing from. In other words: where did it come from? Who created it? What did they stick in there?
Did they fix all the critical vulnerabilities or not? You want to make sure your base images are coming from a known vendor you trust — somebody who's helping you create this thing correctly — and I'm not just saying that because Red Hat provides these things. Most people just pull these from Docker Hub, and Docker Hub has numerous base images with all kinds of critical vulnerabilities in them. You also don't know what back doors someone might have put in there, because you've got to keep in mind: when you load that base image, it's going to run whatever they want to run in that Linux — the stuff in the path, the stuff that boots automatically. So if they want to put a Bitcoin miner in there and have you run it, they probably could do that. As a matter of fact, for those of you who actually manage infrastructure at scale: have you had to chase down a few Bitcoin miners on your infrastructure and shut them down? I see you nodding your heads. This is what happens to people running real operations at scale: you actually have people Bitcoin-mining on your infrastructure, and you're paying for it. I mean, it happens for real — we have people at Red Hat whose job is chasing down Bitcoin miners on our infrastructure. Okay, so you have to figure out your base images: know where they're coming from, what registry they're coming from. You noticed I said docker.io, quay.io, gcr.io — there's also the Container Catalog at Red Hat; those are the pristine images that we ensure are clean and happy. And actually, now that we're here, let me bring that up. Let's see if we can get the Red Hat Container Catalog, because this is a question I've gotten a lot recently. So let's go ahead and open that up, because someone was asking, "Well, how do we know the image that comes from Red Hat is any good?"
I'm like, okay — that's a fair question. So let me go to Java here, since I'm a Java person, and you can see right there: our base Java image was updated two days ago. It has a health rating of A, and then you can get more information about what was changed in it, things like that. So we actually are trying to publish real, good data about the most recent build of that image, things of that nature. And that is the one we recommend for people who need an OpenJDK base image, as an example. Okay, so just be aware of that. Also, you then have to craft your Dockerfile and do your docker build. And then of course, if you are on a remote cluster — so if you're using GKE, AKS, EKS, a remote OpenShift, meaning one not on your laptop — you do have to do a docker push in addition to a docker build. Right: you docker build to get the image, and then you docker push to get the image where it needs to be, so that Kubernetes can pull it. Okay — if you're on minikube or minishift, there's a trick to it, and I'll show you what that trick is. But once you've got your Docker image created, you kubectl apply your deployment, you kubectl apply your service. You could put all of that in one file — I never do; I like having them separate because I like tweaking them separately — but you could put them in one file. And put those in source control, and then you're good to go. So it's that four-step process: figure out what your image is going to be, get your Dockerfile, docker build, apply, apply. All right, so just think of those basic steps. So let's see if we can walk you through that. All right, let's go over here. And by the way, this again is all documented in step two. I'm going to show you my version of it right now, but the docs do walk you through these basic concepts.
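That four-step workflow, sketched for a remote cluster (the registry, org, and file names here are placeholder assumptions; on minikube/minishift the push step drops away, as shown shortly):

```shell
# 1. Start from a trusted base image in your Dockerfile, then build
docker build -t myorg/myboot:v1 .

# 2. Tag and push so the remote cluster can pull the image
docker tag myorg/myboot:v1 quay.io/myorg/myboot:v1
docker push quay.io/myorg/myboot:v1

# 3 & 4. Apply the deployment, apply the service
kubectl apply -f kubefiles/myboot-deployment.yml
kubectl apply -f kubefiles/myboot-service.yml
```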
The first thing you'll notice is that it says create a namespace. Okay, so I'm going to create a namespace — kubectl create namespace, and there are different ways to create a namespace. I'm going to call this myspace, and I'm going to say kubens myspace so it's sticky. Now myspace is the active one, and I don't have to specify the namespace any longer. I'm in the Spring Boot directory — let me look at the code for that. Let's bring up that editor. You can kind of see this Java code is basically very straightforward: it grabs the HOSTNAME environment variable — this becomes important later, like, what is this computer that I'm running on? It has a simple counter that lets me know if the JVM has been started or stopped again, because this is just a stateful variable. By the way, when people say, "Oh, you have to be fully stateless" — you do not. You can be stateful as well; you just have to know what you're doing with that state. That's actually one of the big wins of Kubernetes: you can do stateful applications in addition to stateless applications. And I can see my greeting — let's actually change this to "namaste." Okay, and then you can see it says "from Spring Boot" right there. And then there are some other request mappings — "tell me about your resources," "actually consume all your resources" — just be aware of that. So there's my code. So as a Java programmer, I'm going to say mvn clean compile package. That's what us Java programmers do all day: we go get coffee while this build is occurring, right? I make that point because if you use Quarkus, you don't have time to go get the coffee — Quarkus is too fast. But let me go here: I can say java -jar... dun-dun-dun... target, and then the boot-demo jar. So that basically runs the fat jar. So there — it came up in 2.6 seconds. curl localhost:8080 — let's see how that works.
All right, it says "namaste from Spring Boot." Notice the counter there — one, two, three, it goes up — and it says "unknown" because it doesn't know what the hostname is right now, but that's okay. That's our code, so we think our Java code is ready, right? So that's what the developer does. Now they've got to figure out, well, how do we build the image? So I'm going to use this Dockerfile — which actually is not a good one, but we're trying to make a point here. So it's pulling from Docker Hub: if you don't see a registry name up there, it's going to Docker Hub by default — unless you're on Red Hat Enterprise Linux 8, in which case it defaults to our registry before it goes to Docker Hub. Okay, so you'll have to know what the paths are there; know where it's coming from. But by default, Docker Hub is the default. There's an environment variable there to name the jar file — the fat jar — and it's going to copy it to the right directory, and it's going to do a java -jar. In other words, what you just saw me do on the command line, the Dockerfile is doing too. Okay, so we're trying to make this super simple. So you're going to have to build that image: docker build -t 9stepsawesome — let's make sure we spell this correctly; I've often misspelled it because it's so complicated — /myboot:v1, and dot. So we're doing our docker build there. Now, notice it did the docker build really fast — that's because I've done a build before. All right, I'm cheating here; there's nothing being downloaded at this point in time. But watch this; this is very important to get your head around. DOCKER_HOST — see that environment variable? It's set to the same virtual-machine IP address that the whole minikube is sitting on right now, and the port number for the Docker daemon, 2376, is also configured. This means the docker build I just used is actually talking to the Docker daemon in minikube, not the Docker daemon running in Docker for Mac.
So this is an important tip to get your head around. In other words, it's not talking to this guy; it's talking to minikube here. So to make that point, if I do a docker ps — look at all this stuff that's running. Okay, and actually, let's do this... dun-dun... DOCKER_HOST — yeah, so that's set. So: unset DOCKER_HOST, and now if I do a docker ps here... okay, well, it's actually still trying to talk to — I know what the issue is: there are a couple of other environment variables there. So: minikube -p knative docker-env — let's see what other environment variables I had. Okay, so there's DOCKER_HOST and the cert path; I think I need to unset all of those to get it to go back. So unset that, and then let's unset this one too. But this is the trick — I'm showing you the way to undo it if you need to. So: docker ps. Yeah — so you'll notice the docker images here are different from my docker images over here. Okay, so this one, if I grep it, has 9steps — that's the one we just created — but this docker images also has Istio in here, it has Knative in here. Let's see, this one — grep knative — no Knative, no Istio. Because there are two different Docker daemons
that I'm talking to. This is kind of hard to get your head around, but it's very important that you figure it out: I'm working with the Docker daemon inside minikube/minishift on my local machine. Which is fine, because it's okay for that port to be open on your local machine; it's not okay to open it up across the internet. You don't want people doing docker build across your cluster on the internet, or docker run on your cluster across the internet, so any operations team is going to close that port down. That's why it never works against a remote cluster from your machine, but it works fine on your own machine. Okay — again, if you don't have a local cluster like we have right now, you would do a docker build against your local Docker daemon and then docker push it someplace else, right? Docker tag and docker push. So in this case, I don't have to push it someplace else; it is there and available to me. So here you can see it says 9steps — that's the image I created, 9stepsawesome/myboot. And if we look at our docs here — we did that already, we did that already — and now you can basically do a... oh, it's actually always a good idea, by the way, to do a docker run after you do a docker build, just to check that the container is happy. We know the JVM was happy; let me see if the container is happy. Okay, there's our Spring Boot coming up there. It took about four seconds to start a hello-world Spring Boot. But let's do this now: curl 192.168.99.101:8080, and there — I'm interacting with it. Now, you'll notice I put the IP address of the minikube there, not localhost anymore, because that docker run is happening in the Docker daemon inside that virtual machine.
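The DOCKER_HOST redirection and the post-build smoke test look roughly like this — a sketch assuming the demo's profile name `knative`; `minikube docker-env` prints the export lines, and its `-u` flag prints the matching unsets:

```shell
# Point the local docker CLI at the daemon inside the minikube VM
eval $(minikube -p knative docker-env)
docker build -t 9stepsawesome/myboot:v1 .
docker images | grep 9steps

# Smoke-test the image right after building it; since the container runs
# in the VM's daemon, you curl the VM's IP, not localhost
docker run -d -p 8080:8080 9stepsawesome/myboot:v1
curl $(minikube -p knative ip):8080

# Undo the redirection when you want your host daemon back
eval $(minikube -p knative docker-env -u)
```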
Okay — that's another thing that tends to confuse people. But you can see at least that the Spring Boot application is running, and this looks pretty happy there. So now we need to make it a Kubernetes pod. If you look here, there's a directory called kubefiles — my deployment, myboot-deployment.yaml. Let's go look at that file. Okay, let's open this up. So here it is — I was apparently looking at it earlier — and you can see it's very straightforward. It basically has some labels and metadata; it says I want one of these; it has a selector for how to match when we actually do our selection of this thing; and then really the part that matters is this one called spec, with containers — with the name of the container, the image you want to load into that container, and the port that it exposes. So, very straightforward — kind of what we saw with our docker run command, because this is the part that matters to us. But this is the document we're going to throw into our Kubernetes cluster. Okay, and then you can see right here: kubectl create, right there. So let's just do that — kubectl (and you can use apply as well) create... myboot deployment... what was it called? Yeah, dash deployment. Okay, come back — there we go: deployment.yaml, and away we go. And watch kubectl get pods — you notice I don't specify the namespace, because I'm used to the kubens trick; otherwise you'd have --namespace on every command we're running. All right, so it looks like that guy is up. So then we can come back to our document and we'll create our service. So you can kind of see — oh, and yeah, let's actually do that real quick. Now, there's not only the pod:
There's not only the pod Cube CTO get all And you can see there's the the Knative stuff which you get ignore But there's the deployment the pod and the replica set those three guys were created So we created the deployment object which created the replica set which created the pod and again The deployment object was defined here and you notice the kind deployment right the kind is very critical there to know what type you have And so let's go back over here and let's create our service. Okay, so it's my boot dash service Cube CTO apply dash F my boot Service and there we go now. We have a service coming online So it's cube CTO get all again, and we should see a service right there and notice it is there's a 31 712 That's that node port again. So that's how we'll talk to it. So curl 192 168 99 100 101 and it was what was that number? 30 there we go. Let's get that number and Let's see if we can curl it and now we're talking to it as a pod Okay, so we can say while true. Let's do this somewhere else. Actually, let's do this here while true do curl get this curl command and Sleep one done did I do that right? Alright, so there we go So it's curling now hitting that server and you can see based on the number going up We're dealing with the same JVM instance that pod is still up. It's still alive, right? So let's watch this now. So let's go over here say watch QCT. I'll get pods Okay, so there it is. Let's have a little bit more from this QCT L QCT I'll get deployments. I'm just switching windows here I do this a lot so I can see different things happening in different windows. So QCT. I'll edit deployment My boot. Let's see if that'll bring up visual studio code. Let's change it to two replicas I'm gonna set save close and then watch what happens here There's a second one coming up and notice those error messages you to see them This is important. 
Okay — the reason you got those error messages is that this deployment does not have a liveness probe or a readiness probe. This is why these things are so critical. I did not specify those two probes — I'll show you what those look like before we're finished here — and because they weren't specified, during this rollout the load balancer, the service, didn't know that that pod really wasn't ready yet. That JVM was still booting, that Spring Boot was still loading, that architecture was still coming up — and therefore it was rejecting requests. If you have those two probes set correctly, Kubernetes won't send traffic to a pod that's not yet really, truly ready. Okay, so that's why it's important. But you can see we are load balancing across the two of them pretty nicely, you know — it's working, we're load balancing. If I come in here and change it yet again — let's go back to one... okay, replicas: one, save, close — all right, and you can see one's terminating now, so the traffic goes to the one that's remaining. So the load balancing you get for free — all I'm doing here is a curl command. That's actually pretty awesome all by itself: if you've ever had to buy a big F5 BIG-IP router and all the other stuff that came with it, right, this is pretty amazing — to get this capability out of the box. Okay, so that is basically our step two. You can kind of play with it and look at it, and there are all the commands: you can go in there and update the code, and of course do the Maven package again, do your docker build again, and then you can roll out that new image. Actually, let's just do that real fast. Code... go here... my controller — instead of "namaste"... I was originally born in Hawaii, so I'll make it say "aloha." Okay, I'll do a mvn clean package to basically get a new fat jar created. So there we go... come on... so we can do java -jar target/... and it was called myboot something — well, I can't remember. What was the name of the file? boot-demo, right?
So: java -jar target/boot-demo... run that. curl localhost — I want to test my code like a good Java programmer. Yep, it says "aloha" now. Fantastic. I'll do my docker build -t 9stepsawesome/myboot — call this version two — dot. Okay, so now it's going to build a new Docker image for us. docker images, grep 9steps — there we go, we have v1 and v2 now. And you'll notice in the docs we tell you how to do the update. We're going to do the rolling update: we're going to basically tell the deployment to change its image from v1 to v2. And as a matter of fact, let's do this in a kind of more fun way. Where'd my edit go? Let's see... where's the image? Here — I can just do it this way too. Save. Now again, you're not supposed to do this in a real production environment, but: kubectl get pods. Okay — and you notice it did fail again, because it didn't know whether it was live and ready, but it did change: it's now on aloha. If I want to change it back to the namaste version one, I can. All right, so — here we go, back to version one, save, close — and it fails again. Okay. So the last piece I do want to show you is the liveness and readiness probes. Let's see if we can fix that behavior, because this is probably the most fundamental thing to understand. So we're going to actually skip a few steps here, to step seven: liveness and readiness probes. The liveness and readiness probes are just another piece of YAML that you add to your deployment descriptor, and all you've got to do is simply specify where you want the probe to call into. So let's actually look at that. Let's go here... yeah, let's go here — this is where I had the right directory, and so this one has it right here. Okay — and actually, let me comment this part out, since we skipped that step; we don't have the configuration, but you can have a declarative configuration environment. But look here:
We have the myboot v1 image on port 8080, just like we had before, so that part didn't change at all. But now we've done some additional things. We've constrained its resources, because we don't want the JVM to suck up all the memory on the machine, which it likes to do. Literally, if you deal with Java, it'll eat all the memory on the machine if you're not paying attention, so we're going to constrain the memory. We're going to constrain the CPU too, because by default it will grab all the CPUs. And by the way, when you deploy your Apache Spark or your Kafka, you'll see it happen, okay? So you want to know how to configure those resources. But then you see the liveness probe and readiness probe. The liveness probe is the first check: are you alive, yes or no? And if you don't say yes to this probe, you're not alive. As a matter of fact, it will delay the probe: it'll launch you, then delay 10 seconds like it says here, then it'll ask you the question: are you alive?
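The probe and resource settings he is walking through look roughly like this in the deployment YAML. The timing numbers match the ones mentioned in the talk; the image name, paths, and resource values are illustrative, not taken from the actual repo file:

```yaml
spec:
  containers:
  - name: myboot
    image: 9stepsawesome/myboot:v1
    ports:
    - containerPort: 8080
    resources:                  # keep the JVM from eating the whole node
      requests:
        memory: "300Mi"
        cpu: "250m"
      limits:
        memory: "400Mi"
        cpu: "1"
    livenessProbe:
      httpGet:
        path: /                 # "are you alive?"
        port: 8080
      initialDelaySeconds: 10   # wait for the JVM to boot before asking
      periodSeconds: 5          # ask every 5 seconds
      timeoutSeconds: 2         # no answer in 2 seconds means dead
    readinessProbe:
      httpGet:
        path: /health           # "are you ready for traffic?"
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      timeoutSeconds: 2
```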
It'll do this every five seconds, and it'll time out after two seconds. So if you can't respond with an affirmative in two seconds, it will assume you're dead. And here's how that could happen in Java land, for those of you who have been in Java land: you might have written a piece of code where you pulled a connection from the connection pool, or a thread from a thread pool, and you forgot to put it back. That happens a lot in Java, and then you won't be able to respond to this probe. In this case, you can see it's just hitting the root URL, but you could put a very specific URL here to talk to your HTTP-based endpoint. And by the way, it's not just HTTP probes; there are also file system probes. If you have an application that's watching for a file to show up, and expecting a file to come back, you could do that too. But this is very nice for RESTful microservices: talk to the API, and if the API does not respond, it's probably locked up, and it won't respond to users either. So shoot it in the head and restart it someplace else, okay? And the readiness probe is the second piece down there, and you notice it maps to a slightly different path. This is very powerful: you can now write any code you want behind that probe to do what you will. So if you look at the actual Java code, there is a little bit of logic behind that /health endpoint. You can see it right here... where'd health go? Well, I can't find it there. In this case it's very simple and straightforward; it's returning a 200. But you could have logic here, logic that basically says: connect to my databases, warm up my caches, verify the JMS broker is talking to me, my Kafka connection is good, and then I am ready. So, prior to that moment:
You would be returning something that's not 200; typically it's a 500 that you would return. You return a non-200 until you are ready, and when you are ready, you return a 200. That means the load balancer can now talk to you. So let's see if we can make this work. Let's come in here and watch kubectl get pods. Then: kubectl replace -f, and then this is myboot... what was it called? Let's make sure we get the file name right. The one we were playing with here: myboot-deployment-live-ready. Replace that, and you'll notice the old pod is being torn down and the new pod is being created, because we have a new set of parameters for dealing with that pod. Let's see here, can we get our while loop back? Let's run our curl command. Notice: did you see those errors we got right away? And if you were paying attention, you also saw that it took a while before it said it was Running, because now it's doing the checks. The first time, it went to Running immediately, because it did no checks, even when it really wasn't ready. Now it will only get called when it's ready. So if I come over here now, let's see if we can do my little dance from earlier. Let's go to version 2, save, close. Let's see if I did that right. Oh, and I know why I have this error here; let's do this. Let me show it now; let's see if I can demo this correctly. Let's go to two replicas. Okay, so we're going to get two of these guys up, because if you want to play the rolling update game, you can't have one thing; you've got to have two things. So we get two of those up, and we should see the load balancing back and forth. All right, so there we go, load balancing between the two of them. And remember the host name: see the host name here, and the host name here? In other words, the host name is the pod, and the pod is the computer for that application.
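The "return non-200 until ready, then 200" logic he just described can be sketched with nothing but the JDK's built-in HTTP server. This is not the demo repo's Spring Boot code; it's a minimal illustration of the same idea, with assumed paths (`/` for liveness, `/health` for readiness):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;

public class HealthServer {
    // Flip to true once startup work (cache warm-up, database and
    // broker checks) has completed; until then, /health reports not-ready.
    static volatile boolean ready = false;

    // Liveness answer: if this code can run at all, the process is alive.
    static int livenessStatus() {
        return 200;
    }

    // Readiness answer: 200 only once dependencies are confirmed; a
    // non-200 (503 here) tells the load balancer to withhold traffic.
    static int readinessStatus() {
        return ready ? 200 : 503;
    }

    static void respond(HttpExchange ex, int code) throws IOException {
        byte[] body = (code == 200 ? "OK" : "NOT READY").getBytes();
        ex.sendResponseHeaders(code, body.length);
        try (OutputStream os = ex.getResponseBody()) {
            os.write(body);
        }
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", ex -> respond(ex, livenessStatus()));
        server.createContext("/health", ex -> respond(ex, readinessStatus()));
        server.start();
        // Pretend the warm-up just finished; the readiness probe now passes
        // and Kubernetes starts routing traffic to this pod.
        ready = true;
    }
}
```

The key design point is that liveness and readiness answer different questions: liveness decides whether to restart the container, readiness decides whether to route traffic to it.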
All right, that's the way to think of it. So now we have two of these; let's see if we can do this again. All right, let's come over here, go to version 1, save, close. Let's see if it works. Okay, so we went from aloha... we're still on aloha, and you notice it's trying to bring up the new ones, but it's not tearing down an old one until one of the new ones is ready. One of the new ones is ready, and there we go: it's namaste. So now you get a clean rolling update. That's all we actually have time for, but I wanted to make sure you see the liveness and readiness probes, because they are critical to doing this correctly. You can also watch all of this from a video replay standpoint; this was recorded today, and as I mentioned, there's a recording here as well. You can also join me for a future O'Reilly Live presentation. But that is your quick introduction to how to do basic Kubernetes things. Is that cool? Okay, make sure you grab access to the slide deck at the bit.ly "nine steps awesome" link, and make sure you grab access to the GitHub repo; you can git clone that and get started. Again, all the documentation is out there. People file GitHub issues and pull requests for me, and I need to go look at them, so feel free to do that. That is your introduction. Thank you so much.