How's everybody doing? How's KubeCon going? I didn't expect that we'd have such a big audience today. Thanks for coming. So this is a tutorial, a 101 session: set up your laptop for Kubernetes and get productive. And right off the bat, I just want to ask, do you guys like to be productive? Do you like driving efficiency? And being cool and standing out from your colleagues out there? That was good. Yeah, because we've actually been surprised. We've seen a lot of people, even at this conference, who have a nice laptop set up with kubectl, and the way they're using it is basically running commands in a plain shell, not productive, just trying to type. And I've been at the showrooms, the booths, those demos: boring, people typing slowly. So same here, if you like to drive an old car, that's fine with you. But there's always a better choice, right? You can get something like a Tesla with an autopilot and a nice screen, like an iPad, in front of you. And everybody around you will be like, wow, this is cool. And you guys here at KubeCon are probably also getting a lot of stickers at the booths. If you didn't get them, I think there are still a lot of them out there. But there's something on your laptop side that you have to fix first. So our tutorial pretty much starts here: you are a young Skywalker trying to figure out the nice kubectl commands, trying to switch namespaces and contexts and see what happens, right? But we're hoping that by the end of the session, you're going to become an experienced Jedi, and you'll be able to surprise your colleagues at work and get productive. Sounds like a plan? All right. So my name is Archy. I'm a CNCF ambassador from Canada. I'm organizing a series of meetups across Canada, seven of them right now. And with me today is one of my co-organizers, from Quebec. Exactly. Hello, everybody. Sebastien, software engineer, involved in the community, making some PRs and pestering some maintainers.
And I guess we have a lot to offer you today. Yeah, so actually, Sebastien has given this talk before and it was very popular, a lot of people liked it. And he's also maintaining one of the projects that's all about efficiency, which is kubecolor. I actually brought kubecolor back to a working, maintained state. If you don't know it, you will learn about it during the session, so just sit and wait. But yeah, I had this talk, it was like a 15, 20 minute talk, just presenting the tool, how to install it, and the URL. This is totally different, because we're going to do a hands-on, we're going to demo everything that we're saying. And we will try to give you everything so you can replicate it either here, if the Wi-Fi and the network allow it, or later at home. Maybe not at the hotel. Yeah, but at home, yeah. And I apologize, we're from Quebec, so our accent is a little bit weird, but bear with us. Thanks. So, what do we need for this tutorial? Unfortunately, we just heard that the previous tutorial in this room had some issues with Wi-Fi, so we're hoping for the best, but it might be a problem for us as well. So what we're looking at here is we want your laptop with a shell and iTerm set up, and we're going to be playing with a Kubernetes cluster. And we're not going to go inside the Kubernetes control plane, we're not going to talk about API servers, schedulers, controllers, the kubelet, and whatnot. Our talk is really going to focus on the laptop itself, talking to the Kubernetes API server. Archy, isn't it the time to ask the audience who knows about Kubernetes, the API server, the scheduler, and Kubernetes internals? Who knows that? Who feels very confident, maybe not expert, but very, very confident? Oh, a few hands, not that much. So, yeah, and who has never run a kubectl command in their life? Do we have anyone? Never? Okay, so that's okay. You will learn a lot then. All right, so that's good, yeah.
So yeah, for the requirements for this tutorial, we tried to cover as many platforms as possible, but mainly we did our preparation on Macs, so that part has been tested. For the rest, we did our best. Our tutorial is open source, so if you see any issues, we are happy to take PRs. Yeah, whatever we present here is usually focused on macOS and zsh, but it works on Linux, and it works almost the same on Windows. Even if we don't explain everything for all the laptops and all the OSs, you will usually be able to replicate it just by searching the docs. So, quick show of hands, who is running macOS here? All right, that's good. Linux? Okay. And then Windows, all right. And then Windows on a corporate laptop where you can't do anything? Nobody? That's not possible. All right. There may be some commands that you won't be able to run, and you can go check the documentation; and obviously, if you cannot use zsh, there will potentially be some features you can't use, but we're hoping you'll still learn a lot. So, what we basically need, the key requirement, and we put it in our tutorial: we cannot provide you with any Kubernetes clusters today. We're hoping you have one on your laptop, or one at your cloud provider, or whatnot. So, pick any Kubernetes cluster that you want; you can install all of the tools we're going to be talking about on your laptop, and they will communicate with that Kubernetes cluster. So, I just want to ask, who has any Kubernetes running on their laptop? Okay, good. And who has a cluster in the cloud that they can connect to and potentially play around with? Sounds good. I don't know about the rest of the folks.
We have all the steps to install a Kubernetes cluster on your laptop; it's just that this might be problematic with the Wi-Fi. So, I don't know what to say, but we'll see. Yeah, don't blame us. Yeah, don't blame us. Blame Cisco. All right, so if you want to follow the tutorial on your laptop, or if you want to reproduce the steps with your colleagues and friends, we have a website where all the steps are, and you can pretty much do it on your own time if you didn't finish some of the configuration. So, we'll give you a few minutes to scan the QR code, or there's a short link that also brings you to the website. We'll give you a few minutes just to make sure everybody's connected, so you can follow along. If you go on the website and start doing everything from the website, you're not going to be following us, because there are a few things we're not doing, some things we're doing a little bit differently during the demo, and some things that you shouldn't do because they're just for demonstration purposes. So, keep following us, but still, we're going to go on this website and copy and paste all the commands in our demonstration. All right. Everybody's okay with the QR code or the URL? Let's wait a little bit more. Who has a problem accessing the website? The QR code doesn't work, or... Okay, good, all right. Okay, sounds good. So, we can carry on, and we're going to start the first chapter. So, you are young future Jedis, and you came to KubeCon, maybe to learn some Kubernetes, to understand how things work. So, this is a 101 tutorial. We figured there are probably a lot of people who are just starting with Kubernetes, so we're going to touch on some real basics, but we're going to try to move forward fast, and once we've explained namespaces and contexts, we can move on and show more advanced stuff. Does that sound good to everyone? All right.
Okay, so first we're going to talk about the new SSH, which is kubectl. For the sysadmins out there who maybe never connected to a Kubernetes cluster, I think this is a very good analogy. Back in the day we had servers, we wanted to connect to them, and we used SSH: provided our credentials, provided the address of our server, and then we were able to connect and manage our applications. And then, obviously, technologies like Ansible came along, where you can specify many directives and provision many servers. So I think there's an analogy here with Kubernetes. kubectl is a CLI that works with Kubernetes, communicates with Kubernetes. Kelsey Hightower says it's the new SSH, and I agree with him. So the idea here is that you have kubectl installed on your laptop, or maybe in a cloud shell, and over HTTPS you're able to talk to the Kubernetes cluster: create resources, deploy things, delete things, and all the communication goes through the Kubernetes API. But the first step you need to do is to install kubectl itself. This is pretty much not a problem, and we have the steps in the lab. I think we have it already installed, right? So, yeah, it's already installed on the laptop, so we're not going to demo that. But maybe that's the time to introduce the companion website very quickly. Oh, yeah. Okay, maybe a little bit bigger? Yeah, bigger. Yeah, let's just get into kubectl itself, just to show a couple of commands. So, that's just the beginning. We're just creating a folder, because we're going to generate some YAML — of course, it's Kubernetes — and we want to keep everything in the same place. Yeah, and for the people who don't have kubectl installed on their laptops, please go ahead and run the command for your platform, whether it's macOS, Linux, or Windows.
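For reference, the per-platform install usually boils down to downloading a single binary. Here's a hedged sketch, following the URL layout documented for dl.k8s.io; the version is just an example, and the download itself is left commented out:

```shell
# Sketch: assemble the kubectl download URL for this platform.
# v1.25.3 is an example version; pick the one matching your cluster.
VERSION="v1.25.3"
OS="$(uname | tr '[:upper:]' '[:lower:]')"                          # darwin or linux
ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/')" # amd64 or arm64
URL="https://dl.k8s.io/release/${VERSION}/bin/${OS}/${ARCH}/kubectl"
echo "$URL"
# Then fetch and install it (not run here, to spare the conference Wi-Fi):
# curl -LO "$URL" && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
```

On macOS, `brew install kubectl` gets you the same binary with upgrades handled for you.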
So, the commands are here. If you're using Google Cloud, if you have gcloud, you can also install kubectl with `gcloud components install kubectl`. So, just to check that we have kubectl here: `kubectl version` shows the version of the kubectl client. Here we have version 1.25.3, so we're pretty much running on the bleeding edge. As you know, Kubernetes 1.25 is the latest release today, and the community of Kubernetes maintainers is working to ship Kubernetes 1.26 in, I think, a couple of weeks. And as you know, Kubernetes now has a release cycle of three releases per year. It used to be four, now it's three. Right now you can see that the connection to the server at localhost was refused. This is because we don't have any Kubernetes cluster installed on this laptop yet, and that's normal. Yeah, another thing that's probably worth mentioning: sometimes people install kubectl on their laptop, and after a couple of years they decide, okay, let me refresh my knowledge. Obviously, Kubernetes keeps moving forward, right? And if you want to be successful and have a good experience, try to make sure that your kubectl client version is at most one release behind, or exactly the same release as your cluster. The reason is that Kubernetes sometimes deprecates APIs and there are changes in the schema, so your experience with Kubernetes might not be as good. So, make sure you stay up to date. And when you print the version with the `kubectl version` command, you can see that a version difference is already signaled as a warning. Obviously, if you have a much bigger delta, you might have problems happening unexpectedly, and you'll be troubleshooting things that you shouldn't be troubleshooting.
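That one-minor-version rule can be checked mechanically. A small sketch with sample version strings hard-coded; in real life you'd parse them out of `kubectl version`:

```shell
# Compare client and server minor versions; kubectl is supported within
# one minor release of the API server.
CLIENT="1.25"   # sample value, normally from `kubectl version --client`
SERVER="1.26"   # sample value reported by the cluster
cmin="${CLIENT#*.}"
smin="${SERVER#*.}"
skew=$(( smin - cmin ))
if [ "$skew" -lt 0 ]; then skew=$(( -skew )); fi
if [ "$skew" -le 1 ]; then
  echo "version skew OK ($CLIENT client vs $SERVER server)"
else
  echo "upgrade kubectl: skew of $skew minor versions is too large"
fi
```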
And that's where a package manager like Homebrew on macOS, or even the gcloud command line, which can install this kind of stuff, helps: it will warn you when you get outdated, so it's easier to keep up with new versions. All right, cool. So, anybody having problems with the kubectl installation? I'm sure there are no problems there; I'm sure the problems come when the Kubernetes installation starts. So, okay, now we have kubectl installed, and obviously we need a Kubernetes cluster. So hopefully everybody has Kubernetes installed on their laptop, and we want to give everyone some time to check. Maybe your laptop has been shut off, so just make sure: if you have any Kubernetes cluster installed on your laptop, try to bring it up and kick off the startup process. Until we finish that step we can't really move forward, but while you're setting things up and bringing up your Kubernetes cluster, we want to quickly discuss an area where I've seen a lot of questions coming from the community, from companies, from industry. Let's imagine you're working at a company, you need to work with Kubernetes day to day, and you need to answer the question: what are the requirements I should have in order to develop, build my images, run containers on my laptop, and deploy them to a Kubernetes cluster, right? So I think these are the first three requirements that you really should cross-check, and they sound very obvious, right? `docker build`, `docker run`, `kubectl create deployment` — those are things that you probably do day and night, and those are must-haves. Another thing that I think is less important is the capability to run Docker Compose.
I think Docker Compose is a pretty nice solution in the sense that you have a nice little YAML where you can put multiple applications, you do `docker compose up`, and it brings them up. Unfortunately, Kubernetes is much more complex, so instead of one YAML you're going to have 50 YAMLs. We've seen a lot of people in the community say, oh, Docker Compose was nice, but we have to adapt to Kubernetes as well. So I see less and less interest in using Docker Compose. Things like scanning images might be important for some organizations, and there are a lot of open source solutions out there, like Trivy for example, that you can install to do similar things. And then the UI — I think it's not as important, but some people like UIs, so I put that requirement as optional. And obviously the solution for that, for like 99%, maybe 95% of people, is Docker Desktop, right? On the Mac and on Windows. Do you guys agree or disagree? Good, that's full agreement. So what else are you guys running on Mac and Windows today? Okay, good. Some traction for Rancher. Wait, wait, don't spoil the talk. So yeah, are there any Linux users in the house? Yeah, so Linux folks are on the happy side of the world. They don't really need Docker Desktop, right? Because they can just `apt-get install` Docker, and maybe kubeadm or whatever Kubernetes cluster. So this is not a problem for them, and they're not really looking for the Docker Desktop experience. But if you're on a Mac or on Windows, there's a high chance you're using it. And this is something that was running on our laptops for the last couple of years; it was the default and only option a few years back, actually. Yeah, and if you maybe forgot or don't remember, that's where you can find Docker Desktop on your Mac. And if it's shut down, please bring it up.
We want to use that Docker Desktop right now. Yeah, maybe that's the time to say again: don't install something else today if you have Docker Desktop, because you're going to burn the Wi-Fi down. Yeah, exactly. So a gentleman in the room actually brought up a valid concern. Myself and Sebastien, we work at companies with over 250 employees. And last year Docker basically said that if your company is larger than 250 people, you have to pay for a subscription license. And this is kind of not nice for a lot of people. Maybe they still haven't figured out that issue, and it's like: I cannot install Docker Desktop, period, because I'm going to get in trouble. And recently they actually announced a price increase for the Team and Business subscriptions, I think. So, the pros for Docker Desktop: multi-platform, an amazing, nice UI. I think it's a great experience, but unfortunately it's a paid product for enterprises. I personally was annoyed that, coming to work, my laptop was updating Docker Desktop — this was pretty much two, three times a week — with high CPU consumption. But that's something less negative; at least it was free before, but not anymore. So I actually spoke to many people, and they were asking: what should I do now? I cannot use Docker Desktop, but I want to use Kubernetes on my laptop. And we actually put some of the solutions out there for you. So if your manager or one of your colleagues asks you this question, you actually have several options today that you can take advantage of. And we're going to quickly cover some of them while your laptops are booting. So my favorite option today, I think, is Rancher Desktop. It's a pretty new solution, announced I would say last year. It's free, open source, backed by a company, and so far so good.
The reason I like Rancher Desktop: first of all, it supports containerd, which is now the default runtime for Kubernetes, but it also has support for Docker, just in case you're still running legacy Kubernetes clusters. You can choose the version of the Kubernetes cluster you want to deploy, but most importantly, I think the way they architected this solution is pretty nice. They're actually using nerdctl. nerdctl is basically a replacement for the Docker CLI. Even the Docker CLI ships as part of Docker Desktop, so if you can't use Docker Desktop anymore, you may need to find a replacement for the CLI too, right? And nerdctl is an alternative where all the command lines are the same, which you can use to build your images and talk to containerd. Another thing which I really like as well is that the Kubernetes itself is K3s. K3s is a lightweight Kubernetes distribution, so that makes the platform lightweight as well. K3s has been built for smaller clusters running at the edge, so it's a good use case for laptops, right? So the next option we have here, and that would be my preferred one, the one I usually use on the Mac, is Colima. Colima is built on Lima: Lima is Linux (virtual machines) on macOS, and Colima is containers on Lima. It was, I think, the first solution to support M1 Macs natively. So it's a simple `brew install colima`, and then `colima start`; it's going to use QEMU on your laptop, create a Linux VM, and then everything is executed inside that. And it also bundles a Kubernetes cluster within it, and once again it's K3s, the same as with Rancher Desktop. It has no UI at the moment, and I guess when you're using it, you don't need a UI anyway, but that's my personal take on that.
And I guess after this talk, you will be done with UIs. I hope. All right. And not to mention it supports both Docker and containerd. That means you can pick the one you want and stay close, let's say, to your production clusters if you're using one or the other. Cool, yeah, absolutely. And finally, I would say maybe the least cool option, but it works — the poor man's solution, I think that's going to cut it. You can actually install Minikube. Minikube basically fetches a VM that runs the Docker engine in it, and you can install the Docker CLI on your laptop, open up some ports on Minikube, and connect to the Docker engine that sits inside the VM. So you pretty much have Docker working on your laptop: you can build images, you can push images, and you have a Kubernetes cluster where you can deploy things. So if you don't have another choice, I think that would work as well. But for today's tutorial, we decided to choose another option. It's called kind. kind stands for Kubernetes in Docker. It's a tool to create super lightweight Kubernetes clusters, and the reason they're super lightweight is that the nodes are actually deployed as Docker containers themselves, right? So if you want to deploy a control plane, it's going to be one Docker container; if you want to deploy a node, it's going to be another Docker container. That's why it's super lightweight. It's supported by Kubernetes SIGs, and it can be used for local Kubernetes clusters. In the early days of kind, it was the solution to run CI for Kubernetes itself, but the use cases have expanded, and I love to run kind on my machine because it's super fast. Archy, tell me, we removed the other slides about the other options, right? Is it the right time just to mention them? Something like MicroK8s? Yeah, I think it's fine. K3s. Yeah, there are many options. And obviously, I think... k0s.
Yes, there are so many options, but we tried to find the ones that can replace Docker Desktop today. If you're really looking for that solution, you have an answer, and the slides are yours. So, just to finish up with kind: we're actually going to go ahead now and install the kind cluster, right? We can. Yeah, we can. Yeah, okay. So there's one caveat with kind, right? As you remember, kind itself is deployed as Docker containers, so you need a Docker engine somehow running on your laptop, but that's not free anymore with Docker Desktop. So again, for the Mac and Windows users, you need to find a solution where you can actually run the Docker containers that spin up the Kubernetes cluster. And we found a solution, which is called Podman. Podman is a solution from the Red Hat folks, and it replaces the Docker engine requirement on your laptop. It also provides rootless containers. I'm not a security specialist, but everybody says this is super cool, and it has a much smaller footprint. I don't know if it's a big concern for a personal laptop, but I'm sure if you're running in production, you don't want to run containers as root. So Podman is a Swiss Army knife that is maintained by Red Hat; it replaces Docker Desktop, the Docker engine, and the Docker CLI. Podman itself doesn't run Kubernetes, right? It's just a replacement for the Docker engine and the Docker CLI. And it works on Windows, Mac, and Linux. And for people running M1 Macs, it's also supported. So yeah. Podman is far more than just a replacement for the Docker stuff. There's a lot of tooling and new commands that you can use with Podman. It manages different container image formats. It usually builds faster than regular Docker, for example. And it can build and deploy rootless.
That means you're sure that whatever you deploy, the container you deploy, is not going to tamper with your laptop, for example. And it can be used even in your CI and CD, and why not in production? I guess some are using it that way. Yeah. And for people who love UIs, there's also Podman Desktop, so you can see your images and containers and stuff like that. So, is anybody using Podman? I'm just curious. Oh. All right, so we're not alone. So yeah, we'll have a quick demo to install Podman and kind. Again, we're not recommending you download anything on this Wi-Fi, but you can try it at your own risk. So, do you want to do it, or do you want me to do it? So yeah, obviously you start with nothing. You just grab the Podman command line, but it's just a CLI. Then you need to bootstrap an image, or something to run the containers in — kind of a virtual machine. Podman is intelligent enough to figure out which kind of OS you're using and do some things a little bit differently. On macOS, like every other solution, it uses an emulator: it creates a Linux VM and then runs things inside the VM. Yeah. In our case, we already installed Podman, because it's, I think, a 600 MB image. So we're going to go ahead and check. Yeah, Podman is up. `podman info` gives a ton of information about the VM, the memory, whatever you have installed. What's interesting here with Podman, and it's not so much about Podman itself, is that it's going to create a virtual machine on your laptop. And you can change the CPU, memory, and disk size, and I recommend you do that, because the defaults are very small. Unless you don't want to run a very big Kubernetes cluster or deploy many things, you should increase that a little bit. We're not demonstrating that because, one, it's 600 MB to download, and it's not very interesting in the end, I guess. What we, or what I usually do — again, what we're showing here are not recommendations.
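For reference, resizing that VM is done when the machine is created. A sketch with example values — the flags are Podman's documented `machine init` options, and the command is only echoed here rather than run, since it needs Podman installed:

```shell
# Sketch: size the Podman machine up front (values are examples).
CPUS=4
MEMORY_MB=8192
DISK_GB=60
INIT_CMD="podman machine init --cpus ${CPUS} --memory ${MEMORY_MB} --disk-size ${DISK_GB}"
echo "$INIT_CMD"
# On a real laptop you would then run:
#   $INIT_CMD
#   podman machine start
```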
It's what we're usually doing, and it's what makes us work faster with Kubernetes. Just replace the Docker command line if you have it, and be careful, because every Docker Desktop upgrade will reset that. I usually remove the Docker command line and replace it with Podman. And I don't use aliases, because some commands are hard-coded to use Docker. So if you just alias it in your shell and you run something, it's not going to work. You're going to get very strange errors, just because the Docker command line, or the application, is looking for Docker options or Docker stuff that doesn't actually work with Podman, or works a different way. Most of the time it's about the socket used to reach the Docker daemon. Yeah, so I'm just running the `podman search` command to show what it can pull from Docker Hub. Let's quickly deploy a container. We're just deploying the latest Alpine; it's going to run for 20 seconds, and just to prove that everything is running, a very similar command to Docker: `podman ps -a`. So yeah, the Podman command line is 100% compatible with Docker. Whatever you do with Docker, you can do with Podman. And if you switch the binary, you can just keep typing `docker`, but it's going to call Podman in the background. Yeah, and you can see we're running `docker ps` and it just works, because we basically hard-linked it. It's basically just pointing to Podman, but it's actually spelled `docker`. All right, so that's done. We have Podman up and running. So the next thing we want to do is install kind. We already pulled kind itself, and we're going to run the `kind create cluster` command. And if you want to make sure your deployment is reproducible, you also have the option to deploy it with a kind configuration file where you can specify how many nodes you want. So kind is pretty flexible: you can deploy multiple control planes, multiple nodes if you want.
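The hard-link replacement described a moment ago can be simulated safely in a throwaway directory. The paths and the fake script below are stand-ins; on a real machine the binaries would live somewhere like /usr/local/bin:

```shell
# Simulate replacing the docker binary with a hard link to podman.
BIN="$(mktemp -d)"
printf '#!/bin/sh\necho "I am podman"\n' > "$BIN/podman"   # stand-in for the real podman
chmod +x "$BIN/podman"
ln "$BIN/podman" "$BIN/docker"   # hard link, not an alias: same file, second name
"$BIN/docker"                    # tools that exec `docker` now get podman
```

Because it's a hard link rather than a shell alias, tools that spawn `docker` directly (outside your shell) get Podman too.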
But obviously, for the purposes of the demo, we just went with one control-plane node and one worker node. We're installing Kubernetes 1.24, and anything else to call out? The CNI, the default CNI? Yeah, and we're forwarding some ports. We're not actually digging into that, so go read the documentation, but we're forwarding some ports from the local machine into the node, so things running inside our cluster are reachable. That way you can install an ingress controller, for example, inside the cluster later on, and reach it from your laptop directly. Yes, so you can go with the basic `kind create cluster` command and it's going to install it, but here we're specifying a config file. I think you'll see it's pretty fast. In our case, we already have the images pulled, so it should work pretty fast. I don't know if anybody's trying to do the same right now — hopefully not, but... Nobody's starting a kind cluster right now? No, that's good. Oh yeah, a few hands there. How's it working? So the thing is, when you first install Podman, it's going to download a Fedora CoreOS image that's used for the virtual machine. You usually do that once; then you can stop the virtual machine, but the image is still there. So even if you delete it and restart it, you're not going to re-download the 600 MB. It's the same for Podman — sorry, for kind. kind creates containers in the end, so it pulls the image and creates the containers, and once the images are already downloaded, you can quickly create as many clusters as you want, because the image is already there and available. So we have the first cluster deployed. If we do `podman ps`, we see that we have two containers, one for dev-worker and one for dev-control-plane, and those are pretty much our Kubernetes nodes, each running as a container.
And just to prove that the cluster is up and running: if you do `kubectl get nodes`, we can see that we have two nodes, one control plane and one worker. The `kubectl cluster-info` command just confirms that we're connected to the Kubernetes API. And then if we run `kubectl get pods` in all namespaces, we see the cluster control-plane pods running, such as kube-proxy, the kube-controller-manager, the kube-apiserver, and DNS. So it's a real Kubernetes cluster, actually. So let's deploy a second one, just so we have two clusters and can switch between them. The difference with this second one is that instead of one control-plane node and one worker node, we have one control-plane node that will also be the worker node. And for the sake of the demonstration, we're also changing the version; this one is version 1.25-something. This image string is pretty long and complicated: it's the full hash, and it changes every time there is a new kind release. So when you upgrade kind and you want to change or use the latest version of Kubernetes, the recommended way is to go to the kind project's release page, where you'll find the full names of the node images. That's usually how you want to do it. And so what I do is store this kind cluster YAML file in my Kubernetes folder, so I can reuse this file to deploy the exact same cluster, same configuration, on another laptop, or I can totally destroy the cluster and recreate it exactly the same. That's very useful when you want to trash the cluster: you deploy something that breaks it, and you can recreate it exactly the same — same versions, same everything. Yeah, so we have both clusters up and running. Cool.
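For reference, the kind config file driving clusters like these looks roughly as follows. This is a sketch: the cluster name and port numbers are examples, and the commented image line is a placeholder that must be replaced with the full image name (including digest) copied from the kind release notes:

```shell
# Write a kind config similar to the one used in the demo.
cat > kind-dev.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dev
nodes:
  - role: control-plane
    # image: kindest/node:v1.24.x@sha256:<digest from the kind release notes>
    extraPortMappings:
      - containerPort: 30080   # example: expose an ingress controller later
        hostPort: 8080
        protocol: TCP
  - role: worker
EOF
# Not run here; on your laptop:
#   kind create cluster --config kind-dev.yaml
grep -c 'role:' kind-dev.yaml   # quick sanity check: two nodes, prints 2
```

Because the whole topology lives in this one file, destroying and recreating an identical cluster is a two-command operation.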
So that's another difference: because you have this config file, it's very easy to bootstrap multiple clusters on the same laptop, which is a little bit harder, maybe even impossible, with Docker Desktop or even Colima. I don't even know how to do that, having multiple clusters, with Colima by default. Maybe it's possible, I don't know, but with kind each cluster is just your Kubernetes cluster, so you can start as many clusters as you want. Yeah, so okay, now we have kubectl, we have Kubernetes clusters deployed. The next step is basically to connect kubectl to the Kubernetes cluster, right? Basically, we want to authenticate. This is pretty much the scenario where you have a laptop and you need to be able to communicate through the Kubernetes API and talk to Kubernetes. How can we do that? Does anybody remember or know what we need in order to initiate this communication, this authentication? Yeah, we need a kubeconfig file, which basically tells us where to connect and how to connect. And I actually have a quick demo to show how we can work with the kubeconfig file. So usually, once a kind Kubernetes cluster gets deployed, it will automatically update the kubeconfig file. It will already populate all the configuration for us, so it's kind of an easy start. It's kind of a managed file. Usually you don't want to go play in there, because when you have multiple clusters, with security certificates in there for authentication, it gets very big and difficult to read. There's tooling around that. But that's where everything happens, actually. Exactly. So what we have inside the kubeconfig file is actually pretty simple, right? I tried to minimize it as much as I can. Basically, you have clusters, contexts, and users. So clusters, if we open up clusters, it's pretty easy: this cluster entry connects to a GKE cluster, and this one connects to the kind-dev local cluster.
So the IP address you see here is the address of the API server it communicates with. This is very similar to the VM world, right? You need an IP to connect. The next thing we want to touch on is users. As you know, Kubernetes doesn't have user objects, so the way this communication is established, for example for kind clusters, is that during the cluster bootstrap, it generates certificates and keys for that cluster. And it has happened many times: you bootstrap, you get certificates with a one-year expiration date, and if you didn't upgrade your cluster for one year, you might end up with an expired certificate. Something to look out for. For the cloud, for example GKE, you can see they have a bit more auth-provider configuration here, which is more secure. But essentially, this is a credential coming from the cluster that lets us communicate with Kubernetes. And then we have contexts, right? A context basically ties together where to connect and as whom. And then we have the current context. The current-context entry tells us that right now my kubeconfig file is pointing at the staging cluster, and the reason it's pointing at the staging cluster is that the last cluster we deployed in the demo was the staging cluster. So you're probably going to ask how we can switch to another one. This is actually pretty straightforward, but still tedious, at least for me, because there are several commands you need to run to get there. So you can use the kubectl config command. If you run kubectl config current-context, you see that it's kind-staging, which is exactly the same thing as in the kubeconfig file. We can also list what contexts we have in our kubeconfig file, and we can see here that we have kind-dev, kind-staging and the GKE cluster. And we can, for example, switch back to the kind-dev cluster by running kubectl config use-context kind-dev.
Once I do that, if I run kubectl config get-contexts, you can see that my current context switched from kind-staging to kind-dev. So this is what you have to do, like, every time, to switch between clusters. It's not the best option, and we're going to talk a little more later, once we get to more advanced tooling, about how to take away this pain. But for now, let's move forward. In general, some people also like to split the kubeconfig into multiple files; that's covered here too, so if you want to do that, you can try it. So, I get it, it's boring, right? But that's how Kubernetes works. We want to explain the basics first, and then we're going to give you different tooling to ease all that and make you faster. All right, so now you have a Kubernetes cluster and you've authenticated your kubectl. The next thing you want to do is deploy resources, right? So we have two commands, kubectl create and kubectl apply. And obviously, the first steps you might take are kubectl create deployment, create service, create configmap. This is the imperative way, where you specify commands one by one, similar to what we did before with virtual machines. This method is not recommended; I'd say it's fine for getting started. We always recommend using YAML manifests, because that's the declarative way: you define your desired state in the YAML file, kubectl sends that request to the Kubernetes cluster, and the controller manager continuously reconciles the state stored in etcd with the actual state of the cluster. We call this the declarative way, and this is the beauty of Kubernetes, so we highly recommend using manifest files. So let's do a quick demo and deploy some applications with kubectl create and apply. Are you guys trying to follow along at the same time? How's it going? So far so good? I have thumbs up, so. All right.
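On the kubeconfig-splitting point mentioned a moment ago: kubectl accepts a colon-separated list of files through the KUBECONFIG variable, so you can keep one file per cluster. The file names below are illustrative:

```shell
# One kubeconfig file per cluster, merged for this shell session only;
# kubectl sees the union of all contexts across the listed files.
export KUBECONFIG="$HOME/.kube/dev.yaml:$HOME/.kube/staging.yaml"
echo "$KUBECONFIG"
```

Putting the export in your shell rc makes the merge permanent; leaving it per-shell keeps clusters nicely separated.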
Not bad, not bad. Keep going, keep going. We're getting to the interesting part of the talk, guys. Yeah, so the first thing I'd like to mention before we get to deploying things: we want to quickly talk about namespaces. Once you deploy a Kubernetes cluster, out of the box it has multiple namespaces. The first namespace is called default. That's where your current context is pointing, so every time you start with a fresh Kubernetes cluster, you're pretty much looking into the default namespace. This is not a good practice, and we're going to show how you can deploy things into different namespaces. There are also kube-node-lease, kube-public and kube-system; those are created by the cluster itself. kube-system is the namespace that pretty much runs the control plane of Kubernetes. So let's create a namespace: kubectl create namespace kubecon, why not? And then we can run kubectl get namespaces to see that our kubecon namespace has been created. Yeah. Okay, so we see that in the kubecon namespace there are no pods right now, because we haven't deployed anything yet, right? Oh, sorry. So let's quickly deploy an nginx web server. Okay, you see, we run kubectl get pods and there's nothing, and the reason there's nothing is that we're looking in the default namespace. So we want to check what is deployed in the kubecon namespace, and we can see that nginx is coming up, it's getting created. Yeah, so we created a service and a deployment inside the kubecon namespace. That's the first demo. And next, we're going to create a pod declaratively with YAML.
And for that, we're actually using an interesting command called kubectl run, which creates pods. We're using -o yaml, which dumps everything as YAML, and the --dry-run=client flag, which doesn't apply anything to the cluster, it just renders it on the client side. So let's quickly do it. We created a simple pod YAML file, and if you look inside, it's the pod definition that was generated. So this one? So yeah, it's a quick-and-dirty way of creating a pod. Usually you use that when you want a very fast debug pod, or just something quick like that. You will never, well, mostly never, do that in real life. Yeah. So now we're creating a deployment file. A pod is the smallest resource on a Kubernetes cluster; a deployment is something more production-ready, you can specify replicas in it. So we're going to deploy a... This is not deploying. Run the next one. This is just generating the YAML. There is another command, kubectl create deployment, which creates a somewhat incomplete deployment. Why incomplete? Because you don't have all the options: with kubectl create deployment you can't define everything that's needed for the deployment to run exactly as you wish. All right. The final thing we're doing is creating a service. Again, right now we're doing everything with the create service command, which is pretty much... We're going to speed up, because that's not that interesting, and I guess most of you know about it. All right, so I'm going to quickly deploy the web app now. Maybe we're going to, yeah. So now we're getting closer to the interesting things, but we still need more in the cluster to be able to demo. It's a simple web application: a Go web app that we built, we created an image, and there's a MySQL database alongside it. They're tied together. So it's a sample application, very simple: two deployments, two services. All right. So things are coming up slowly.
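For reference, the kind of pod manifest that dry-run generates looks roughly like this, from a command along the lines of `kubectl run nginx --image=nginx -o yaml --dry-run=client` (trimmed here; kubectl also emits empty status and creationTimestamp fields):

```yaml
# Output (trimmed) of: kubectl run nginx --image=nginx -o yaml --dry-run=client
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx      # kubectl adds a run=<name> label automatically
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
```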
So the difference here is we're using the declarative way. We generated the YAML file, and this is how it looks. There are more lines here because we tried to follow best practices: we added a liveness probe, a readiness probe, resource requests. This is how you ideally should be deploying applications into a Kubernetes cluster. That's what you would do in real life, because this YAML file would live in your Git repo. So let me introduce, all right: GitOps. Do GitOps. You put all that YAML in a repo, you version it, and you're good to go. Okay. So now we have the basic stuff. As you've seen, it was kubectl, kubectl, kubectl. Yeah. So basically we've finished the part where we set things up. Right. Now we're starting to get into the more interesting stuff. Luke Skywalker is getting older and starting to do more serious things. So the first thing we want to address is that we're typing kubectl way too much, right? You want to show it? I don't know. Next one. Yeah. Yeah. So, aliasing. kubectl: K, U, B, E, C, T, L. It's too long. So let's use an alias: k equals kubectl. Now it's just k. You put that in your shell config, so every time you open a new terminal, you have this alias already defined. I don't remember. Let's go there. I'm typing too much. Yeah, just alias it. We copy that, we switch to the terminal. It's not working. Okay: alias k=kubectl. And now k get pods. That's what, one, two, three, four, five... a lot fewer letters. That's the first trick. And seriously, you can't imagine how many people, even people well trained in Kubernetes, don't use it. They don't know, or they're just used to typing kubectl, and I'm sorry, the less I type, the better I feel. Kubernetes is based on APIs and object resources, pods, deployments. By default, the resource names are plural, so you have an S at the end of most of them.
Actually, you can drop the S. So again, one letter less to type every time. And resources have short names too. For pod it's just po, one letter saved, but for deployment it's getting more interesting, and for a statefulset it's sts. So again, you get a lot less to type every time. Now, that was for the resource names, but you still have other long things to type, like kubectl port-forward, port-dash-forward. I can't be bothered with that. So there is completion in most shells now: bash, zsh, or whatever. And most Kubernetes CLIs, whichever they are, usually have a completion subcommand. So kubectl completion zsh, or kubectl completion bash, will dump a long script for the shell to be configured with. Again, you usually put that in your shell configuration, so every time you start a shell, it's loaded and you have it. I actually have nothing set up in this one at the moment, so I'm not sure I can demo that. It's okay, maybe something else. Okay, "argument list too long"; maybe it will work if I do that, I don't know, let's check. Okay, port-forward, and there you go: I just typed port, tab, and got port-forward. It's even better than that, because what if I do k get pods, okay? But this is the default namespace; we also deployed into the kubecon namespace. So -n to target another namespace, and then tab. Oh, what do I have? In the background, kubectl get namespaces was run, it grabbed the list, and now I have all of them as options. So I want to look into kubecon, actually: kube, tab. Oh, I have multiple kube-somethings, so it's kubecon, and there I go. Every time I hit tab, it does completion, and I can check the pods here. So, this is a side note, I don't know why it's here in the presentation, but we had to put it somewhere: the kubectl command and the Go client behind it sometimes use a lot of resources, like opening a lot of files and making a lot of calls to the API server.
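Since kubectl can hold a lot of files open at once, checking and raising the shell's open-files limit is a quick fix; 4096 below is just a reasonable example value, not a recommendation from the talk:

```shell
ulimit -n        # show the current soft limit on open file descriptors
ulimit -n 4096   # set it for this shell (capped by the hard limit)
ulimit -n
```

To make it stick, the `ulimit -n` line goes in your shell startup file like everything else in this section.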
Most of the time on a Mac, the default open-files limit is something like 256, so it's usually good practice to raise it a little. That's why it's here in the presentation. So, okay, now we're faster with kubectl, that's good, and we have hints that help us a little, but you can go even beyond that. Still me? Yeah, okay. So this talk is based on zsh, and we picked Oh My Zsh. Oh My Zsh is a framework that you install into your shell. It brings a lot of things already set up for you: shortcuts, and we're going to see a lot of stuff. Plugins, for example. And what do plugins do? For example, let's have a look. Can we have a look? No, not yet. Okay, I'm going to finish these slides first. And there are themes that you can apply on top of Oh My Zsh. I used the agnoster zsh theme for a long time. It's very good, it brings a lot of different things. And then I discovered Powerlevel10k. This theme is a beast, actually, and it adds a lot, a lot of features that we're going to demo in a moment. But for it to work best, you need a specific font, what they call Powerline fonts, usually. It's a kind of font with very specific glyphs, and when you use that, let's say, to define the prompt in your shell, it's going to be way easier to read. So. Great. There's the next one, I think. Want to demo? I think this one is less demo, but let's see the next one, kubecolor. Okay. So yeah, that's the kind of stuff you're going to add to your shell. And because of Oh My Zsh, if you add the kubectl plugin, the alias that we created manually before is defined by default, so we can actually remove what we added just before. So what do you get when you install all that? Remember, "orcadia" here is the name of my laptop. Everything is black and white, maybe not obvious to read, okay? When you install Oh My Zsh plus the Powerline font, you get a prompt.
And what is the prompt? The prompt is here, on the left side and on the right side of your shell, with some information by default. All of that can be changed and tuned. So here on the left I have an Apple, because I'm on a Mac, I guess, and a small house, just because I'm in my default user folder. So if I go into the demo folder, for example, you see, when I hit tab, the completion is now highlighted, so it's easier to read already. Okay, so now I'm in a folder for the demo. There are a lot of things that come with installing these tools, Oh My Zsh and the theme, and I'm not an expert in all of it; maybe I'm using, like, 5% of the thing. And people keep asking me, why should I install that? It does too much for me, I'm going to be lost. No, you're not, because you can still work exactly as you were working before installing it. It's just that you're going to discover that this small feature, or that other feature, helps, and you will grow into it over time. If I go, let's say, back here, into dev, personal, and then into Cloud Native Canada here, this is our repo for the presentation. And you've seen the prompt is now green, we have a small GitHub indicator, and we have the branch: I'm actually on the last-tuning branch right now. So I can do git branch, git checkout main, and okay, I'm on the main branch. So now you see that we're on the main branch, and we know about it. Can you show the kubectl thing? Yeah, this is not about Kubernetes yet. Let me go back to the shell, I'm sorry. Okay, we go to our demo. And, oh, by the way, there is the kubectl plugin for Oh My Zsh, and when you install that, what you get, alias, grep, kubectl, is a ton of new aliases here. And all of these are shortcuts to a lot of kubectl commands.
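A sketch of the ~/.zshrc pieces behind this setup; the plugin and theme names follow the Oh My Zsh and Powerlevel10k docs, everything else is up to taste:

```shell
# ~/.zshrc (sketch) -- Oh My Zsh with the Powerlevel10k theme
ZSH_THEME="powerlevel10k/powerlevel10k"

# The kubectl plugin defines alias k=kubectl plus kgp, kgs and friends,
# and wires up shell completion for them.
plugins=(git kubectl)

source "$ZSH/oh-my-zsh.sh"

# Optional, once kubecolor (shown next) is installed:
# alias kubectl=kubecolor
```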
Can I do a kubectl get pods, for example? So yeah, I want k for kubectl, that's our alias, but to get pods it's kgp: k, get, and p for pods. Bam, three letters and I have all my pods. Or services: kgs, k, get, s. You see? I'm actually not using these that much. I don't know why; I'm trained, I'm used to typing k something, so I use k instead of the full aliases. I'm not using all of them, but sometimes they can get very efficient, the more you're typing, especially with the long commands like port-forward, rollout history, scale deployment, for example. That's three letters instead of a whole long command. So we're switching now to kubecolor. As we said, it's still black and white in the terminal, and maybe not easy to read. Kubecolor is a tool that was developed some time ago and was unmaintained for the last few years. So I decided recently to fork the thing and bring it back to life, apply the patches. It's straightforward to install: it's brew install with the kubecolor tap. The new project lives under the kubecolor organization on GitHub. You can grab the release from the GitHub repo, or you can install it directly with Go if you have the Go toolchain; there's a way to set the version to latest, for example, but you can pick any version you want. And kubecolor is kind of a replacement for kubectl. Well, not actually a replacement, because kubecolor calls kubectl in the background, takes kubectl's output, and colors it. So let's just have a demo. What did I do? So yeah, this is the difference: we had only black and white, and now we have black, white and blue. That's the... Because everything is running, but... That's one color better, okay? But what happens in real life? You have crashed pods, you have pods that just started, and you can see all that in the colors. So it's pretty obvious that in the default namespace, the test pod is not running here.
It's pretty obvious from the age, it's only six minutes ago, that it's my new deployment that just failed, actually. And yeah, that works for most kubectl commands. So whatever you type, it's going to be very helpful. So the trick is an alias, yes: alias kubectl=kubecolor. By doing this, every time I type kubectl, it's actually kubecolor. So now we have the alias: k, again, k get pods. Bam, I have the coloring here. I can describe: k describe pod go-web-app, the simple deployment, for example. Simple deployment, blah, blah, blah. And here again, I do have some coloring, so it's very easy and straightforward to see what's going on here, the last message that is failing. It's a great helper. So that is kubecolor. Slide? Okay, the next one. So the next tool is Stern. Stern, I don't know why the name. Who knows about Stern already? One, two, three, four, maybe five. It's a very interesting tool. Stern is a way to add colors and to filter how you get the logs from pods. kubectl logs is a command you run a ton of times. Usually you deploy something, it's not working, you want to go and see what's happening, so you're going to grab the logs. What Stern gives you is coloring, a different color for every pod, and if you have multiple containers inside the pod, they're colored here too. That means with one command you get all the logs of all the containers inside a pod. Is that for service meshes in this case? Sidecars? Yeah, sidecars or whatever. Here we have a frontend, an API, and a SQL proxy all in one pod. I don't know why, don't ask me. But that's where it gets useful. And actually it's even more than that. If we switch to the demo here, so with Stern you have some examples here. You can go back. There are different ways to see it. Stern, so usually you would do that line. Or do we have the multi-deployment? This is the multi-deployment. So let's deploy that, it's another thing we're deploying.
And then we can look at it. So k get pods. Okay, we don't care, we did that already. k logs, okay. This is what you usually do: k logs in the default namespace with a selector, I want the logs from the application with the label app=multi-deployment. That's usually what you do. And what you get is, because it's a deployment, we have two pods, so we're getting the logs from both pods. But: "Defaulted container first out of: first, second." There are two containers inside the pod, and k logs only gives you the first container. If you want the second one, you have to tell it: I want second. And, okay, it's not even allowed like that, or maybe it's -c, I don't remember. You see, I don't use that often. Okay, -c to say I specifically want the second container, and that's what we get here. But with Stern, you get it much more easily, right? Yeah, look at that: stern multi. And then we have, in yellow and green, two different pods, each with the containers first and second, which are named here first and second. So with one command, we can quickly grab all the pods of the same kind. That means the "multi" I just typed here is actually a selector. So of course, I can type multi-deployment if I want. Or, okay, get pods, or you can target one specific pod if you want; you can put the full name. And there are a lot of other options we're not going to cover, and as you see, it's tailing. That means if new logs come in, they're going to be displayed. That's the equivalent of the -f option on kubectl logs. All right, thanks, Sebastian. We're not done yet, because we have another thing, which is a new addition to what we do. So we're just going to redeploy; it's kind of another version of the same deployment here. The difference, as you can see here, is, it's very dumb, you'd never do that, but when the container starts, it just echoes something, a message. And now the message is JSON.
And that's a way to demo that with Stern, you can output structured logs. So the command is just here: stern multi, as before, but I want the output as JSON, and actually I can pipe that into jq. And we're running very late, so we're going to move forward. But those are new options: you can use json here, or extjson, which also parses the message itself as JSON. So Stern is one of the tools I use the most, I guess. Yep, thanks, Sebastian. Honestly, for these two tools you just saw, you have to install them separately, as binaries. And in the Kubernetes community, we actually recently got a nicer way to integrate such tools: we have a kubectl plugin mechanism now, where you can bring different tools, like the ones Sebastian showed, into kubectl. So they become part of the kubectl command line, right? And the way you do this is through a project called Krew. Krew is a kubectl plugin manager, and if you go to their website, you can see that today they have over 200 different interesting plugins that you can use. We're going to cover maybe a few of them, the most interesting ones, but I think this is a nice way to bring in any good ideas, like the one Sebastian had with kubecolor; it should probably be in the Krew plugin index too. So Krew, all it does is install plugins for kubectl. And the first plugin that we want to mention is called kubectl-neat. kubectl-neat lets you strip down a YAML file. So let me give you an example; I'm just going to show you. The idea is that when you kubectl get something from the cluster as YAML, you get the YAML, but also a ton of other information that you don't care about when what you want is the actual YAML: to dump it somewhere for reuse, or to store it, modify it and reapply it.
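In practice, the Krew plus kubectl-neat flow described here might look like this (plugin names as listed in the Krew index; the deployment name is just an example from earlier in the demo):

```shell
# One-time: install plugins through Krew (Krew itself is installed
# per its own docs)
kubectl krew install neat ctx ns stern

# Strip the server-added noise before storing a manifest for reuse
kubectl get deployment nginx -n kubecon -o yaml | kubectl neat > nginx.yaml
```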
So kubectl-neat is a plugin that strips out everything that is not needed, everything that is specific to this deployed instance and isn't needed to replicate or store the YAML. So we don't need the demo; I think we're going to speed up a little bit now. Yeah, so if we do kubectl krew list now, you can see those are the plugins we installed for our cluster: we have ctx, we have neat, we have ns, stern and some others. So you're probably guessing, okay, which one is a good one. I'm actually going to show you kubectl ns and kubectl ctx. Remember that process of running several kubectl commands? You can do it much more easily now with kubectl ns; you can switch easily. Not kubectl config, just, all right. Yeah, kubectl ns. Yeah, so that's instead of the old way, instead of kubectl config get-contexts, for example. Yeah. Or kubectl get ns: for listing namespaces it's maybe less obvious, because with k get ns versus k ns, you're just dropping the get. But when you want to switch, it's just k ns, and you can change the default namespace. So bam, you're in another one, and whatever you type is going to be applied in this new namespace. Yeah, all right. Another plugin that we want to show is kubectl ctx. This one basically lets you easily switch clusters. If we run kubectl ctx, we can see our current cluster is kind-dev. And we can change it to staging with one command, right? So it's a much easier experience than before, where we had to type three commands to list the contexts and then switch. So these are probably the things you will use the most out of this demonstration: k ns and k ctx, switching back and forth. You want to talk about kubie? Yeah, and we have another option. As you saw, the current context is written in the kubeconfig file, so it's global.
So if you have multiple shells and you switch one to another namespace, all your shells are switched to that namespace. Kubie is a way around that: it's a little application that wraps your kubeconfig and scopes the configuration to the current shell. So when you use kubie, it works almost the same as ctx: it's kubie ctx, and you select a cluster, and this cluster will be the only one known by that shell. That means you can then change the default context elsewhere, and it will not change in this shell. So if you have multiple shells, you can have one dedicated to your development cluster, another shell for staging, let's say, and a third one for production. Yeah, I like this for demos. When you want to show, like, three clusters and run three commands in different tabs, you can just use kubie, and every tab has its own context, right? It's pretty cool. There's also kubie exec. It's a way to say, I want to execute this on these contexts, so you can target one command at many clusters. Like, you want to get pods and see the difference between dev, prod and staging: one command, and you can see all the pods and the differences between them. You can do a lot with that, but it gets very dangerous quickly, because k delete pod, and you're going to delete in production, actually, just because you were in the wrong shell. I'm sorry, just destroyed everything. So how do you know exactly which context you're using in which shell? That's where we have the prompt. Yeah, so we have a slide for that, the prompt. We talked about the prompt previously, when we installed Oh My Zsh and the theme. But the theme comes with a very interesting feature, which is dynamic values in the prompt. And as you can see here, on the lower side, in pink, is kind-demo / kube-system: it's showing you which context you're in and which namespace you're in.
So it's as easy as, you see, I type k, and bam, kind-dev is shown on the right. If I do k ns, let's say, kube-system, and I type k again, now I have kind-dev, which is still our cluster, the context, and kube-system, which is the namespace. It doesn't show the namespace while you're in default. So all this comes from the theme. It works for k, it works for kubectl, it works for istioctl, for example. And all of that is something you can configure. Actually, when you install the theme, a .p10k.zsh file is created with all your configuration, and if we search in it for kubectl, here we have POWERLEVEL9K; some of the settings are still named after the previous version, 9k instead of 10k. But here is the list of all the commands that will trigger this kubectl segment, this dynamic prompt. And you have that for kubectl, you have that for Terraform, you have that for the AWS CLI. There is a ton of information and stuff you can configure in this file. And again, all of this comes, like, for free when you install the thing. And then you will learn about it, and you will tune it to your specific needs and wants. Cool, all right. Thanks, I will carry on with the rest. So obviously you can do similar things with your cloud providers, but we want to move forward and just quickly touch on how you can deploy your applications, right? So usually, when you're deploying your application to dev, staging and production environments, there are going to be some differences in your deployment, right? And what options do you have out there to solve this challenge? My personal recommendation is, if you have something simple to deploy, we have a nice tool called Kustomize, right? Kustomize is a Kubernetes-native, template-free way to customize applications.
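On disk, that base-plus-overlay idea might look like this sketch (two files shown in one listing; field names per current Kustomize versions, resource names illustrative):

```yaml
# overlays/dev/kustomization.yaml -- points at the shared base
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: patch.yaml
---
# overlays/dev/patch.yaml -- only the fields that differ in dev
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app
spec:
  replicas: 1
```

Rendered with `kubectl kustomize overlays/dev`, or applied directly with `kubectl apply -k overlays/dev`.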
So you have your existing YAML file, and you create a kustomization file that customizes it for dev, staging and production, and you can apply the manifest with those changes to the different environments. So basically, your existing manifest is not going to change; we call it the base. You're going to create a folder structure for your overlays, which in this case would be dev, staging and production, and you say: I want to modify my deployment only, and I'm going to change the replica count, for example, for my file. So this is an example using Kustomize. We have base.yaml, which is our untouched existing manifest, and we have patch.yaml, which we call the overlay. And when we combine those together, on the right side you can see that the pod took the value from our patch, right? So you can imagine applying a similar one for staging, and a similar one for production, so you'll be able to modify your deployment per environment without really touching anything; you can modify it with just small configuration changes. On the left side, this is the folder structure of Kustomize. So if you want to build it, you have your base, which is your existing YAML manifest. In this case, there is a folder called dev; there could be other folders, like staging and production, and that's where you put your customizations, right? And then you can build the hydrated manifest, which already has the values that need to be deployed for the dev environment, and apply it with kubectl. Can we skip to the next thing? Sure. Now we have another tool, which is called Helm, right? And Helm is a little bit different: it's about templating, right? We think Helm is a great solution for things that already exist out there. So if you want to install any application that you see at KubeCon, or any database, we have Artifact Hub, which stores all the Helm charts, and you can deploy any existing application.
So basically, the way it works is that you pull your chart from a chart registry or Artifact Hub, and you can deploy any existing CNCF application into your Kubernetes cluster. Yeah, the interesting part here, because maybe it's not well known, is when we deploy with Helm, I wanted to emphasize these Helm commands. They're quite new, I guess, maybe since Helm 3. helm show values, so you can display the values that are defined in the chart. So if you don't know what to tune before you deploy, you can have a look at them very quickly. And then you have helm template to generate the manifests. You see, I've overridden some values here, values that I grabbed from the command above, and I'm just generating the template for review. Then I can install. Once you install, Helm will create a secret in the namespace where you deploy the application. And in this secret, it's going to keep a trace of what was deployed: what the values were, what the YAML was, whether the deployment was successful or not. It's what Helm uses to roll back or keep track of what was deployed. I really like the helm diff command: it shows the difference between what you deployed before and, let's say, a new chart with some modifications; it gives you exactly what is going to change, instead of you trying to figure out all the changes coming in that chart. That's a really cool command. If you use Helm often, install the diff plugin. And then there's the secret that I was talking about, so you can see what was deployed in the cluster. With helm list, you will see which releases were deployed. helm history shows the history of a release: let's say we deployed Prometheus, we want to see the history, there were two deployments. And then you want to see the values that were used to deploy this application with Helm: helm get values, and you have all that. So all of it comes from the secret that was stored.
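Reading that release secret back by hand is just a decode pipeline. Here is a sketch with a synthetic payload, so it runs without a cluster; against a real cluster, you would feed it the output of something like `kubectl get secret sh.helm.release.v1.<release>.v1 -o jsonpath='{.data.release}'` (that secret-name pattern is from memory, check your namespace):

```shell
# Helm gzips the release record and base64-encodes it; the Kubernetes API
# base64-encodes the secret data on top. Simulate that layering here:
payload=$(printf '{"name":"prometheus","version":1}' | gzip -c | base64 | base64)

# Decoding direction: undo the Secret encoding, then Helm's, then gunzip
echo "$payload" | base64 -d | base64 -d | gunzip
# → {"name":"prometheus","version":1}
```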
So you can also, because it's a secret, access it. You see there are two secrets because we had two deployments, and you can read the secret with this very funky command: base64-decode two times and then gunzip. You can see what's inside the secret and what was used for the deployment. So I think we're getting to the dark side of things already. We're going to talk about UIs, and this is like the ultimate stage. And the reason you probably want to use something like a UI: you already have things deployed on your cluster, but you want to figure out what's happening. There are maybe many users on the cluster, and kubectl get commands are not always as visible, right? So there are actually a lot of interesting options out there that we want to recommend. One of them is K9s. So K9s is basically a CLI slash UI, I don't know how you call this; it's a terminal UI on top of Kubernetes. For example, if you have a double screen, you open a terminal on the other screen, you start K9s, and you are able to see live what's happening in the cluster, with some metrics and information. Yeah, so you can basically stay in this UI and drill into the pods, for example; you want to know what's running, you can see the logs. You can even edit or delete some of the resources. So it works with all the resources in your Kubernetes cluster. Yeah, you can do port forwarding out of this. I think there is obviously a learning curve, but once you look into the resources it helps; it starts to make sense. So we have a few things in the tutorial that you can go ahead and also check at home. Yeah, we're running late, and we have a ton of stuff to talk about. The other one that we would recommend is Lens. Lens is a full application; you install it, and it gets access to all the resources.
It's straightforward, it's multi-cluster, so it's grabbing the clusters you have in your kubeconfig, and you can switch from one to the other. The interesting part, which I just learned, and I'm not into UIs, but if you are: I just talked about the Helm secrets, and you can drill into the Helm secrets directly from this UI. If you have Prometheus deployed, it's going to grab the metrics from your cluster, so you already have in the UI a view of how the cluster is behaving. And if you deploy a pod and you want to port-forward into it to check what's running on that port, you can do that directly from the UI. So it's a very neat tool if you're into UIs. Yeah, obviously it's really there to observe what's happening, and the nice thing about Lens is that it's not deployed inside the Kubernetes cluster; it's sitting on your laptop, and this lets you connect to many clusters and have a single pane of glass. So this is our last minute. I'm sorry, we spent too much time at the beginning, I guess. Look at the slides; you can download the slides from the schedule website. Look at the content of the website in the GitHub repo. There is a ton of stuff there. There are only three VS Code extensions, but they are worth it, seriously. Oh, we switched over here. There is different other tooling. There is dasel: dasel is like jq, but it works with YAML, JSON, and whatever, so it replaces jq, yq, and so on. Have a look at that. Yeah, and we're going to keep the site up and running, so if you want to go back home and run through the steps, it's going to be there. And yeah, please leave your feedback; we're happy to improve. And yeah, we're open for questions. Thank you very much. But we're here if you have some questions. Is there anything after?
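A quick taste of dasel, since it only got a one-liner above. The flags and selector syntax below are from dasel v2 as best we can vouch for them, and values.yaml is a made-up file name; double-check against dasel --help before relying on this:

```shell
# Same query syntax across formats; -r picks the input format.
echo '{"user": {"name": "ada"}}' | dasel -r json 'user.name'

# Convert between formats with -w, e.g. a YAML file to JSON:
dasel -r yaml -w json -f values.yaml
```

The appeal is exactly what the speakers say: one tool and one selector syntax instead of juggling jq for JSON and yq for YAML.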
All right, I was just wondering, is there a recommendation on combining Helm with Kustomize? Is that considered bad practice? Is that okay? Do you have any opinions on that? It's a great question. We had no time to talk about that, but yeah, Kustomize, that's why we recommend, I think, using the full version of Kustomize. Actually, Kustomize is embedded in kubectl, but it's a lighter version; you don't have access to all the features. Sometimes you may find some blocking things; if you use the full version, no problem. And Kustomize can use plugins. A plugin can be just a shell script, or a container that is going to be run. So obviously you can have a Helm plugin that is going to render the chart as YAML and pass that into Kustomize. Then you can use Kustomize to apply a patch, or drop or add something into the final YAML. So yeah, Kustomize is a nifty tool, and it totally complements and helps Helm, because, like, I'm not a Helm lover, and for, I guess, good reasons, and Kustomize can patch what Helm is not able to do. What was that command you did that brought up all the aliases earlier? I missed that. I'm sorry, can you repeat? You did some command that brought up a list of aliases of kubectl and... So, okay, how to list the aliases that are brought in by the theme and oh-my-zsh. It's just the alias command in the shell. It's going to list all the aliases that are created. So I just showed you some of the kubectl aliases, but actually there is a ton of other aliases, for git commands, for example. The exact same thing: instead of git checkout something, it's gco, I think; I don't know them all. I'm actually not using them much, but there is a ton to explore. This is oh-my-zsh, so as soon as you install it, you'll get all of these aliases, and I think it's better than doing it yourself. Yeah, that's good. Any other questions? Oh, links there, okay, sure.
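The alias answer above can be tried without installing oh-my-zsh at all: alias with no arguments lists everything defined in the current shell. A tiny stand-in, where the k, kgp, and kgs names mirror what the oh-my-zsh kubectl plugin defines:

```shell
# Define a few kubectl-style aliases by hand, the way the oh-my-zsh
# kubectl plugin does when it loads.
alias k='kubectl'
alias kgp='kubectl get pods'
alias kgs='kubectl get svc'

# `alias` with no arguments prints every alias defined in the current shell;
# grep narrows the list down to the kubectl ones.
alias | grep kubectl
```

With oh-my-zsh installed, the same bare alias command also shows the git plugin's shortcuts (gco and friends) alongside the kubectl ones.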
So the slides: if you go to the schedule website, yes, at the bottom of the schedule there are the slides. Yeah, and I'm just going to put up the link for the website itself. So thanks again for attending. Thank you very much. I hope you learned something.