Hello, and thank you for joining us for the SIG CLI intro session. My name is Eddie Zaneski, and I'm joined by Phil Wittrock, Sean Sullivan, and Maciej Szulik. SIG CLI is the special interest group for the command line tools inside the Kubernetes project. We don't necessarily work on every tool, but we provide libraries and frameworks that they are built out of. Today, we have a couple of quick, short updates for you, and we'll have plenty of time at the end for your questions. Obviously, though, the most burning question you have is: how do you pronounce this tool? There is no official pronunciation. There is a logo and a mascot, so I will let you make your own decision there. I personally say "kube C-T-L". We have a bunch of amazing contributors, two chairs that oversee governance of the SIG, and two tech leads that oversee technical direction. We're made up of a handful of different sub-projects. kubectl is the command line tool that you use to talk to a Kubernetes cluster. Kustomize lets you modify and manipulate YAML manifests that come down, without making changes to the originals. Krew and the Krew index are the package manager for kubectl plugins. I think we're up to 120 plugins right now, so you should definitely go check some of those out. Kui is an awesome project that provides an interactive graphical tool that you can use to work with kubectl and its output. And then, as I mentioned, we have a bunch of other libraries that are used for building command line tools. You can find us on Slack in the SIG CLI channel. We have a mailing list under Google Groups. We have a bi-weekly meeting Wednesdays at 9 a.m. Pacific Time, and we have a monthly bug scrub where we triage open issues. This is great for people who want to get involved in the SIG. Interest in SIG CLI and kubectl has grown over time. If you join the mailing list, you'll receive calendar invites to the meetings I mentioned.
We are always welcoming new contributors and any feedback you have, and hopefully we have some cool things cooked up for you on the roadmap in the next couple of releases. So thanks for joining us, and I hope you enjoy the sessions. Hi, my name's Phil Wittrock, and I'm here talking at the SIG CLI deep dive session. I'm going to talk about something we're calling resource functions in SIG CLI. These are really about how we model controllers on the client side, and how we take the level-triggered, holistic reconciliation of the system and bring those concepts to the client. So for context, the way controllers work is this: if you have a deployment controller, its job, when a user specifies a Deployment, is to manage ReplicaSets. So for instance, go create the ReplicaSet for this Deployment. But what makes this different from the client-side patterns we see today, around templating for instance, is that this controller also reads the ReplicaSets before deciding what to write, and it also writes status back to the Deployment. Now, this is the part that I think is important: it looks at the whole state of the system and it says, okay, do I need to delete a ReplicaSet, create a ReplicaSet, or scale one up or down? And this level-triggered, self-healing approach makes sense in a dynamic, changing system. A while back, a project called Metacontroller came along. The idea behind Metacontroller is: instead of setting up all this logic for watching resources and figuring out how to list them and read them and then write them back out, what if we decoupled "how do I want the existing state of the system changed" from "how do I read the existing state of the system, how do I trigger this thing, and how do I write it back out"?
And what I like about this model is that while Metacontroller reads and writes from the API server, with the framework providing the resources, in our case we could take the same approach and say, hey, what if it read from a file, and wrote back to a file, or wrote to a different file, or wrote to standard out? And so now we can take this approach of looking at the holistic state of the system and defining some business logic that knows how to take a set of inputs and change a set of outputs, and apply it to how we manage our configuration and define our applications. And the notion of saying, hey, I want to write a system where I take in a set of resources and apply some transformations to them before they get to their final destination, is not new. This is how Kustomize is architected: you have kustomizations that each take in a set of resources and then modify them, apply patches, apply annotations, apply prefixes and suffixes, and then pass them on to the next one. You can think of these as similar to that Metacontroller function we demonstrated; they look kind of like controllers, but finely scoped to a static set of transformations like patches, instead of arbitrary transformations. The second example is the Helm flag for post-rendering. You can have some executable modify the output of Helm before it's sent to the API server. And again, this looks similar to the Metacontroller model: you provide some executable, Helm sends a set of resources to it, the executable emits them back, and then these go on to the API server. So this model of writing logic which reads resources, transforms them, and writes them back out is taking root in a couple of different places. Let's focus in on the core principle of what it's trying to do and how controllers do this.
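To make those two extension points concrete, here is what the Kustomize side might look like. The field names below are real kustomization.yaml fields, but the file names are made up for illustration:

```yaml
# kustomization.yaml: take in a set of resources, run them through a
# fixed pipeline of transformations, and pass the result on.
resources:
  - deployment.yaml          # hypothetical input manifest
namePrefix: staging-         # prefix transformation
commonAnnotations:
  team: platform             # annotation transformation
patchesStrategicMerge:
  - add-logging.yaml         # hypothetical patch file
```

On the Helm side, the hook is the `--post-renderer` flag (Helm 3), for example `helm install my-release ./chart --post-renderer ./my-transformer`, where `my-transformer` is any executable that reads the rendered manifests on standard input and writes the modified manifests to standard output.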
We can actually start to do some new and interesting things. One thing we can do is the Helm plus Kustomize pairing. This is a common thing folks do when they have a Helm chart and want to change the output in some way without changing the underlying chart: they use Kustomize as a post-rendering step for the Helm resources, to apply patches. And while this is a cool thing to do and people find it successful, one limitation of the approach is that the post-transformation doesn't have any context about the Helm chart. So if you're deploying a Spring Boot application, you're applying patches blindly at the resource level, not at the Spring Boot level. If resource A comes out today and you patch A, and then tomorrow resource B comes out instead, you have to write a different patch, a patch for B, because this step doesn't have any context about what the resources are. So there are two kind of cool things we can do here. The first is to replace static patches: if you have a common cross-cutting piece, like a log injector, where you want to make sure a log rotator is installed as a sidecar container, you can do it as a function. The function can look for a logger annotation on resources, find everything annotated with it, and figure out how to put the logger in. And it can do more than a patch, which can only match resources and patch them; it can, for instance, decide what type of logger to install based on whether the workload is a StatefulSet or a Deployment, or on other aspects of it. It can dynamically reconfigure itself based on what it sees. The second kind of cool pattern is this: instead of patching a resource on the way out (in this case, modifying resources to have a logger on the way out), we can turn the flow around.
We can actually provide the patches as input to a function. So if you replace your template with a function that is going to act more like a deployment controller, then you can have the typical values as input, but you can also specify, say, the Deployment that this thing would normally generate. So you annotate this with Spring Boot: this is a Spring Boot function. And rather than just spitting out a Deployment that it generates by rendering those values, it can find the Deployment that was specified and given to the function and say: I'm going to decorate this Deployment that was given to me. Now there's no patching, there's no post step where we figure out how to take the output and change it, because as part of the input you're specifying what you actually want. And this function has the context to look at this thing and say, hey, you're specifying some environment variable here, but what you specified in the values file doesn't make sense with this combination, and only something like this function would have that context, the way a controller would. So those are some of the interesting ways of looking at functions: using them to augment existing solutions, using them as post-renderers, as parts of Kustomize extensions, or in other places and other tools. The idea is to bring in the holistic approach of looking at not just "here's some abstract template input, produce some output", but "here's some template input, and here's some additional state": the observed state, if you will, the equivalent of those ReplicaSets for the deployment controller. Then, instead of going off and doing your own thing and building something new, the function takes this state and modifies it in place to make it look the way I want it to.
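As a minimal sketch of the kind of function described above, here is an annotation-driven logger injector in Python. Everything here is an assumption for illustration: the annotation name, the image names, and the plain-dict representation of resources (a real Kustomize function consumes and emits a YAML ResourceList). The point is the shape of the logic: read the whole set of resources, decide based on what you observe, and write the modified set back out.

```python
LOGGER_ANNOTATION = "example.com/logger"  # hypothetical annotation name


def inject_logger(resources):
    """Append a log-rotator sidecar to every annotated workload.

    `resources` is a list of Kubernetes objects as plain dicts, the
    equivalent of what a function would read from a file or stdin.
    """
    for resource in resources:
        annotations = resource.get("metadata", {}).get("annotations", {})
        if LOGGER_ANNOTATION not in annotations:
            continue
        # Something a static patch cannot do: pick the sidecar image
        # based on the kind of workload the function is looking at.
        image = ("log-rotator:statefulset"
                 if resource.get("kind") == "StatefulSet"
                 else "log-rotator:deployment")
        containers = resource["spec"]["template"]["spec"]["containers"]
        containers.append({"name": "logger", "image": image})
    return resources
```

A real function would wrap this in the read-transform-write loop: parse the resource list from a file or standard input, call the transformation, and serialize the result back out. That is exactly the Metacontroller shape, with files in place of the API server.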
Anyway, thanks, and have a great KubeCon. Hello, I'm Sean Sullivan, and I am a co-chair of SIG CLI. What I'd like to talk to you about today is how to reuse kubectl code. As we all know, kubectl is the standard command line client for communicating with and controlling a Kubernetes cluster. And because of that, there is already a significant amount of well-used, well-tested, battle-hardened code implementing functionality to communicate with our clusters. So wouldn't it be cool if we could reuse this code? The good news is: we can. Over many quarters, SIG CLI has been working to refactor the kubectl code and move it into a location called staging. In this staging location, the kubectl code is now significantly easier to import and reuse, and I've included a URL here to show you where the kubectl code now lives. Because it lives in this new staging location, we can easily import kubectl code with a simple, straightforward import of k8s.io/kubectl. During this brief overview, I'm going to use the kubectl taint command to show how we can reuse kubectl code. But what is kubectl taint? Or, what is a taint? A taint is basically extra metadata that we put on nodes to allow more advanced scheduling decisions. An example of this command is kubectl taint nodes foo, with foo being the name of the node, followed by a key/value pair and an effect. It turns out that almost all kubectl commands follow a particular pattern. The pattern is: there is a struct created that contains all of the fields necessary to run a particular command, and then there are three functions which work on that options struct to implement the command. These three functions are Complete, Validate, and Run. The Complete function fills in the values of the struct, and the Validate function will, unsurprisingly, ensure that all the fields of the struct are valid.
And then the execution of the command is an Options.Run function. For our example taint command, there is the TaintOptions struct, a TaintOptions.Complete, a TaintOptions.Validate, and then a TaintOptions.RunTaint, which actually executes the taint functionality. So here is the actual code in staging implementing the taint command, and here is what the TaintOptions structure looks like. It has the particular fields that are necessary for, and used in, the implementation of the taint command. Further down, we can see how this TaintOptions structure is used: here's the command, here's where we create the TaintOptions, and here are the three invocations of the functions necessary to implement the taint command, the three I mentioned before: Complete, Validate, and RunTaint, the execution. So what if I wanted to use this taint functionality in one of my own applications? Well, I would simply import this taint code; k8s.io/kubectl/pkg/cmd/taint is the location I would import. I would create a TaintOptions structure, and then, with massive hand-waving here, I would fill in and validate the fields of that options structure, and I would call RunTaint on those options. Because we have such a small amount of time, I'm not going to be able to dig into actually filling in or validating those fields; all I want to convey is the general overview of how we would reuse kubectl code. So in summary, if you don't remember anything else from this short discussion, it's this: you can reuse kubectl code in your own applications. Thank you very much. I'm going to show you a few helpful tricks you might not have heard of before when using kubectl. Let's get going. I'm going to create a centos pod, and I'm going to copy the kubectl binary into it. I'll explain why in a moment.
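The demo setup itself is not shown here, but assuming a pod named centos, the two steps just described might look something like this (the image tag and the paths are guesses, not taken from the demo):

```shell
# Create a long-running pod to exec into (image tag assumed).
kubectl run centos --image=centos:8 --restart=Never -- sleep infinity

# Copy the local kubectl binary into the pod.
kubectl cp "$(command -v kubectl)" centos:/usr/local/bin/kubectl

# Open a shell inside the pod for the experiments that follow.
kubectl exec -it centos -- /bin/bash
```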
So kubectl has several places it will search for configuration. You are probably familiar with the $HOME/.kube directory, which is the default directory where kubectl stores its configuration and cache files. You've also heard about the --kubeconfig flag, or the KUBECONFIG environment variable, which allows passing a configuration file location. But not many users know that there's also one other location kubectl will look at when loading configuration: the so-called in-cluster configuration. Since a picture is worth a thousand words, it's best to show you how this works with an example. Inside of my pod, I can invoke a kubectl get pods operation. Unfortunately, this operation ends with an error, which says that my user does not have the necessary permissions to perform that operation. If you look closely, though, you'll notice that the username is actually the default service account assigned to my pod. This is the in-cluster configuration kicking in, which checks for the presence of three elements: a file with the service account token, located at /var/run/secrets/kubernetes.io/serviceaccount/token, and two environment variables, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT. When it finds all three of these, it knows it's actually running inside a Kubernetes cluster, and that it should read the injected data to talk to the cluster. In my cluster, I have a set of pre-existing cluster roles, one of which is view. Let's create a cluster role binding that adds the view role to my service account, and let's try to repeat the get pods operation. This time it succeeded. Perfect. Let's move on to the next example. I'm going to change my user to a non-admin one, and now I'm going to try to get pods as in my previous example. It's forbidden. So, Kubernetes has a Unix sudo-like capability: it is called user impersonation.
To use it, you need to pass the --as flag and the username that you want to impersonate. So let's try to impersonate the system admin. Unfortunately, that operation fails, because, similarly to how it works in Unix systems, we don't have the necessary permission; namely, we need a permission with the impersonate verb. In my cluster, again, I have a pre-existing cluster role called sudoer. Let's add the sudoer role to my user, and let's again try to impersonate the system admin. This time the operation works. To prove that the impersonation is working as expected, let's try invoking the same command, but without impersonation. And, as expected, this operation fails. There's also a --as-group flag, which allows impersonating a group, but I'll leave that as an exercise for the viewers. The final two examples I would like to present: the kubectl edit command will, by default, use the vi editor on Linux and the notepad editor on Windows. But if you want to change the editor kubectl edit uses to your own favorite, all you need to do is set the KUBE_EDITOR environment variable. In my case, I'm going to use VS Code, and I'm going to edit that same pod we've been working with. Let's try to change the centos image, save it, and then verify the image of the modified pod. And as expected, it is centos. As a final example, there is a new alpha command that we are hoping to promote to beta in 1.20: kubectl debug. Debug allows debugging your applications, and the best way to learn all its possibilities and capabilities is by giving it a try. So, happy debugging. Thank you very much for your time, and now we're going to answer some of your questions.