My name is Eddie Zaneski. I am a developer advocate at AWS, and I am also a co-chair of SIG CLI. SIG CLI leadership is the three of us, and then also Maciej, who is not able to be here today.

So what is SIG CLI? What are we here to talk about? As the name suggests, we focus on client-side tooling, but SIG CLI doesn't mean that we own every conceivable CLI that has something to do with Kubernetes. That would be a ton of stuff and pretty crazy. What we focus on is the development and standardization of the CLI framework and its dependencies, the establishment of conventions for running CLI commands, and standards such as POSIX compliance. We also want to improve command-line tooling from a developer and DevOps user experience and usability perspective.

That said, we do own some specific code and some specific command-line tools, some of which you have definitely heard of before, such as kubectl and Kustomize, and Krew, which is the plugin framework for kubectl. Our latest addition is a tool called Kui, which is a framework for adding graphical elements to command-line tools. It was recently donated by IBM, and we're very excited to have the Kui project be part of SIG CLI. In addition to that, we own the cli-utils, cli-runtime, and cli-experimental code bases, which contain more tools and experiments that we've been doing around CLI packages.

You can find us in the #sig-cli channel in the Kubernetes Slack, and you can also reach us on the kubernetes-sig-cli mailing list. Perhaps the best way to get involved and to reach us, though, is at one of our weekly meetings. Our meetings are every single Wednesday, barring cancellations, at 9 a.m. Pacific time, and they alternate between a normal SIG meeting, where we have an agenda that anyone can bring items to for discussion (you just go and add your item to the Google Doc and we'll all be happy to talk to you about it), and, on the alternating weeks, a bug scrub. The bug scrubs themselves alternate between being focused on Kustomize and being focused on kubectl. You can come and join us for one of those and we'll triage some issues and discuss some bugs, and that is a great way to get started with the SIG.

The first thing that we want to do today is walk you through some of the enhancements that we have in progress in SIG CLI, and give you a status update on each one. The first one I want to bring up is KEP 2775, kubectl delete protections. This is a very new KEP; it's still in the discussion phase. There's a discussion both on the KEP itself and on that SIG CLI mailing list that I mentioned, so go check those out. We'd really love to have your feedback. At a high level, what we're trying to do here is make kubectl delete safer to use. We have two ideas for this that are proposed in the KEP. The first one is a new interactive flag that would, if enabled, require you to confirm the set of things that are about to be deleted before the deletion actually happens. That can be particularly useful if you're targeting a large group of objects, say with a selector, so you can confirm that the set that is going to be deleted is what you intended. Pretty important.
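As a rough sketch of that first idea: since the KEP is still provisional, the flag name and the prompt below are purely illustrative, not the final design.

```bash
# Hypothetical: confirm before deleting everything matched by a selector.
# The exact flag name and output format are still under discussion in KEP 2775.
$ kubectl delete pods --selector app=web --interactive
You are about to delete the following 3 objects:
  pod/web-6d4b7
  pod/web-8k2mn
  pod/web-xj9qp
Do you want to continue? (y/N): n
deletion cancelled
```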
The second idea is for certain variants of the kubectl delete command that have an inherently large blast radius, namely the --all and --all-namespaces flags. We want to add an artificial delay to the command for a certain number of seconds, and emit a warning telling you that we're about to perform this very destructive operation, to give you a chance to abort if that's not actually what you intended to do. So again, this is a provisional KEP, there's discussion in the KEP and on the mailing list, and we're seeking feedback on it, so if you're interested, please get involved.

The next KEP I wanted to touch on is kubectl debug. This one has been implemented for a while, but it has a new feature called debugging profiles that I want to highlight. This feature provides more configurability for the generated pods and containers, the ones that are created by the kubectl debug command, making it more useful in certain use cases. For example, if you're debugging a node, you might want your pod to be created with the NET_ADMIN capability. Now you can do that by passing --profile=netadmin.

Next up is server-side apply. This one's very exciting. It graduated to GA in release 1.22. It's still not on by default; there's an ongoing discussion about when and how to do that. But if you just add the --server-side flag to your apply command, you will be using it, and I highly recommend that you do. Part of the point is, as the name implies, to move the implementation from the client side to the server side, so that kubectl, the CLI, is not the only keeper of this very important functionality. Since it's now on the server side, other clients can take advantage of it as well, which is pretty nifty. But that's not the only thing that came with this change. It also fixed a number of bugs that weren't possible to fix in the client-side implementation, so there are some inherent advantages to switching over. And it introduces a new concept of field ownership that allows multiple apply actors to collaborate better on object state. They can basically claim ownership of a certain field, and if another actor attempts to take ownership of that field as well, we'll generate a conflict so the field won't thrash anymore. You get proper conflict detection in that scenario.
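Here's a quick sketch of what that looks like in practice. The resource names and field-manager name are just for illustration.

```bash
# Apply a manifest using server-side apply; the API server performs the merge
# and records which manager owns which fields.
kubectl apply --server-side --field-manager=my-deploy-tool -f deployment.yaml

# If another manager already owns a field you are changing, the apply fails
# with a conflict. You can take ownership explicitly if that is intended:
kubectl apply --server-side --field-manager=my-deploy-tool --force-conflicts -f deployment.yaml

# Field ownership is recorded on the object and can be inspected:
kubectl get deployment my-app -o yaml --show-managed-fields
```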
Next up, we have kubectl command metadata in HTTP request headers. This one adds a bunch of metadata about the command you are running to the HTTP headers that kubectl sends. It's in beta as of release 1.22, and it gives you more insight into which kubectl commands are being run against your cluster, connecting the dots between the user actually running a command and the request that you end up seeing on the API server. This is a step towards getting telemetry on kubectl usage.

This next one, the Kustomize plugin graduation, is something that I'm passionate about as a maintainer of Kustomize. KEP 2953 is an overarching KEP that describes a plan for Kustomize plugins as a whole. To give you a summary of Kustomize plugins: there are a bunch of different ways that you can use them right now, they're all in alpha, and they have been for a long time. What we want to do with this KEP is converge them into a single KRM-driven feature that has a better story for plugin distribution, governance, and trust than we have today. It's another provisional KEP and we're seeking feedback on it, so if you are interested in this, please get involved. Because this is more of an overarching plan, it requires some new features as well, and we split those out into their own KEPs. Specifically, we have the Kustomize plugin composition API KEP; that one is the simplest, and it has already moved to implementable. Then we have the plugin catalog KEP, which is one of the key features that will enable the plugin distribution, discovery, and trust that I mentioned. And finally, we have the KRM functions registry KEP, which proposes a SIG-sponsored implementation of a registry of KRM functions; that would result in a new sub-project if approved, which is pretty exciting. Please go take a look at those KEPs and give us your feedback.

Last but not least, we have KEP 2950, which adds subresource support to kubectl. This would add a new flag so that commands like get, patch, edit, and replace can deal with subresources like status and scale, which is not possible today. We're targeting alpha in release 1.23 for this feature.

And with that, I'm going to hand it over to Eddie to tell us about building and testing kubectl.

Cool. We're going to run through this a bit, because we definitely want to have lots of time for questions and conversations at the end. kubectl is what we consider a staging repo. The code originally lived inside the kubernetes/kubernetes main repo, which we call k/k, and kubectl code still lives in k/k currently, but we are actively trying to migrate it out. This has been the work of a lot of awesome folks over the past couple of years, and we're actually super close. I think we're targeting 1.24 to be able to release and version kubectl independently, which will be awesome. We don't take PRs directly to the kubectl repo; they all need to be made against kubernetes/kubernetes directly, but we do like to consolidate our issues there.

To build kubectl, you simply run make kubectl at the root of k/k. For testing kubectl, we have a few different types of tests: unit tests, end-to-end tests, and integration tests.

Unit testing is very straightforward. These are your standard Go unit tests, with a little bit of framework built around them. You just run make test with WHAT set to the path of the module or the individual packages you want to test, and it can also take regular Go flags to target individual tests. The thing to note is that these run very quickly and do not require a cluster at all, so they're completely offline. These are the ones you'll run normally during development, and if you make a PR that changes a command and doesn't have a unit test, I will most likely ask you to add one, so thank you.

The end-to-end test suite is what we run in our periodic tests. This tests kubectl against an active Kubernetes cluster, so you do need a cluster; you can either use kind locally or have a remote test cluster, and the tooling will also help you spin one up. It's built using Ginkgo, which is a behavior-driven Go test framework. First you have to build Ginkgo, then you have to compile the end-to-end test suite, and then you have to run a nice, lovely, long Ginkgo command. There's a kubetest command that hides some of this, but usually I just find the command in my bash history, run it, and change the name of the test I'm trying to target.

Last, we have our integration tests. These are our bash tests. They will try to spin up a cluster; that will fail if you're on a Mac, so you'll usually want a local cluster configured. This is definitely where we need the most love in terms of our testing, so we hope to dig in here soon and evolve it a bit. To run these tests, you run make test-cmd and pass in the name of the test that you want to run.
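To make that concrete, here is a rough sketch of the build and test invocations described above, all run from the root of a kubernetes/kubernetes checkout. The specific package path, focus string, and test name are illustrative; check the contributor docs for the exact incantations on your branch.

```bash
# Build the kubectl binary.
make kubectl

# Unit tests: standard Go tests, fast, no cluster required.
# The package path passed via WHAT is illustrative.
make test WHAT=./staging/src/k8s.io/kubectl/pkg/cmd/apply

# End-to-end tests: need a reachable cluster (for example, one created with kind).
make ginkgo                      # build the Ginkgo runner
make WHAT=test/e2e/e2e.test      # compile the e2e test suite
./hack/ginkgo-e2e.sh --ginkgo.focus="Kubectl client"   # run a focused subset
# (exact provider/environment setup depends on your cluster; see the e2e docs)

# Integration ("test-cmd") tests: bash-driven, spin up a local control plane.
# The test function name passed via WHAT is illustrative.
make test-cmd WHAT=run_kubectl_apply_tests
```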
With that, I'm going to hand it to Sean to talk about kubectl apply --prune, which makes apply a complete CRUD tool.

Okay, I guess you can hear me now. This automatic deletion functionality is called pruning. In order to implement pruning, what we want to do is delete the objects which are no longer needed, which begs the question: how do we figure out what's no longer needed? The way we do that is we have to specify what was previously applied. So we're going to have two sets of objects. When you're applying a set of objects, you know what you're currently applying, but somehow we also have to specify what had previously been applied, and from those two sets we calculate what we no longer need. That's called the prune set. You can see I put a little set algebra up there: the prune set is the previously applied objects minus the objects being applied currently, and those are the objects that we want to delete, the ones we think we no longer need.

Sorry, I went too far on the slide and skipped my image of what pruning does. This is a very, very simple example of pruning. In the initial apply we're only applying two resources, A and B, and then subsequently, when we're pruning, we apply B and C. You can see in the third image on the bottom that what we've calculated as the prune set is what we are not currently applying but had previously applied, which is item A, so we prune, or automatically delete, that.

So what does that command actually look like? Well, it wouldn't be a real presentation unless I showed you some YAML. I've highlighted the names: these are basically just two ConfigMaps, cm-a and cm-b, and you can see they both have the label app: cm-label. The second apply is going to be these two ConfigMaps, B and C. And here's where the rubber meets the road. In the first apply, we're applying those first two ConfigMaps, and the output says, okay, great, A and B were both created. Now, here's where we get into pruning. For the second apply, which has B and C, we add the --prune flag, and somehow we have to specify what we had previously applied. The way we do that in this instance is with the label selector -l app=cm-label. That's how we specify what was previously applied. And you can see in the output that B, which was already there, is unchanged; we've created a new one, the C ConfigMap; and A was calculated to be no longer needed and was pruned.
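As a sketch of that flow, using the same label as in the example (the directory layout is illustrative):

```bash
# First apply: the manifests contain cm-a and cm-b, both labeled app=cm-label.
kubectl apply -f first/
# configmap/cm-a created
# configmap/cm-b created

# Second apply: the manifests now contain cm-b and cm-c.
# --prune plus the label selector tells kubectl what was previously applied,
# so anything matching the selector that is not in this apply gets deleted.
kubectl apply -f second/ --prune -l app=cm-label
# configmap/cm-b unchanged
# configmap/cm-c created
# configmap/cm-a pruned
```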
Okay, so there are more flags that have to do with pruning. I've brought up the --prune-whitelist flag, which specifies the group/version/kinds that pruning is restricted to, the items which can be pruned. In this case I said the GVK should be ConfigMaps, so it's actually going to output the same thing we saw before. But if you need to restrict the set of items that you're considering for pruning, as in the previous apply, then you would do it with the --prune-whitelist flag, and here you would use --all instead of the label selector. You can see that the output is exactly the same: ConfigMap A got pruned. One of the consequences of this --prune-whitelist flag is that your CRDs will not work out of the box with pruning. You actually have to specify the GVK for your CRDs in the whitelist if you want to prune your custom resources.

Okay, so now we're going to go over some drawbacks, and there are some significant drawbacks. If you're deleting objects, this is dangerous functionality, and if you're doing it by specifying resources with a label selector, there can be some serious consequences. I've done it myself: I've inadvertently pruned, that is deleted, objects because there were extra objects carrying the label that I didn't realize would end up getting pruned. So you have to be careful; dry run is going to be your friend. Also, as I mentioned before, the GVKs that are allowed to be deleted or pruned are hard-coded, and they're hard-coded for each version of kubectl, which is a serious issue. Finally, there are corner cases with specifying namespaces when you're pruning. If you're applying into a brand new namespace and you're attempting to prune objects in a previous namespace, and none of the objects that you're applying now are in that previous namespace, pruning doesn't work.

So as you can see, there are some serious drawbacks. In fact, kubectl apply --prune is in alpha and has been in alpha for many years. To leave you on an up note, we're attempting experiments in the cli-utils repository to get better pruning functionality. We recognize the deficiencies in the current kubectl prune implementation, and we are actively working on experiments to improve that pruning experience. With that, I think I'm going to bring Eddie back up for the last top-of-mind items.

Thanks, Sean. So before we get into questions, there are a few things that we're always thinking about as a SIG that are top of mind. We currently use a library called Cobra, which is a very common Go CLI library; lots of CLIs are built with it. We want to refactor our kubectl commands so they do not directly depend on Cobra as a library. Right now, take apply, for example: the apply command takes in a Cobra command struct to actually do anything. We want to pull that out, and we want people to be able to consume the command code as a library. So if you want to use apply inside of your application, we want you to be able to do that without using Cobra. We have a few reworkings of what we think that looks like, and this will be a great place for good first issues once we're ready to get started, so please start joining us in the SIG meetings.

Another one is how we better handle flags. This one comes up very often. kubectl create is a very imperative type of command, specifically in how we pass flags for the resource. Users want every single flag possible, for every single component of the resource, to be able to be passed on the command line, and that's very hard to maintain. Taking structured data and turning it into command-line flags is very, very difficult, and it's not something we want to do. We don't have a solution for this yet, but the community wants support for it, so we're kind of at a stalemate where we don't know what to do. If you have ideas or suggestions, please let us know. We want to support folks that have valid imperative use cases, but at the end of the day, Kubernetes is supposed to be a declaratively driven system. So give us your ideas, please.
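One common middle ground today, more a pattern many users rely on than a SIG recommendation, is to use the imperative generators only to scaffold YAML and then switch to the declarative flow. The names below are illustrative.

```bash
# Scaffold a Deployment imperatively, but only render the YAML locally.
kubectl create deployment web --image=nginx:1.21 --replicas=3 \
  --dry-run=client -o yaml > web-deployment.yaml

# Edit the file to set anything the create flags cannot express,
# then manage the object declaratively from here on.
kubectl apply -f web-deployment.yaml
```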
kubectl performance is something that we need to dive into at some point. When you're talking to very large clusters, the memory usage balloons, because we go from JSON to YAML, then back to JSON, then to Go structs, and there's a lot of serialization and conversion that happens, because Go is a strictly typed language without generics yet, right? Kubernetes is a very strongly typed kind of API, so there's a ton of serialization. If anyone is an expert at profiling Go applications, we could definitely benefit from your help, so feel free to swing by and let us know. We'd love to get that down at some point.
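If you want to poke at this yourself, kubectl already has pprof hooks behind global flags, so a first pass at profiling a heavy command might look like the sketch below; the command and output file names are just examples.

```bash
# Capture a heap profile while listing a large number of objects.
kubectl get pods --all-namespaces --profile=heap --profile-output=heap.pprof

# Capture a CPU profile of the same command.
kubectl get pods --all-namespaces --profile=cpu --profile-output=cpu.pprof

# Inspect the results with the standard Go tooling.
go tool pprof -top heap.pprof
go tool pprof -top cpu.pprof
```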
And new contributors: we love welcoming new contributors to our community. We have a great new contributor sitting over here, a new grad who is very active in the community. Whether you're a student or a professional developer, we love having you come by. kubectl is a very difficult code base to jump into; it's just the unfortunate nature of things that everything kubectl touches is inside Kubernetes or the API server, so there are a lot of different types. We definitely need help solving some of these problems, but it is a difficult code base. If you're dedicated and you really want to learn, we'll spend the time to work with you.

So with that, we definitely have some time for questions. We'd love to hear your feedback, comments, and things we should be thinking about. Of course, kubectl isn't the only code base that we work on, so if you're looking for other opportunities to contribute, still come talk to us and let us know what you're passionate about. Kustomize, I know, has some opportunities to contribute, and for the functionality that Sean was just talking about, in terms of prototyping a new apply, we have some more experiments and things like that you could potentially help out with too. So don't hesitate to reach out to us if you'd like to get involved. I think we have a mic for questions, and we have some online. Nothing online yet. Anyone have questions, comments, suggestions?

So the question is: if I want to use the apply functionality from kubectl, can I import that into my application? You can right now. It's not the best to use, because there's that tight Cobra dependency. We also don't support versioning it right now, so there's no guarantee that we won't break something on you. Once we get to the place where we feel it's ready to be consumed as library-type code, then we would figure out how a support cycle works. But as of right now, you absolutely can; if we break something, we're sorry. That said, for apply specifically, you should probably leverage server-side apply.

We also have an online question: where's the best place to get started in contributing? Kustomize. Kustomize has a ton of great issues, and the Kustomize team is definitely in need of some new folks to come in who are really passionate about the project, so feel free to jump in there. Any other thoughts? An easy one is to come to the bug scrub, I would say. We have those every other week, and you can come. If you have an issue that you want to ask about, that's a good opportunity, but it's also a good opportunity to see the issues that we're talking about as we triage them. There are opportunities to claim issues every single week that we do that, so that's a good way to get started too. Yeah, we'll often say, hey, this is a great issue, does anyone want to take this on? And it will kind of go silent when we don't have anyone step up. So that's a great time to come in and find an issue.

I think we should really emphasize that actually showing up in person to the meetings takes it to the next level. It's really hard to interact just online, through the issues. If you can, and I know that it's not easy, since most of the meetings are at 9 a.m. Pacific time, which doesn't fit a lot of other contributors, showing up in person makes it easier to get involved than working through the issues alone. That's something we definitely want to change, and we've tried to; it's just very hard to keep track of all the things, I think. So the unfortunate reality is the meetings, or DMs on Slack, especially if you're in a different time zone. And we do use the good first issue label on many of our repositories; that's another thing you can look for.

Any other questions? I see somebody has an Eddie Zaneski fan club shirt on. I have a question: how do I get one of these Eddie Zaneski fan club shirts? Cool. Well, we'll hang around. We really just want to hear feedback. And we mentioned the kubectl command headers KEP; that's something that we'd actually like to talk about at some point. Currently kubectl can send all the command information via headers to the API server, but it's not really persisted anywhere, such as logs. We would love a way for operators to opt in to sharing that data once we write it somewhere, because we'd love to know how you're using kubectl. We can figure out how to anonymize it and take out anything PII-wise, but we'd love to know where to put our efforts in terms of commands and plugins. So, how are you using kubectl? Cool. Thank you for having us. Thanks for coming.