Hey everyone, thank you very much for showing up. I know we're the last session of the day, so a heartfelt thank you for actually showing up this late. Yeah, everyone's stoked to learn about kubectl, huh? That's awesome.

So I'm Eddie Zaneski, and I'm joined by Maciej Szulik. We are the tech leads for SIG CLI; there are a few others of us on the CLI leadership team. SIG CLI is the special interest group for the CLI tooling of the Kubernetes project. Kubernetes is divided up into different special interest groups, and each one is responsible for a different part of the project. We own the CLI tooling.

The 1.29 release went out a few months ago; you may have seen it. A couple of things we shipped: plugin support landed in kubectl create, so you can build plugins that add your own create subcommands with their own specifications — you can do kubectl create my-thingy and build a plugin for that. We also shipped some improvements to help output and to wait.

We're currently in the 1.30 release process — I think beta.0 is what's cut. Yep. So you should see it in the next couple of weeks. We have some pretty cool features landing in alpha. We have custom profiles for kubectl debug: you can specify the different resource profiles and seccomp profiles that you want for your debug containers. It's an alpha feature, so get in there, play with it, give it a try when 1.30 drops.

Another big project that's been worked on is transitioning from SPDY to WebSockets. SPDY is kind of what HTTP/2 was called before HTTP/2 was a real thing, so it's super old and super deprecated, but it's used everywhere for all of our streaming and long-lived connections inside of Kubernetes. There's been a big effort — shout out to Sean Sullivan — to cut that over to WebSockets for a modern approach. You will see that land in beta, which is awesome.

And two big stable features landed: kubectl delete now has interactive delete, which is super rad — I'll show you that in a second — and we're finally moving aggregated discovery to stable, which means your discovery time should go down. Yeah, significantly down. I'll talk about it in a minute when we go through the major features.

So: kubectl delete -i. Take a look at the slide. How many times has this happened to you, where you accidentally deleted "production-do-not-delete" by auto-completing the wrong thing? As soon as 1.30 drops — and right now, behind an environment variable — you can toss a -i on delete and it will prompt you before you actually remove anything, so you can stop yourself from shooting yourself in the foot. Just get in the habit of tossing a -i on every delete and you will have a great time. We want to make this the default behavior at some point, and we're exploring ways to do that, but we can't do it by default because it would break existing pipelines — kind of a long story. So just get in the habit, and tell all of your co-workers and friends to use -i, and you will be saved. Yeah, and we will also be saved from fielding the issues that go "oh, by the way, I did kubectl delete all --all and it literally broke my cluster" — which is kind of like doing rm -rf /, you remove everything. Yeah, get in the habit of doing it.
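Here's roughly what that looks like in practice — a hypothetical session, since the exact environment variable name and prompt wording may differ by version:

```sh
# Pre-1.30 you opt in via an environment variable (name as we recall it,
# so double-check your version's release notes):
$ export KUBECTL_INTERACTIVE_DELETE=true

# With -i, kubectl lists what it's about to delete and asks first.
# Prompt wording is approximate:
$ kubectl delete -i -f production.yaml
You are about to delete the following 1 resource(s):
deployment.apps/production-do-not-delete
Do you want to proceed? (y/n): n
deletion is cancelled
```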
Just tell everybody.

Do you want to talk about Kustomize? Go ahead. Kustomize shipped a bunch of awesome new stuff. We have a new lead for the Kustomize subproject — shout out to koba1t on GitHub, who stepped up as the new lead — plus some new maintainers and new subproject approvers; shout out to Varsha and Nick. There are big performance increases coming to Kustomize for really large manifests, we're improving UX and documentation, and there's planned Helm support — they plan to ship full support for Helm, which is awesome. So for any folks who depend on Kustomize and Helm, hopefully you'll be able to do a lot more fun stuff together. Get involved in the Kustomize group and show up to the bug scrub triage — we'll talk about that a little later. And if you're not on the latest version of Kustomize, do upgrade. So look out for all that.

Okay, so: ten years. We want to spend a little bit of time focusing on what ten years means, primarily for SIG CLI. A little bit of quick history: back in 2014, when Kubernetes was released, 0.2 around September was the first tagged version — at least that's the one I found — for those who weren't around back then. At that point in time, Kubernetes had only one API group. There was no apps, policy, RBAC, storage, or any of the various groups you're familiar with today. There was only one: v1beta1, and it literally had only the six resources you see on the slide. If you're not familiar with them: ReplicationController is like ReplicaSet — it was renamed with some slight changes. It still exists because it went GA back in 2015, so we will not remove it. But minion was changed — that happened right before 1.0. Does anyone know what minion is called now? Yes, exactly.

There was a lot of change — PRs were flying, merges happening like crazy. You literally had to rebase your PRs every 30 minutes. Before 1.0 there were 0.x releases every single month, more or less. You could also see that from the user point of view, because we quickly went from v1beta1 to v1beta2, all the way up to v1beta3, eventually settling on v1, which was released with the first version of Kubernetes. A lot of those changes also required us to write much of the code that is now called API machinery — all the conversions and changes that allowed us to modify the API surface.

Interesting changes: yes, minion was renamed to node — I think that was around 0.5 or 0.6. The interesting one from our perspective — and I'd be very happy to hear it — does anyone here remember trying kubecfg, before it was called kubectl? Okay. Surprisingly, if you Google for kubecfg, there is a thing called kubecfg these days — that's not the thing I'm talking about. That rename was around 0.5, towards the end of 2014, early 2015, when we renamed the tool and gave it its current shape. Among other interesting changes, basic authentication was introduced before 1.0.

And so we hit 1.0 around July 2015. Like I said, the API reached v1; for backwards compatibility we also shipped v1beta3, so people could rely on both. Other changes that were somewhat notable: already at that point in time we had a pluggable scheduler — although it did require you to recompile all of Kubernetes to include your plugins — and some basic admission plugins that you might be familiar with today. Post-1.0 we've seen a bunch of changes. Some of these might be familiar.
Maybe you worked with Kubernetes when these were introduced. Support for extensions — kubectl plugins and cli-runtime: plugin support was a big change in kubectl, and cli-runtime is kind of our abstraction, a runtime library for how we build the CLI. We added Kustomize to kubectl, which we'll be talking about again at some point — expect some announcements in the upcoming months. If you are using kubectl kustomize specifically, or Kustomize embedded within kubectl generally, please let me know; I'm happy to hear from you. There will be some changes coming in the upcoming months and years. Yep. And then kubectl debug — that dropped in, I think, 1.18. It was 1.18. It's been quite some time.

Another big one that landed: server-side apply, which we'll talk about a little more later. Do you want to say anything? I mean — we're deliberating about the kubectl apply we currently have and how to make it work with server-side apply. There are issues with client-side apply, primarily the fact that it has a limitation with regards to size. Eddie was asking me about it, and yes, I'm fully aware that we have a limit on the size of the annotation that client-side apply currently uses. So if you run into issues with client-side apply, please switch over to server-side apply. You shouldn't run into that limitation, for one thing, and very likely it will also be much more performant than client-side apply.

We're trying to figure out the best way to transition people from kubectl apply over to server-side apply. The problem is that people are used to using apply. We were thinking about maybe switching apply from client-side to server-side by default — currently you still have to use the --server-side flag. And there are discussions about whether we will introduce an entirely new command, and we cannot agree on the name. I support the idea; I don't like the name. If you're curious — we looked it up earlier today — we discussed it in August last year, and it was proposed to be called actuate. I'll go stronger: I hate the name. It does not mean what it's supposed to mean. I'm not a native English speaker, and it means nothing to me. Apply is straightforward. If you can come up with a better name, that would be awesome. We were going through synonyms with Eddie earlier today and had some interesting ideas — smear; kubectl smear, yeah. There was one I liked, I can't remember... delegate was one of them, if I remember correctly. So if you have some ideas — man; kubectl man was one. Yeah, I liked that one.

But yeah, if you're not familiar with server-side apply versus regular client-side apply: client-side apply stores basically your entire specification — your whole YAML — as a last-applied annotation on the resource, so it's actually doubling the storage size of the resource. Then it does a three-way diff between what's in the annotation, the current state, and the desired state
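To make that concrete, here's roughly how you can see the difference yourself — a sketch; the resource names are made up:

```sh
# Client-side apply stashes your whole manifest in an annotation:
$ kubectl apply -f deploy.yaml
$ kubectl get deploy my-app -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'

# Server-side apply skips the annotation; the server tracks field
# ownership in managedFields instead:
$ kubectl apply --server-side -f deploy.yaml
$ kubectl get deploy my-app -o yaml --show-managed-fields
```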
that you're applying. Yeah — and if people mix client-side apply with, I don't know, a create or edit command, or rollout undo, that messes up the whole thing completely. Server-side apply is a little better in that regard: since all the apply operations are carried out on the server, it doesn't matter whether you invoke create, edit, or something else — the history is preserved, and there's even information about which manager — as in which command, or who — invoked a particular change.

Among the other notable changes we've made over the past — well, roughly the past year, year and a half: OpenAPI v3. That's primarily a server-side change, and you probably won't notice much of a difference, but basically OpenAPI v3 allows us to have more descriptive information about the resources a cluster serves. The primary use case was CRDs — letting CRD authors better explain what a field requires — and then exposing that information in, for example, kubectl explain.

Aggregated discovery, which Eddie was talking about earlier: normally, when you invoke any kubectl command for the first time, it has to figure out what kinds of APIs your cluster is serving. To do that, it used to iterate over all the available APIs. It would first ask the /api endpoint, get information back, and traverse that tree; then it would do the same with the other endpoint, /apis — with an s at the end — which has apps, batch, RBAC, and all the additional groups. As you can imagine, that's a lot of traversing and a lot of requests up front. We noticed this was very problematic, both on the client side — especially if you have many CRDs — and on the server side, because the server has to respond to so many requests. So we came up with the idea of combining all that data: you make only a single request to the discovery endpoint and it hands you the whole picture of all the resources the cluster provides. That obviously speeds up the server-side processing, but also the client, because it's only a single request. Yes, the data it has to fetch is more or less the same size, but doing it in a single request significantly lowers the load on the cluster and the server, and each group is cached separately — when it has to refresh a single resource or a single group, it doesn't have to fetch the entire surface of your API. So if you use a lot of CRDs, with something like Crossplane, you should immediately see massive performance improvements — there's a quick sketch below of how to watch this in action. Those are the folks who really needed that work done. The other two we already covered, so we're not going to repeat ourselves.

Another notable change I want to highlight, around 1.11: we did a major refactoring of the code. The initial version of kubectl worked nicely, but each command, since it was written by a different author, looked different and was tested differently. We had a nasty CLI factory, which at the time was a reasonable solution but, over time, didn't age very well; we had to rewrite a lot of those bits from scratch. That was also when we introduced something called genericclioptions, which contains a lot of the common tooling we share between the various commands.
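Quick aside — here's the sketch of watching discovery behavior mentioned above. A hypothetical session; the exact request log format and cache layout vary by version:

```sh
# Bump verbosity to watch the discovery requests kubectl makes:
$ kubectl api-resources -v=6 2>&1 | grep 'GET https'
# Against a cluster with aggregated discovery, you should see a handful
# of requests to /api and /apis instead of one per API group.

# Discovery responses are cached locally between invocations:
$ ls ~/.kube/cache/discovery/
```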
If you're a plugin author for kubectl, you're probably familiar with it — genericclioptions is one of the packages of the cli-runtime library that Eddie mentioned. One of the ideas accompanying that refactoring was to shape every kubectl command around the pattern I've talked about a few times: if you've seen kubectl walkthroughs, or if you've ever looked under the hood of any kubectl command, it always has the Complete, Validate, and Run methods. That was one of the efforts that happened back then. First of all, it allows better composability of the commands — we can actually embed commands in other commands. One example is the wait command, which is embedded in other commands; delete is one of them, I can't remember the others. Additionally, it enables and simplifies testing. The previous testing was a nightmare — the CLI factory, if you remember what I said a minute ago. You had to instantiate this entire testing factory; it's a mess. There are still some old pieces that use it, and I would like to eventually get rid of them, but that requires a bit more investment.

A couple of other subprojects we adopted post-1.0: Kustomize, which we mentioned, and Krew and the Krew index. Anyone using Krew? That's the plugin manager for kubectl. I think the index has crossed 200, almost 300 plugins. It's unbelievable how the Krew index shot up after Ahmet initially wrote it. I remember talking with Ahmet — we were both presenting kubectl plugins when I initially introduced the mechanism around 1.12, 1.13, and we had a joint presentation back in Seattle. That was when Ahmet started actually working on Krew, and then the Krew index slowly, over time, just skyrocketed. So we're thankful to all of you for submitting PRs and adding your plugins to Krew and the Krew index.

Yeah, Kui was another project we adopted as a subproject — this one donated by the IBM folks. It's kind of an approach to take kubectl and turn it into a GUI. You should check it out if you haven't seen it yet; it does some parsing of the tables and makes things interactive.

KRM functions and kubectl validate. Do you want to say anything about those? KRM functions kind of stalled. kubectl validate is the newest addition — I'll admit I forgot about its existence; I literally had to look up what we've done this past year. kubectl validate ties back to the OpenAPI v3 work I was talking about a minute ago. OpenAPI v3, aside from having very detailed information about resources, also embeds information about the validation logic. Previously — if you remember, kubectl has a --validate flag — you got some feedback like "your resource is misformatted" or "something is wrong". kubectl validate actually reads that information from OpenAPI v3, which allows much more thorough expression of what the shape of an object should look like. That gives a lot more flexibility to CRD authors, and then with kubectl validate — which is another plugin for kubectl — you can consume that information and use it, for example, in your CI/CD pipeline to validate the resources you're committing, as a pre-submit step.

Some things we had to deprecate. Convert — we pulled that out into its own plugin; it converts between resource versions.
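For reference, both of those ship as standalone plugins today — a sketch of typical usage (paths made up; check each plugin's README for the exact invocation):

```sh
# kubectl-validate: check manifests against published schemas,
# e.g. as a CI pre-submit step:
$ kubectl validate ./manifests/

# kubectl-convert: rewrite a manifest to a different API version:
$ kubectl convert -f old-deployment.yaml --output-version apps/v1
```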
The convert plugin gets released separately now. We also got rid of the run generators. These were artisanal, handcrafted little generators to produce a specific resource. Who remembers kubectl run back when, if you looked at its man page, it had two pages of flags? Yeah, a couple of folks. I was behind that. Initially the story was: we want something similar to Docker. Docker had docker run and whatnot, so we figured, oh, we'll do something similar in kubectl. So we started adding to kubectl run, and as people added resources — deployments, jobs; I added jobs, and I added the generators too, sorry — over time it just grew into this monstrous command that was a pain to work with. At some point, depending on the flags you used and the version of the cluster you had, it would produce various different resources — it was an art to figure out what you were going to get. So we said, yeah, that's not going to happen anymore, and we slowly started deprecating all that stuff.

I know a lot of people complained, and I apologize to the many of you who struggled, because I know a lot of people were building on top of kubectl run — for the CKA exams and so forth. I apologize for that, but this way we succeeded in making the code much simpler and much more composable, and there are other commands that are much better designed to provide those basic creation functions — there's a whole set of create subcommands.

I don't even know what export did — do you? Yeah, of course I know; I ripped it out myself. At some point kubectl get had a --export option, which basically allowed you to export your resources. If you don't know what export meant: it would give you the same resource — kind of like -o yaml; I think YAML or JSON was the default — but with some of the fields stripped off. The goal was to let you export resources, for reuse or something. A lot of people started using it as either a backup tool or a templating mechanism, and we started getting requests: "oh, but I want to remove the metadata fields", "I want to remove the status fields", and a different request would be "I want to remove these other fields", "I want yet another set of fields removed". We were getting more and more requests, each completely different from the other, and at some point we said: we don't want to play that game, because each and every person has different use cases. We'd prefer you just do get with YAML output and then use an external process to cut out whatever you don't care about. That's the best approach for you and for us. So we started the deprecation process for the flag. In parallel — because the export flag was actually invoking separate functionality in the API server — we also had to remove the server-side export functionality. That was a relief to both the CLI maintainers and the API maintainers.
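The replacement workflow we point people at — a sketch, assuming yq v4 for the field stripping (any YAML/JSON processor works; the fields to drop are up to you):

```sh
# Dump the live object, then strip whatever you don't care about yourself:
$ kubectl get deployment my-app -o yaml \
    | yq 'del(.metadata.managedFields, .metadata.resourceVersion,
              .metadata.uid, .status)'
```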
Then there's the staging work. Originally, Kubernetes was all just one big giant repo — one folder — and we had to slowly start moving projects out to their own repositories. So if you ever come to make a contribution or a pull request to kubectl, you'll notice that we don't actually take PRs in the kubernetes/kubectl repo. It's what we call a staging repo, and the code gets copied over and synced to it. But do open issues in the kubectl repo — or we'll move them there; it'll just be easier if you open them in the kubectl repo in the first place. That was a big effort that took a lot of work from a lot of people, over a long time — and part of why it took so long was that we had to deprecate convert. If you're curious why, come ask questions after the talk; I'll be happy to talk about the internals of convert.

Do you want to talk about any of these? I mean, it's some trivia stuff — like when KEPs, Kubernetes Enhancement Proposals, were introduced. Out of the interesting stuff: if you're not familiar with why kubectl cp is so limited in functionality these days — it wasn't always like that. It was pretty powerful in terms of how much data you could copy with it, but it all ended with a bunch of CVEs that we had to scramble to fix. At some point — I found this out again yesterday — I had a proposal, and I think we even merged it, to drop kubectl cp from kubectl entirely. Oh, I remember this. Yeah, we were like, okay, we don't want to deal with more CVEs, so we'll just drop the entire command and it will be simpler. We eventually figured out that the best option — and that's what we currently ship — is to allow something very limited: you can only copy a single file, and we made tar take all the blame for anything that goes wrong. So, love you, tar. Yeah — there's a problem with that, which Eddie will talk about in a minute.

So, some future things we're looking at and want to work on. We have a KEP — we've talked about it for a while, and it's making some progress — to separate your cluster configuration from your client config. You should be able to set preferences for kubectl, your client, locally, not tied to your cluster configuration. Like defaulting interactive delete — yes, exactly: this is how you'd default kubectl delete -i to always be opted in. We're very sensitive to breaking things, because there are so many pipelines that kind of auto-update and grab the latest version of kubectl that any breaking change we make would probably break everything. So that's where that's coming from. Look into that if you're interested.

We also want to roll out support for multiple kubeconfigs. It will make it easier for people working with multiple clusters and aliases to adopt. We want to do it, but I'll be honest, we have no idea how to do it yet, and we're very aware that this is a problem. If you have some thoughts, and a little spare time to figure out what we could do in this area — because I know everyone's like, yeah, I want to have ten kubeconfigs and have it just work for me. If I had a solution... we have a lot of thoughts about various things, but I don't have an idea yet that would be smooth and usable. So if you have some thoughts, either let us know after this presentation, come hang out with us, or come to our meetings. Yeah, exactly.
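For context on what exists today: kubectl already merges multiple files listed in KUBECONFIG — a quick sketch (file names invented; the list is colon-separated on Linux/macOS, semicolon-separated on Windows):

```sh
# kubectl merges a colon-separated list of kubeconfig files:
$ export KUBECONFIG=~/.kube/prod.yaml:~/.kube/staging.yaml
$ kubectl config get-contexts

# Or flatten them into a single file:
$ kubectl config view --flatten > ~/.kube/merged.yaml
```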
I mean, there are multiple options, and yes, we want to hear from you about what we can do to make it better. It doesn't mean you have to write it — if you don't know how, that's totally fine. Even showing up and proposing what it could look like is a valid approach.

We mentioned earlier that we want to figure out a new version of apply that's server-side by default — kubectl demand, something like that; definitely not actuate, or whatever it was.

JSONPath is super confusing for a lot of folks, because the JSONPath you probably know has a bunch of utilities and helper libraries built in that deviate from the actual JSONPath RFC. Things like length and all these other little functions you're used to aren't actually part of JSONPath. The JSONPath we have built into kubectl is strictly RFC-bound, so it's missing all that functionality. We want to find ways to give people the stuff they're looking for — we've explored the idea of adding CEL support to kubectl and other parts of the Kubernetes project. So give us your thoughts on that.

We mentioned we want to do a new implementation of kubectl cp. We actually want to move the copy API down into the container runtime, so it becomes an API call to the kubelet: hey, give me this file out of this container, or put this one into it. By building that as a first-party API, we're hopefully going to get rid of all the CVEs we have around our own copy. Yeah — currently, under the covers, we're basically doing a tar and piping the input and output on both ends: your shell on one side, whatever the container has on the other, and we're just streaming the tar contents and extracting them back. So one of the requirements is that you have to have a tar of some sort working in the container to be able to use cp. On the client side — the kubectl side — we have tar embedded in the binary, so that's not a problem there, but it is a hard requirement for the container for kubectl cp to work.

Yep, and the last thing: we say no to a lot of things, but what we say no to most is probably adding new flags. If you haven't run kubectl get --help or apply --help recently, you'll see lots and lots of help output. Everything we add just adds so much burden — to learn, document, keep track of, fix, patch. So we're really trying to add as few flags as possible, especially short flags — one-letter flags we are definitely trying not to add at all — because if you remember what I said about kubectl run before, those two pages of man page, that is not acceptable, and we don't want to repeat that mistake. Also, adding a flag at this point seems like, oh yeah, it's simple — but the fact that it took me two years to get rid of all those flags from kubectl run, and the amount of different things I've heard and seen about that, is a separate topic. So be mindful that every time we're asked to add a thing — "oh, but this is just another flag" — yes, it is another flag, but it also means increased burden on us, the maintainers of that particular command. Yeah, so we say no to that a lot, and that's why. Nothing personal.

If any of this stuff interests you and you want to get involved, come hang out with us. We have meetings every Wednesday at 9 a.m. Pacific time, which is 6 p.m.
Central European time. I literally put that up front just to make sure people from around here can see whether it works for them. Every other week we have a SIG meeting where we talk through an agenda, talk through stuff, and on the off weeks we alternate between a kubectl bug scrub and a Kustomize bug scrub. Those are great for you to come in and learn how we look at an issue, talk through the history, and really triage it together. It's a great way to onboard into the project. So come ask questions if you have them, or walk through your particular issue — the meeting is a perfect place to talk about, for example, how we could approach the topic of multiple kubeconfig files.

Yeah — also, if you could scan the QR code and rate the session, it helps us keep doing these as maintainers, so thank you for that. And that's what we have. We've got five minutes for questions, and we'll hang out afterwards. Oh, we're the last session — we can probably run a little longer than five minutes, unless they kick us out... no, they're already saying they'll kick us out. All right, so who's got questions? We've got a mic here.

Hey, hello. First of all, about convert — could you describe why it was deprecated? Okay, sure. So kubectl convert has to be able to convert from any particular API version. Let's imagine you only want to convert a particular resource from batch/v1beta1 to batch/v1. What that requires: the API machinery within core Kubernetes has an internal version, and the internal version has all the fields from both batch/v1beta1 and batch/v1. Conversion has to traverse through the internal resource and then back out to the target version of that resource. The way kubectl is currently written, it does not rely on those internal APIs, because they are part of core Kubernetes — when we were moving to the staging repositories, we needed to rely only on the published libraries, so for example the API types, client-go and all that, and those libraries strictly rely on the official published APIs, not the internal ones. So convert still exists, but since it relies on the internal APIs, it lives in the core Kubernetes code, where it has access to both the external APIs and the internals to be able to do that conversion path.

Okay, so it's deprecated but still usable? It was deprecated out of the client directly — so you don't have it by default as a kubectl subcommand, but it is available as a separate plugin. Got it, so it's really just the placement. Right: it's not part of the kubectl repository, it's part of the k/k — kubernetes/kubernetes — main repository. That was the reason behind the deprecation. Thank you.

Hi, you talked about the fact that in kubectl you can run commands from within other commands — one command can embed another. Since that works with wait, is there a chance we could have something longer-running — could we see at some point kubectl kustomize able to deploy, for example, a CRD and an object that depends on that CRD, even if it requires an explicit dependency between two kustomizations? I don't expect us to do anything like that in the near future. I would be inclined to expand the wait functionality that we currently have. I know there are a lot of people asking to be able to wait for this or that, and I know there is a bug open — I think there is a bug open
I think there is a bug open Where people complain that about the fact that if you don't have a resource defined yet So you're cut you haven't submitted your CRD kube-cable weight will fail or When your pot is not Created and you invoke a weight on the creation it will fail That is definitely one of the things that I would like to see being fixed and which should be Able to handle those situation although we probably want to balance that Balance between it's not created and oh you made a typo Which happens from time to time so that's one thing But I don't expect to have like so sophisticated workflow. Like you just said separate commands. Yes We still want to maintain Rather short-lived command short-lived to some degree And we're trying to follow the the UNIX principles do one thing, but do it well Okay, thank you. Well with that. We're out of time We'll be happy to take any extra questions up here and hang out afterwards. So thank you all for coming