Okay, welcome everyone. My name is Maciej. I'm joined here by Eddie and Marley, and we will quickly go through a whirlwind tour of what SIG CLI has been up to for the past couple of months. Our intention is to answer as many questions as you might have, so if you're already thinking through a question, that would be the best use of the time. We'll try to go through the slides rather quickly to be able to answer your questions.

SIG CLI is currently led by four people. Eddie and myself are here; unfortunately, Natasha and Katrina could not be with us today. What SIG CLI basically does is: we are responsible for the development and standardization of the, broadly speaking, CLI framework — so kubectl and all the libraries and tools that are built to help you build your own CLI tools. Not only kubectl, but literally everything else, including plugins for kubectl. The subprojects that we own, as I mentioned, aside from kubectl, are Kustomize and Krew. There's also Kui, which is a graphical user interface for kubectl. There are libraries like cli-runtime and cli-experimental, which help you (and also us) build CLIs that are compatible with kubectl, and a couple of additional subprojects.

You can find us on the Kubernetes Slack under #sig-cli. We also have a mailing list. Both of these are rather low volume, so feel free to jump in, follow along, and ask questions or propose topics if you're a person who prefers indirect communication. We also hold biweekly meetings on Wednesdays. On the other Wednesdays, we alternately hold either the Kustomize or the kubectl bug scrub, which is also an amazing place to join — and if you're interested in working on and picking up particular issues that we're going through, it's also a nice place to join us.

Okay, so we had a couple of releases since the last time. Release 1.27 had some enhancements.
We added the default container annotation that is used by kubectl, as well as subresource support for kubectl. Aggregated discovery helped speed things up: when we poll an API server that has a ton of resources on it, it takes forever. OpenAPI v3 support also came to kubectl explain, apply's pruning got redesigned, and we also got kubectl create subcommand plugins.

Some of the highlights from 1.27: debug — we added a couple of new profiles for it. Didn't we have sysadmin also? That one's for the next release? Okay — general, baseline, and restricted, with some more coming later as well. Then we upgraded the packaged version of Kustomize that comes with kubectl to 5.0.1, and promoted the alpha — whoami, over to you. Auth — yeah, so auth whoami. Finally, we had kubectl diff's behavior corrected to actually use selectors properly, and that was backported to other versions so that they could make use of it as well.

1.28 had a couple of things come in: we had kubectl events, and then we had the interactive delete flag. So if you've ever accidentally deleted everything in your cluster before, you might want to start using the interactive delete flag — it will ask you, "do you want to do this for real or not?" Brian added some cool stuff to port-forward: before, we'd just pick whatever pod, which could sometimes result in a terminating pod being picked, so now we look for one that's actually running instead of a terminated one. diff added the --concurrency flag, and we also made wait a little easier to use by making the value at the end of the JSONPath expression optional.

And then 1.29 — sorry, we promoted the interactive delete flag to beta in 1.29, as well as the create subcommand plugins. There have been some improvements to the help messages, which has been really great, and we added support for simple filters for kubectl wait to match field contents.

Yeah, just a couple of future things that we're thinking through, in progress right now.
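Before the future items — a rough sketch of how a few of the release features above surface on the command line. The resource names and images here are made up for illustration, and flag spellings should be checked against `--help` on your kubectl version:

```shell
# Read a subresource directly (1.27+):
kubectl get deployment web --subresource=status -o yaml

# Debug with one of the new profiles (general, baseline, restricted):
kubectl debug -it pod/web --image=busybox --profile=general

# Prompt for confirmation before deleting (beta in 1.29):
kubectl delete deployment web --interactive

# Wait on a JSONPath expression; since 1.29 the trailing =value may be
# omitted, meaning "wait until the field exists":
kubectl wait pod/web --for=jsonpath='{.status.phase}'=Running
```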
We're working on transitioning from SPDY to WebSockets. Anyone know what SPDY is? Yeah — it was a transitional protocol between HTTP/1 and HTTP/2. SPDY has long been deprecated, but the Kubernetes API server completely depends on it. So we're pulling that out and hopefully transitioning over to a WebSocket approach. This is a shared effort with the API Machinery folks.

Another thing we have in progress is separating our cluster configuration from user preferences. You may have heard us talk about this for the past two years — I'm sorry. Marley is actually working on it now, so hopefully it gets done. The idea is that when your cloud provider ships you your kubeconfig, you don't want to have to blow away or merge the one you have; you want your preferences to be kept separate. That's the goal there: you could do things like opt into delete confirmation by default, so you don't have to remember the flag. That's the thought behind it.

Some things we're exploring — and we'd be happy to talk more about this during the Q&A part. We have gone back and forth on the server-side apply thing for a while, and we arrived at the conclusion that we can't switch to server-side apply by default without breaking people. It changes behavior — but it's the behavior we want and intend to happen. So we may need a new command so we don't break everyone. The Kubernetes project is just terrified of breaking your pipeline that's been running for 40 years using kubectl, where you keep updating it without reading the release notes.
So that's something we're thinking about. We are looking at JSONPath improvements or alternatives. The JSONPath we use in kubectl is not a real JSONPath implementation — it's a library that handles most JSONPath syntax, but the functions and utilities like length() that you're used to from other libraries don't exist. So a lot of people come in and find they're kind of limited there. I think CEL — the Common Expression Language from Google — is becoming very popular; you might see CEL popping up in a lot more places. It's much more fluid and you can do a bit more with it.

And the other thing is: please, no more flags. If you've ever seen the kubectl help page, it's hundreds and hundreds of lines long. We say no to a lot of things, and we don't like to say no, but we say no because we have a reason as maintainers. Adding flags is a pain point we've had for years, because that help text keeps growing, and flags are permanent — revoking a flag that exists breaks people. So we're trying to figure out ways to add features that people want without having to add another flag for each one.
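To make that JSONPath limitation concrete — the fields below are standard pod fields, but the commands assume a live cluster:

```shell
# Works in kubectl's JSONPath: wildcards and filter expressions
kubectl get pods -o jsonpath='{.items[*].metadata.name}'
kubectl get pods -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}'

# Does not work: function extensions such as length() found in other
# JSONPath libraries are not part of kubectl's implementation
#   kubectl get pods -o jsonpath='{.items.length()}'   # error
```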
So those are the things we're thinking through. A couple of major themes from the subprojects that are also under our umbrella. From Kustomize: we did mention that we pulled Kustomize v5 into kubectl, but that was also a big piece of work at the beginning of this year. There are multiple enhancements that have happened over the past year, listed on the slide, but the most important things we want to shout out are the new maintainers, Anna and Yuko. Both of them joined the maintainer training cohort that Natasha is leading in the spring, and over the past couple of months they grew through the ranks, stepped up, and were recently promoted to maintainers on the Kustomize project. Which is an awesome thing, because we have more people who are interested and who are actually helping us push Kustomize forward.

A couple of highlights from Krew. Krew is basically our plugin manager. There's a bunch of new kubectl plugins; the graph nicely shows how the numbers are growing with every single day, basically.

We've been giving this PSA for years, and you're going to hear it again — I'm sorry. There's a big difference between declarative and imperative workflows when you're using Kubernetes. The idea is that you're not supposed to mix these; you're not supposed to mix your imperative with your declarative. We want you all to stay in the declarative world. You may be asking what these are. Imperative commands that you may be familiar with — anyone use any on this list daily? Don't be shy.
I know there's a couple of you. We want to hear what these commands are, why you need them, and why you can't use things like apply and other tooling. So ask us questions; talk to us about that. But we're trying to move away from these commands — they kind of just break the whole world. The only command you really should be using is apply. Whether you're doing this with GitOps or other tooling, you should hopefully be using some kind of GitOps pipeline to apply your manifests.

But yeah, rollout undo kind of screws everything up, and the reason is that there's an annotation that gets added to your manifest, called the last-applied annotation. When kubectl apply does a diff and figures out how to transition from one state to the next, if you make a change outside of that last-applied annotation, it's not tracked — so apply doesn't actually know what state it's transitioning from. It kind of screws everything up, and we tried to handle fixing it for years, and now we're just saying: stop doing it. So use apply, please.

I don't know if anyone saw the keynote this morning — not to throw anyone under the bus, but they violated this. That's why we added this slide back in and keep talking about it. So please use apply; don't use create. There are lots of docs on this. TL;DR: please use apply.

And that's all we have prepared. We'd love to have a conversation, discuss, and answer questions. Make sure you fill out the session feedback and give us good reviews, so they let us keep doing this as maintainers. And yeah, who would like to start with questions or discussion? We've got a mic — right there in the middle. So please don't be shy. Tell us what's painful to you.
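For reference, the annotation discussed above looks roughly like this on a live object — a trimmed sketch, not a complete manifest:

```yaml
# Stored by client-side `kubectl apply`; the next apply diffs against
# this copy. Imperative commands (rollout undo, edit, scale) change the
# live spec without updating it, so apply then patches from a stale view.
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"name":"web"},"spec":{"replicas":2}}
```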
Tell us what you want to change or fix. This is very useful for us as maintainers, because we don't get to interact with the broader community very often. So I'm going to start calling on people if someone doesn't volunteer. Come on. Yes, we've got a mic here, if you could — thank you.

Hi. I'm a huge fan of both Kustomize and kpt, another project — hey, someone knows what I'm talking about; nobody ever does. Interestingly, both of those projects use KRM functions, which I'm actually a huge fan of, but I did notice there doesn't seem to be too much activity around there. I'm just wondering if we could get a status update on whether those are going to be prioritized or deprecated.

Yeah — I was talking with Natasha. Natasha is currently the primary and sole maintainer behind Kustomize. There have been discussions around KRM and Kustomize in general. The primary focus within Kustomize recently was on that training cohort, because Natasha was facing a situation where she was the only one, and wanted to focus primarily on getting more contributors. She put together a roadmap where that is the highest priority for the next six months, more or less. For after we're done with that, she put together a couple of additional topics — I think I remember there was a mention of KRM. I would probably just say: check out the roadmap that was recently added to the Kustomize repository. Go to kubernetes-sigs/kustomize — there's a roadmap file that was updated roughly a week ago. She was talking about it during last Wednesday's call, and I think I remember seeing a KRM function item.
So that's probably it — or, additionally, ask on the Kustomize Slack channel about the future of KRM. Especially if you're interested in seeing this move in one direction or the other, giving that kind of feedback to Natasha and the folks there will definitely be very helpful, because it will help her make the final decision: being able to say how many people actually use it, whether it's something we want to slowly phase out, or whether we should actually invest because (a) there is growing interest and (b) there are people who are also capable of supporting her with the work that KRM requires.

Yeah, there was a lot of traction at one point. I think it was a collaboration between Google, Apple, and there was a third one — do you remember who? There was a bunch of interest, and then everything got reprioritized. The wave of tech layoffs came, and the person who was leading the project got cut. So yeah, it's one of those things where, if we have people interested in pushing it forward, we'd love to have that. Also: tell your companies to hire maintainers. Thank you.

Hi, I work with a team that is constantly getting new people onboarded into Kubernetes. I know kubectl has a few instances where, say, your kubeconfig is world-readable, and it'll warn you: hey, this isn't safe. What are your thoughts on
using a similar approach for imperative versus declarative application? Just so that when I have new people come on board, they can understand: hey, you can do it this way, but it's not necessarily best practice.

That's a great question. We've messed around with warnings a lot, in different forms, and what we've realized is that people get really annoyed when you keep warning them about things. So we've tried approaches where we maybe warn you once and then set a flag somewhere in a cache file that says "don't warn them again." We would love to warn all the time. Would your engineers be annoyed if a little warning popped up every time they used an imperative command?

I'm okay with them being annoyed.

There's also the topic we mentioned — the kubectl preferences. That's one of the solutions we're still waiting on. There are a lot of things we would like to add, but the fact that we cannot turn them on and off by default, or allow users a little more flexibility in how they interact with kubectl, stops us from picking up those additional things until we have those basic building blocks. With that in place, it will actually open up multiple different avenues. There was a PR that Marley brought to my attention earlier today where someone was introducing colors, yet again, and I was like: no, we're not going to do colors until we have preferences. Especially since the proposed implementation was very manual — going through all the files, adding colors everywhere.
That's not the correct approach; that's not what we want to maintain in the long run — especially since this wasn't the first time it was proposed. We would prefer some global solution that would be more maintainable and, most importantly, easy to turn on and off depending on what users want.

There's also a different topic which we've been toying with a couple of times, but we need people who are interested in helping with it: some kind of documentation or tutorial for users, especially newcomers to the system, to help them grasp the ideas behind Kubernetes. A lot of the time you're faced with: okay, kubectl run, and it will create a pod — that's fine, but I want some more sophisticated tooling. How do you quickly get from "I know how to run a pod" to "I know how to get around," given that there are Deployments and all those additional things? We do miss, and we're fully aware that we miss, some kind of walkthrough — a teaching guide, something that would either be a plugin or that we'd even be willing to put into kubectl itself. Maybe some kind of wizard that would step you through:
"this is how you create your first pod, this is how you create your first Deployment," and so forth. But then again, we would have to be able to configure this, and we're back to the point where we need that preferences file, to be able to figure out what your level is and turn it on or off. The moment we introduced something like that, we'd get thousands of PRs and complaints from folks saying turn it off, turn it off, turn it off. Yeah — how many CI systems just run kubectl and scrape the command-line output, and upgrade the version of kubectl without looking at the release notes?

One of the interesting things I just pulled up is the Go type for what the kubeconfig actually is. This has no versioning — it is a v1, and it's kind of been solidified forever. And it's not actually a proper type inside of Kubernetes: you can't explain it, and you can't find any API documentation on it. It does have a preferences field, and the only preference that exists in it is colors. It has existed, but because this type isn't a real, versioned type, we can't just add preferences to it — that also breaks backwards compatibility with everyone else. It's been a hot mess. So yeah, hopefully the kuberc type work that we've been doing lets us do that. But I'm with you —
I'd love to just piss everyone off until they opt out of warnings. But thank you.

Hi, my name is Rohan, and I was wondering if there are currently any efforts to give kubectl cp more functionality, like what rsync has. We have a use case where we're continually copying code over to a pod, and the way we do it right now — you can't overwrite, so we have to delete the code on the pod and then kubectl cp the code over. The other way to do it would be to actually use rsync by setting up an SSH jump pod, which we don't necessarily want to do. So I was wondering, for kubectl cp, if there are going to be any efforts to bring it closer to rsync.

Yeah, I've got this one — that's a great question. There's so much history with kubectl cp. There have been, what, three CVEs? There have been a couple of CVEs behind it, and that's just the reality of how Linux file systems work. It used to be a lot more feature-rich, and we pulled things out with every CVE, so now it is as dumb and simple as possible. We have been talking about ways to do this — we've actually been talking with SIG Node and the CRI team. We'd love to move all this functionality into the container runtime: copying and moving files should be a proper part of the Container Runtime Interface API. Then you wouldn't have to have a tar binary in your container to use this in the first place.

Do you want to add anything to that? Yeah, basically — we need people to work on that first. I was the person who was pulled into all of those CVEs, and we made a very tough choice.
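Under the hood, cp streams a tar archive over the exec channel — hence the tar-binary requirement just mentioned. A minimal sketch of driving that stream by hand; `mypod` and the paths are made-up names, the container image must include tar, and the final lines demonstrate the tar-to-tar pipe locally:

```shell
# Out of a pod, without kubectl cp:
#   kubectl exec mypod -- tar cf - -C /data . | tar xf - -C ./data-copy
# Into a pod:
#   tar cf - -C ./local-dir . | kubectl exec -i mypod -- tar xf - -C /data

# The tar-to-tar stream itself is plain Unix; the same pipe, locally:
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/file.txt"
tar cf - -C "$src" . | tar xf - -C "$dst"
cat "$dst/file.txt"   # prints "hello"
```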
There was a point in time, when we were solving all those CVEs — they literally piled one on another over a span of six months, maybe a little longer — where I spent multiple nights trying to figure it out. That's how we slowly removed several additional options from cp. There was a point shortly afterwards when we were actually considering dropping cp entirely, because we were afraid of additional problems. That's when we figured out that it's better to still have a simple version of cp, but allow people to bypass it using kubectl exec: what you can do these days is use kubectl exec and pipe tar on both ends of the stream, and transfer multiple files that way.

A little plug from the OpenShift side of things: OpenShift has its own oc client, which is built on top of kubectl, and there's an rsync command in oc which basically opens the stream for you and does an rsync. We might eventually, at some point, consider bringing something like that into kubectl, but up until now there hasn't been any particular request. Also, if I remember correctly, the rsync functionality relies on the rsync binary being in the container, which is yet another requirement we would have to impose. We decided to do it on the OpenShift side of things; I wasn't convinced that we wanted it in kubectl yet.

Okay — so given the current nature of how kubectl cp works, would you agree that using rsync with an SSH jump pod would be the best way to go?

Probably so, yeah. At least that's one of the options. Thank you.

Hello, I have a question about multi-cluster. Do you have any plan to support multi-cluster through kubectl?
For instance, let's say we're using multi-cluster to deploy an app spread across the world — say, one region in the EU and one in the US on GCP — and we want to look it up no matter where it is. So I want to know: do you have any plan, or, if you don't, do you have a recommended or prepared way to look that up or check it with kubectl?

So there are two different things we have to look at when considering multi-cluster support in kubectl. The support for multiple clusters in kubectl these days is not great — I'm fully aware — because you basically end up jumping between various contexts. I've been talking with our internal SRE teams, who challenged me on how we could try to improve the situation when dealing with regular commands in a multi-cluster environment. So this is a real problem that we're noticing, especially as people are working with more and more clusters on a daily basis. The other side of it is the actual multi-cluster work coming from SIG Multicluster, and I haven't seen anyone within SIG Multicluster coming up with particular needs for additional commands to support their use cases. So there are two ends we'd have to look at. If there's particular functionality you'd want to see around multi-cluster, that would probably be a question for SIG Multicluster — I don't recall them coming to us wanting to extend kubectl with additions. And then there's the question of how we can make kubectl natively better when talking to multiple clusters. The preferences file is one of the possibilities, but I wouldn't want to restrict us to solving that with just preferences; that's actually something I would like to do separately.

Yeah, I was talking with somebody about that earlier today — using the kuberc to manage it — but that's putting a lot in one place,
I feel like.

Yeah, thank you. Yeah, I mean, for instance, our developers want to see how many pods there are across the cluster, or between multiple clusters. So we made a simple dashboard for it — fetching all of the data from each cluster and showing it through a web GUI or a CLI. That's why I was curious whether you have anything in this space. But yeah, I think it makes sense to ask SIG Multicluster, because they will probably deal with this topic pretty soon — or I hope so — so I will talk with them. Thank you.

Yeah, and in that realm, that's definitely the best place to talk about having some kind of high-level commands where, with a single command, you get a high-level overview of all the clusters in a given range.

Yeah, thank you. By the way, is kubectl "kube-cuddle" or "kube-c-t-l"? Because most of the time I call it "kube-c-t-l," but what is it actually?

Kubectl. Yeah — that's our official answer: yes, kubectl. It depends — I personally use both interchangeably, and if we want to, I don't know, mess with someone, we'll play with the names a little more. But if you follow the logo, it's "kube-cuddle" — the logo is a cuttlefish. I don't know if I'll pull it up for people. Okay, good — I'll convince my colleagues to call it that. Thank you.

Yeah — and the use cases you described, those are the things we need to hear: how you want to use it, what's missing. File an issue, a feature request, or send an email to the mailing list. I don't think any of us really works with multi-cluster that often either, so we have no idea what we would need for a good experience there. I barely work with one cluster. There you go — that's the logo.

So, you asked earlier about why people might still use kubectl create instead of kubectl apply, and the one thing that some folks might say — and I totally agree — is templating YAML to then apply to the cluster.
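The workflow being described — generating YAML imperatively, then applying it declaratively — is usually approximated today with create's client-side dry run. A sketch, where `web` and the image are placeholder names:

```shell
# Generate working, current-spec YAML without touching the cluster,
# then hand-edit and apply it declaratively:
kubectl create deployment web --image=nginx:1.25 \
  --dry-run=client -o yaml > web.yaml
kubectl apply -f web.yaml
```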
It is by far the best way to get working, up-to-date, current-spec YAML. It's better than going to the API spec — because, oh my god — and then the docs, because sometimes they're great and sometimes they're not, depending on the API you're working with. So if you don't want to have create anymore, I'm cool with that, but then we need something like kubectl template, because it's a huge thing for learners, for new people, and even for experienced people. Hell, they have you do it on the CKA exam.

It's actually a topic that has come up multiple times. explain is one thing, but it doesn't solve the whole problem, because it only provides descriptions. I would like to see a combination of the output coming from kubectl explain with some kind of template mechanism — something that would give me the simplest version of the resource that I can start with, with the explanations. Then, once I've started, I at least know. But that goes back to one of the previous questions, where we were talking about wizards and how we can help people get started.

There was actually an interesting tweet earlier today from one of the Kubernetes core maintainers, where he was alluding to us solving Kubernetes UIs the wrong way, because we focus a lot on expressing — on forcing users to know what they want. Whereas, for example, if you look at a regular file manager, it gives you every file. It doesn't tell you to choose "I want to see just images or just
Oh, I want to see just images or just Videos it will give you everything by default and only afterwards it allows you to to filter by type I Was thinking how we could do something similar in kubectl, for example, maybe not necessarily because I'm fully aware that this kind of Command might be heavy on the API server if we start Trying to pull everything But having some kind of Helper or a starter page that such that you are Guided at least initially because once you figure it out a lot of folks after a Couple of first hours, they will know that. Oh, yeah I want to see this and that's all and they will be reaching only for the specifics But the initial steps when you're learning when you're a newcomer Maybe even for exam or just want to look around what's in the cluster You would love I would personally love to see like a tree representation Which will show me that oh there's a deployment and actually that deployment actually rolls down to this particular Replica set which rolls down to those pots Underneath in a some kind of a tree structure by the service. Yeah, I I wonder if there was I need to look it up I I did not get a time. I think there is a kubectl tree plug-in I don't recall how it's working and maybe something like that and having something like that by default Would be an option one of the possibilities that we would apply. I'm I'm going through a couple of different options At least this week so far trying to think about this But yeah, this this is a real problem because for newcomers It's hard. I'm I'm fully aware and With every single release It's actually not getting easier, but it is getting harder because there's more and more stuff that are being added we've experimented with a kubectl generate command and plug-in and Like we can do stuff like hook into the open API and add a Open API has an examples field built into it And so then we could go to the core types or CRD authors could add example fields the problem with that. 
It's it's a very that's a Very big task. It's a very big project. We're gonna cross lots of different parts of the project And we'd I'd love to see it for sure So let's we gonna need people to sponsor that work or come to do it. Thank you great great idea Hi there, my name is Brandon plus one on the kubectl template idea or generate that that would be awesome Yeah, main thing is sad to hear that it sounds like there's one maintainer on customize that's Explains a lot about the project. I'll have to check the roadmap and see how that's going and how I might be able to help But on that and the KRM functions versus the built-ins My whole get-ups Workflow relies very heavily on the helm generator Um, so I would like some kind of decision one where the other about what is happening with the helm generator plug-in Because the limbo that it's in right now is giving me a lot of stress Mainly that it doesn't support private registries So we have an artifact tree that we push all of our private helm charts to I'd like to use that as a source For the helm generator in order to work around that I have to write a bunch of janky scripts to authenticate and pull them to a known location and Do some things to work around it So anything that would actually be in the tool would be great And of course, you know, I'd be happy to collaborate on it once I know what direction the project's going in and where to Contribute so yeah that I don't know that's a question or statement or what but do do with it We will we will make sure to pass that information over to Natasha. Okay. Thank you It's super insightful and yeah, it's we just we need people to show up and help out So I love the offer for sure join us on slack join the the mailing list meetings. 
I am an enthusiast — I have my own org, kustomize-everything, where I have some crappy GitHub Actions that I use to do rendered YAML manifests and promotions. So yeah, Kargo is going to replace that, I hope. But like I heard, we need a clear "either this or that" — yeah, just choose something and I will be there. Anyway, thank you.

Thank you. We're over an hour; we'll do one last one — hopefully they don't kick us off the stage.

Yeah, I think mine will be quick. I manage a platform engineering team at a pretty small end-user organization — we have about a hundred developers — and I'm pretty ignorant of the inner workings of the CLI, but the talk around kubectl preferences is something I'm really interested in, in the vein of setting up guardrails and being able to opt into warnings and things like that. I'm curious whether you've thought about use cases where an organization or a team wants to have a shared preferences file — like, I want my team to have warnings opted in, or certain behaviors enabled, and maybe they can override it, maybe I don't want them to. Is that something that's being discussed at all?
I have not thought about that; I will think about it more. It'll probably come after I actually get the thing working to begin with, with local files, but I think that's an interesting idea. For now — for 1.0 — it'll probably end up being that you have to distribute the file, like if you're imaging laptops for your developers, or if they set up their own, or anything like that.

We've got a bootstrap script.

I mean, a lot will depend on how you're providing the machines for your users. Assuming you're providing them with machines where they don't have access to the top-level directories, we could, for example, implement it the way a regular Unix system works, where you have /etc directories containing top-level, system-wide user preferences that the user cannot modify, and then each user has the ability to customize their own settings within their own local directory. By providing those levels, we could potentially address the use case that you might have. But, like Marley mentioned, we need the first thing, and then we can probably extend it with additional steps. That's very valuable feedback — thank you very much.

Yeah, thank you a lot. It's — I think it's KEP-3104. What is it? Yeah, 3104. KEP-3104, if you want to find the issue and drop ideas, thoughts, feature requests. KEPs are stored in the kubernetes/enhancements repo.

Well, thank you all for joining.