Okay, folks: good morning, good evening, good afternoon, depending on where you are. Welcome everyone on site, and welcome everyone joining virtually. My name is Maciej, and I'm joined here by Katrina and Eddie, who are co-chairs of SIG CLI. We're missing one more person, our dear friend and SIG CLI co-chair Sean. Hi Sean, if you're watching this today — although it might be a little late for you, it's the middle of the night in California.

Anyhow, what do we do? There are two primary things we work on: adding tools and libraries for talking to your Kubernetes clusters, and making them awesome both for users and for the developers building on top of them. If that's too vague, it means we build kubectl and Kustomize — I don't think I need to introduce those two. We also work on Krew and krew-index, the plugin manager for kubectl; Kui, a graphical user interface wrapped around kubectl; a bunch of libraries that you can all use to build your own tools, even better than kubectl; and a recent addition, KRM functions, which is basically a Kustomize plugin mechanism that Katrina will talk about in a bit. You can find us in the #sig-cli Kubernetes Slack channel as well as on the mailing list, and we also meet every week on Wednesdays — the meeting times depend on where you are. For all Europeans, since we're in Europe: it's 6 p.m. every Wednesday, and depending on the week it will be either the SIG CLI meeting or the Kustomize or kubectl bug scrub, where you can volunteer to work on specific bugs.

What are the main Kubernetes Enhancement Proposals (KEPs) that we worked on in the past couple of releases?
The first is kubectl debug, which is still in beta. It helps you work with, or debug, the problems you might have with your applications or containers, and it relies heavily on ephemeral containers. There's another proposal we put together just recently about exit code normalization. We're hoping to slowly start implementing those features, and if you have ideas about exit code normalization for CI usage and so forth, we're more than happy to hear from you. Nikhita and her group helped add subresource support to kubectl — she worked on both the server-side changes and kubectl itself. And like I mentioned a minute ago, Katrina will be talking a little more about the new Kustomize plugin mechanism.

We also want to use this meeting today to call out several people who helped us push certain major changes to kubectl. First, Katrina, who exposed the Kustomize version directly in kubectl version. That's very useful, because a lot of people were frequently asking which version of Kustomize is embedded, whether you use kubectl directly or you build on top of kubectl and vendor it into your own tools. So thank you very much for exposing that information. A big shout-out goes to Lau, who was very patient with mine, Eddie's, and a couple of other folks' reviews while making the kubectl help output much more readable. Previously it was packed with tons of information; now it's much more pleasant to the eye. Another amazing shout-out goes to Marc, who appeared out of nowhere and improved or revamped almost all of our completions. Not only did he improve the kubectl completions, he also devoted a lot of his own time and energy to pushing the changes into Cobra — so if your tool uses Cobra directly, you can also benefit, and feel free to shout out to Marc for his awesome work. Kumar had numerous conversations with me about the discovery cache we have in kubectl, which previously was being repopulated
every 10 minutes; we bumped that limit up to six hours, so that should somewhat limit the amount of data we're pulling from clusters. Additionally, Jordan, along with SIG Auth, has been revamping how we use secrets and tokens. They're trying to limit the number of secret tokens in use, so they introduced a new API for requesting tokens and exposed it under the kubectl create command. We also added defaulting to the first container in kubectl logs and the other commands that pick a container when the default-container annotation isn't set. If you don't know about that annotation: if your pod has several containers in it, you can annotate it — if I remember correctly, it's kubectl.kubernetes.io/default-container — to mark a default container, so that kubectl logs, exec, and similar commands pick that container and you don't have to specify it on your command lines. Hopefully that helps with all of your work.

I'm going to pass over to Katrina to talk about KRM functions. Thanks, Maciej.

So, another thing that's new in the SIG since our last update is that we have a new subproject called KRM Functions, and we thought we'd give you a bit of an intro to this subproject and what it's all about. The KRM Functions subproject is based on a specification called the KRM Functions Specification. It's actually a very long document that you can go read at the link below; it lives inside the Kustomize repository, but the most important part is this sentence: it's a standard for client-side functions that operate on Kubernetes declarative configurations. That's pretty dense, so let's dig into what that means a little. First of all, the KRM part: that stands for Kubernetes Resource Model.
That means what we're talking about here are declarative objects in the standard Kubernetes resource format. The functions part means executable programs, specifically ones built to be small, interoperable, and language-independent, and these functions are there to operate on those Kubernetes Resource Model objects. That might sound a little like a controller, but the third part is the key distinction: these are built for use on the client side, not on the server side. Specifically, they're meant to be chained together as part of a configuration management pipeline, in tools like Kustomize — in fact, this is actually a way to build Kustomize extensions — and also extensions for another tool in the configuration management space called kpt. It's an open standard, which is really cool, so we can build these functions in a way that they'll be interoperable across these client-side tools.

Let's take a look at what this might look like in practice. As I mentioned, the heart of the specification is using KRM-style objects on the client side. We use them not just because that's what we're manipulating — our goal in configuration management is to produce the set of configuration we're going to deploy — but because we also use these declarative specifications to describe the operation our function is going to do. In this case, we're going to do something super simple: we have our ValueAnnotator kind, and it's just going to inject this "very important data" as an annotation on all of our resources. So this is our specification, telling us what our function needs to do. But on its own this wouldn't be enough for our function to do its work, because what is it going to operate on, right? We need the list of resources — we're on the client side.
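The slide's function configuration isn't reproduced in this transcript, but based on the description, a ValueAnnotator config might look roughly like this. The apiVersion group and field names are illustrative guesses, not the exact slide content:

```yaml
# Hypothetical function config: a declarative description
# of what the function should do.
apiVersion: fn.example.com/v1
kind: ValueAnnotator
metadata:
  name: annotator
value: very important data
```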
So that list actually needs to be given to us. What the specification says is that you use a kind called ResourceList — a standard list kind — which contains not only the function configuration telling us what to do, but also the input list of Kubernetes objects we're supposed to operate on. Our function takes this ResourceList as its input, does whatever the function config tells it to do, and then uses the ResourceList kind as its output as well. That comes back to the core principle of making these things chainable. So you have your output items, and you also have a field where you can put structured results. One of the use cases for KRM functions is validation, so you might need a structured way to describe the results of a validation operation, and that's what that field is there for. In practice, not only do you have the function config as your input, you have this entire ResourceList. In our example here, we're going to operate on a very simple list of one item — one ConfigMap — and we have the same function config we saw before, telling us to add that important data as an annotation. So we're going to take that as our input.
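Putting that together, the input ResourceList for the one-ConfigMap example might look roughly like this. The ResourceList kind itself comes from the specification; the functionConfig fields are illustrative:

```yaml
apiVersion: config.kubernetes.io/v1
kind: ResourceList
functionConfig:            # what to do (illustrative fields)
  apiVersion: fn.example.com/v1
  kind: ValueAnnotator
  metadata:
    name: annotator
  value: very important data
items:                     # what to operate on
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example
    data: {}
# The function's output is another ResourceList: the transformed
# items, plus an optional `results` field for structured results
# (e.g. from a validation function).
```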
We're going to apply the transformation to it, and we're going to emit another ResourceList — with the transformed ConfigMap carrying our important data right in that annotation — as our output. It's a very simple example of how a KRM function can work.

As for what this looks like in practice: Kustomize and kpt are the two tools driving the function specification, so we're going to take a quick look at them. In Kustomize specifically, we have on the right-hand side a kustomization.yaml that references annotator.yaml, which on the left is that same specification of what the function is supposed to do. Here we have it referenced in the transformers field. As I alluded to a moment ago, you can also build validators this way — it works in the validators field — and it also works in the generators field if you are creating resources. So in your kustomization.yaml, there are three places where you can use it. For Kustomize, however, you might notice an additional stanza in the annotations of your configuration object: this config.kubernetes.io/function annotation that refers to a container. That's because in Kustomize we don't know where the code you wrote is — you have to tell us. This is the function annotation we use to do that, and here we're pointing to a Docker image. This is not super elegant, especially because the folks writing the functions probably aren't the same ones consuming them, so we want to make this a lot nicer, and we have some big plans in that area. I don't have time to get into all the details here, but there are three separate KEPs around this plan for making Kustomize extensions great using the KRM function specification. The top-level one is the plugin graduation KEP — that's sort of the umbrella KEP describing the overall arching plan — and the other two go into details about specifics that have to do with making that plan successful.
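Stepping back to the worked example: to make the contract concrete, here is a minimal sketch of a ValueAnnotator-style function in Python. Real KRM functions typically consume YAML; this sketch uses JSON via the standard library to stay dependency-free, and the annotation key and functionConfig field names are assumptions for illustration, not taken from the spec:

```python
import json
import sys


def annotate_resources(resource_list: dict) -> dict:
    """Apply a ValueAnnotator-style functionConfig to every item.

    Adds the configured value as an annotation on each resource in
    `items`, then returns the ResourceList so the result can be
    chained into the next function in the pipeline.
    """
    value = resource_list["functionConfig"]["value"]
    for item in resource_list.get("items", []):
        annotations = item.setdefault("metadata", {}).setdefault("annotations", {})
        annotations["config.example.com/value"] = value  # hypothetical key
    return resource_list


def main() -> None:
    # Entry point when run as a real client-side function:
    # ResourceList in on stdin, transformed ResourceList out on stdout,
    # which is what makes these functions chainable.
    json.dump(annotate_resources(json.load(sys.stdin)), sys.stdout, indent=2)
```

Because the function is just "ResourceList in, ResourceList out", the same executable works unchanged under any driver tool that implements the specification.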
So if you're interested in Kustomize extensions and KRM functions as they relate to that, check out these KEPs. And since kpt is the other tool that supports the specification right now, I thought I'd also show what it looks like there — it's very similar. Again, you have your configuration object, and you reference it, in this case in a Kptfile, and you're also pointing to the image that provides the implementation. The great thing about having an interoperability-focused standard is that not just these two tools, but any other client-side tool in charge of manipulating Kubernetes configuration — a very common task we all need to do — can adhere to the specification, and then we can write KRM functions that these tools can share to implement commonly needed logic.

The KRM Functions subproject of SIG CLI was actually created to host the code for the new KRM functions registry, which was also proposed in a KEP. The idea is that we'll have a SIG CLI-sponsored registry of KRM functions that are useful to everybody and that can be shared among the two tools — for starters, Kustomize and kpt — but also any other tools that adopt the standard. A good example: currently Kustomize and kpt both have independent implementations of a Helm function, and we are moving those together into the functions registry to make one much better function they can both use. This is a very new initiative — we're still getting it spun up — which also means it's a great avenue for getting involved in the SIG. If you're interested in the standard, if you're interested in the functions, or if you have a configuration management tool that you think could benefit from operating this way, come talk to us. We would love to hear from you. And on that note, we have a bi-weekly meeting.
It is at the times listed here, which you can also find in the official SIG CLI meetings calendar. With that, I will hand it off to Eddie.

So I have a couple more slides, and then we have plenty of time for some discussion and questions. I've got to stand on the soapbox for a minute, though, because we have to talk about declarative versus imperative workflows. This is something that comes up quite often, and we just wanted to give the community a reminder about it. I'm going to talk about three different management techniques. Note that these techniques should not be mixed: you should only work with resources that were created and operated on by one technique. If you do mix them, the result is undefined behavior, so you will probably break things.

The first technique is operating with purely imperative commands — your create and replace. This is where you are manually, on the command line, doing a create deployment --image and giving it an image. You are operating here on live objects, and this is great to use when you are developing. We do not recommend you use these commands at all when you are working with production resources. So: great for developing, don't use in production. This obviously has the lowest learning curve, which is why most people jump to it, but it's really important to learn the other techniques.

The second is imperative object configuration. This is where you write your Kubernetes resource YAML out to a file, and then do create -f and replace -f. This is nice because you are replacing your resource in the cluster completely — it paves over it — but it is still an imperative operation, and it can potentially leave you in a mixed state depending on what the resource was before, because it doesn't take into account anything that was there. If you had gone in and done a set image,
this will replace it; if you had set an autoscale manually, or done a kubectl scale, this will overwrite it. It's a complete replace — again, you don't want to mix these commands.

The last technique is the one we actually recommend and really want you to use: fully declarative object configuration, using kubectl apply. Quick show of hands: who uses apply? Great, that's awesome — keep using apply. This is where you have your file written to disk, you use kubectl apply, and it does a three-way merge with an annotation that gets written to your object's state in the cluster: the last-applied configuration. I'll talk about that in a second. So, declarative commands — there's only one, and it is kubectl apply. Please use kubectl apply. It is not good to mix these commands, obviously, but this is what we want people using; it is the best way to manage your Kubernetes resources. You would run this inside your GitOps pipeline: your state lives in your file on disk, and there is never a question about where the source of truth is — the file on disk and the object in your cluster will always match. And again, if you make imperative operations on that resource, it will blow away that annotation, so your three-way merge will fail.

These are the imperative commands — who here uses these in production? A couple of folks. Yeah, these are not ones you want to use in production. Did folks know that kubectl rollout undo is an imperative command?
Yeah. This is a command for when you made a mistake — you realized during a rollout that maybe you had the wrong image configured, or production is coming down — and you would use rollout undo. It definitely breaks things, because it blows away that last-applied configuration. We've actually had an issue about this with a ton of community back and forth, because it is a very convenient command, but it's not one you should be using. Your undo should be a git revert and then a reapply in your pipeline. So again: these are the imperative commands, great for development, but please don't use them in production.

This is the annotation I was talking about: mixing kubectl apply with the imperative object configuration commands create and replace is not supported, because create and replace do not retain the last-applied-configuration annotation that kubectl uses to compute updates. Please use apply.

Cool — just some items top of mind before we get into our discussion. We have been in the process of refactoring our kubectl commands. Does anyone right now depend on kubectl code in their applications? Any folks? Yeah — so there's currently no guarantee that we won't break things with kubectl commands. We provide no SLA.
We want people to use them and give us feedback, but again, we may need to refactor a command, and it might break. We want to get to the point where we consider the kubectl library code stable so that other people can consume it. That's what this big refactor has been about: moving toward the kind of interface style we want for a command. Once that's done, we will definitely call it stable and let folks know they can start consuming it.

The next thing we're thinking about is a better way to handle flags. Structured data like Kubernetes resources is not meant to be represented as terminal flags. If you think about a full Deployment resource, and all the flags you would need to specify to do a create with it, it is awful — terrible maintainer overhead — and it's just not meant to be represented that way. So we are trying to think through a way to make this better for folks. This really comes from a place where we have flags for some create commands, but not for all the fields, and people regularly open an issue, or the community opens a PR, to add a new flag. We've opted not to merge these right now, just because we really don't want to bloat the flag space of the commands. So please don't take that personally, and don't take offense — we really just need a better way to do this overall. If you have ideas, please let us know.

We also want to work on a kubectl generate-type command, where you can scaffold out a template. A lot of folks would do a dry-run create and then write that out to a file on disk.
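As a sketch of that dry-run workflow (the deployment name and image are illustrative, and the commands assume a configured kubectl):

```shell
# Scaffold a Deployment manifest locally; --dry-run=client only
# prints what would be created, without touching the cluster.
kubectl create deployment web --image=nginx \
  --dry-run=client -o yaml > deployment.yaml

# The file on disk is now the source of truth; apply it declaratively.
kubectl apply -f deployment.yaml
```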
That's what they recommend you do if you're studying for the CKA, or whatever it's called. We want to find a way to take OpenAPI example data and use it to create templates and scaffold out files on disk. This is great for CRDs, because CRD authors can specify exactly what they want via the OpenAPI spec. If anyone's interested in working on that, please let us know.

As a project, we are focusing on reliability. We have a lot of things that aren't tested, or are tested poorly. We will not be accepting PRs that don't have tests — and in fact, if you are making a PR to a command that doesn't have tests, we will push back and ask you to add tests for the command. This is just a stance we've taken as a project: instead of assuming these tests will get added in the future, we need them added at the time of the PR. There's a reliability bar KEP that's open, and an open letter from a bunch of us maintainers. Again, this is not personal and not meant to offend — we just need to stabilize the core as a product and make it more sustainable for everyone in the long run. So if you get pushback, please don't take offense.

Lastly, we have a Kustomize documentation revamp. Shout-out to the Replicated folks — go check out their booth. They donated kustomize.io and kubectl.io to us as a project, so we plan to redo all of the SIG CLI documentation. Kustomize is definitely in need of the biggest revamp, so that's where our efforts are going to go first. If you would like to participate in that, please let us know.

And then, new contributors: who here has contributed to any SIG CLI code before? Can I get them a round of applause?
Yeah — thank you. So as a project, we are in dire need of new contributors. Folks want to step down from maintainer and lead positions, but don't actually have people to hand off to. If you are interested in contributing, please join us at any of our SIG meetings, or any of the other SIGs' meetings — you can find them all in the kubernetes/community repo. Talk to your managers, talk to your VPs; see if they'll support you with 20 percent time, 50 percent time, whatever you can ask for. Please let us know, especially if your company sells products or services in the Kubernetes space. We really need some new contributors, and we are absolutely open to mentoring and bringing folks on as maintainers. We have mentoring cohorts that are starting. The only thing is, we need a commitment that folks aren't going to disappear after one or two PRs — that's the quickest way to maintainer burnout, where we're constantly training new people and then they disappear. So please join us, and please make a firm commitment to help us out.

With that, we've got a bunch of time for questions and Q&A, and really we want to hear from you: the things you're struggling with, the things you're working on, and what you think we should be working on. So we'll sit in these uncomfortable chairs.

Now, this is a very important question: is it "kube cuttle" or "kube control"? The release notes of 1.9 say something about that, I think — Maciej, I always forget what the verdict was, though. The verdict is: whatever you prefer. But no, honestly, both are fine — although if you start looking at our logo, which is a cuttlefish... yeah, you brought it up. If you look at the cuttlefish, it leans toward "kube cuttle". I purposefully say it differently every time; I don't know if anyone noticed, but I do it to trip you all up.

Hello, hi, my name is Amrita, and I have a question. I've recently had experience working with Kubernetes and kubectl and debugging cluster pods.
The problem I faced is that most of the time, when pod deployments fail, it was really hard for normal — not very deep — Kubernetes users to debug: the visibility of the logs, and catching the particular error, where it failed and what the error message is, is hard. So my question is: is there any future step or enhancement in mind to improve log debugging from that perspective? Thank you.

Okay, I'll take that question with half of my SIG CLI hat and half of my SIG Apps chair hat on, because a lot of the logs and the information provided by the cluster rely heavily on the authors of the controllers. We actually have an ongoing KEP in progress where we're trying to unify the statuses of all the controllers so that, first of all, you could build tools on top of them. The problem we currently have is that each and every controller was written by a different person at a different point in time, and that led to a situation where every author just figured, "Oh, I want to know about this state and that state," but there's no continuity, and it's hard to follow what's going on with every separate controller.
We're trying to somehow unify this — both for people trying to build tools on top of it, and for new people debugging. For example, if you're a newcomer and you start working with Deployments, you'll learn how to work with Deployments; but if you then switch over to DaemonSets or StatefulSets, it's hard, because the statuses and the information provided by those controllers are completely different. That's what we're trying to fix, and it will help users: if I work with Deployments, it should be natural that I more or less know, at least in general, where a controller is at or what it is struggling with, rather than having to learn anew how each controller works. Of course, it won't solve all the problems — there will always be some intricacies of each controller that we can't share — but we're trying to improve the situation. On the CLI front, we are pushing the kubectl events command, which exposes that information; on the controller side, we're hoping, through those events and statuses in the controllers, to somehow improve things. There's also the kubectl debug command, which Lee and a couple of folks are helping with — I was just recently reviewing a PR — and we're hoping to push it forward: as soon as ephemeral containers are GA, we're hoping to GA debug. So I'm pretty happy and looking forward to it.

There's a question here on the virtual platform about tips for newcomers on how to get involved. So, that refactor I mentioned: we are zeroing in on what we want a kubectl command to look like, again so it can be used by you and your library code. That's probably going to be the best on-ramp: once we stabilize and agree on an interface, we have a ton of kubectl commands.
They are going to need to be rewritten to match that specification, and we will have a big issue tracking all of it. Really, the best place to join us is in Slack or at our meetings. Introduce yourself — the Kubernetes community is exceptionally welcoming, especially to first-time people. At your first meeting you'll just be asked to say your name and what you want to work on, so I highly recommend you jump in and join us. That is probably going to be the best starting point.

It also depends on what your skills are and what you're interested in, because we do have a lot of different opportunities available. We called out a few different things in the talk — like the Kustomize documentation revamp, which is very accessible. We're basically taking one website, restructuring it into a new place, and launching it fresh. And even if you're not as familiar with the tool, you can learn in the process and help us make it better and clearer as we move the content over, so that's a really good, easy, accessible one. But there are all sorts of things you can do, like helping us with issue triage: going through and confirming that, yes, this actually is still an issue in the latest version. We don't have nearly as many people on the projects as we would like to have, so we're very open to various kinds of contributions. And like Eddie said, point out to us that you are committed to learning: if you are going to be around for a while, then we can mentor you and help you learn these areas — we just need to know that's the case.

An important thing I always repeat to everyone I'm pinging: if I review your PR, go find me on Slack, because I have already declared bankruptcy on GitHub notifications.
Slack is probably the easiest, but even then, please don't be offended if I disappear for a week or two, or a little longer. Just keep pinging me — I'll get back to it. Sometimes it takes me days or weeks; there's nothing personal about it. I'm jumping between my professional duties at Red Hat and my community duties. Like I mentioned earlier, I'm responsible for two SIGs — SIG CLI and SIG Apps — so I'm jumping between topics. Oftentimes I'll leave SIG CLI behind a little, because I have Katrina and Eddie and Sean around, while I jump on SIG Apps and try to push that boat forward. And I know the same applies to Katrina, to Eddie, and to all the people — there's only a handful of us. Like we said, we need more people, and we're more than happy to help on-ramp them; it just takes some time and a little more patience.

Yeah, exactly — just be patient with us. We're not ignoring you on purpose; we will get to you eventually. We really do want your help, and we appreciate your patience.

And we don't just need developer contributions, either. If you have PMs who are interested in contributing to open source, we would love some good PMs to stick around and help us triage and respond to issues — "can you please provide us a reproduction example?" So yes: PMs, and technical writers for documentation. You don't have to be a developer. And if you are a developer and you don't know where to start, you can just review PRs on the Kubernetes project itself — you don't have to be a project maintainer to review a PR. We welcome and encourage anyone who helps get the code to a higher bar for us.

I have another question. We have an error we've encountered multiple times, related to sealed secrets.
When we do the kubectl apply and the secret creation fails in the controller, the pod cannot be started, and this is a thing we cannot see when we apply — even if we have it in the CI pipeline. Are there any plans to integrate such external controllers, which the deployment depends on?

So that's a CR — if the CR doesn't exist, the deployment can't start. Is that what you're saying? A custom resource, sorry — yes, the sealed secret is a custom resource. So you have a custom resource that creates a Secret resource that is consumed by a Deployment, and you don't have enough visibility into the Deployment's reason for failing to start, which ultimately traces back to the sealed secret. Did I understand correctly? Not that I'm aware of — I don't know of anything specific. But that goes back to what I was talking about earlier, about improving the statuses and improving debuggability. That's the biggest issue. Previously we focused a lot on "let's add more functionality," adding more and more and more; we have now passed a threshold where a lot of people are using these tools on Kubernetes, and we are overwhelmed by "how do I debug this? How do I
learn more about what's going on?" A lot of that information, unfortunately, is hidden deep in the controller logs — in your case, that would be the deployment controller logs — and the only way for you to debug is to be a cluster admin or to have access to the controller logs. That's a shame. We have a similar problem with the scheduler, which is a third area I'm also looking into closely, because my team works in that area too, and we have exactly the same problem. What's even worse with the scheduler logs is that to see them, you have to bump the scheduler's log verbosity to the crazy 10 or 12 level, which basically means you're getting an incredible, almost unparsable amount of data that you have to go through. And what's more, you're getting that information for all the pods, so tracking the one particular pod you cared about is very hard. There is a lot of work around structured logging on the server side to simplify and help with debugging, and on top of that, we will slowly be exposing as much of the information as we can to users who don't have that much
I see how my customers are struggling with this From pretty much cli all the way back to scheduling same applies for On the api side of things which is another area of my interest If you have some admission web hooks or any kind of admission plugins That are part of the process of applying your resources They are that information is again lost in the in the api server locks and you don't have that information Or because uh a very frequent case that I I'm struggling and we're trying to expose that information Uh garbage collection will stop working So you can't remove your resources because there is an admission misbehaving Because garbage collection relies on being able to read all the resources in the cluster and know that oh, yes I can safely remove it or not And if it can't read all the resources It will just won't remove anything and you'll have pots and terminating forever Until the problem with garbage collection is solved which boils down back to the api server and the admission which is Preventing the the full discovery from succeeding Quota has the exact same problem as well. And yeah, so It's it's in the works. It'll take some time Yes, I'm more than happy to mentor folks for both the Sixie lies the gaps It just takes some time I hope that answers at least some of your question. Thank you very much all