Welcome everyone. Good afternoon. Or as I would normally say, good morning, good afternoon, good evening, depending on where you are. But since we're all from or in one time zone, I can just go with good afternoon. My name is Maciej, and I'll be talking to you about kubectl and oc, specifically about SIG CLI, and a little bit about my team, because my team owns oc. But before I do, let me see a show of hands: how many of you have ever heard about kubectl? Ah, cool. How many of you have heard about oc? And the same about usage, kubectl and oc? Great. Okay, how many of you have problems with the pronunciation of kubectl? There are three or four possibilities: kube-cuttle, kube-control, kube-c-t-l. And I'm pretty sure there are a couple more. So even though we probably don't have an official way of saying kubectl, the logo that we have suggests it should be "kube-cuttle". Why? Because our logo is a cuttlefish, but we'll get to that one in a bit.

So why am I speaking to you about this in the first place? I'm 50% a tech lead and 50% a chair for SIG CLI, which takes care of kubectl, and I'm 100% the lead of the Workloads team that is responsible for oc. So I think I have a quorum to say the majority of these things. I mentioned that the greeting from before is the one I always use during our bi-weekly SIG CLI meetings. Those happen every other Wednesday; the next one is next Wednesday, and then every two weeks after that. You can find us on the #sig-cli channel on the Kubernetes Slack, and we also have a very low-traffic mailing list, if you're interested.

I mentioned the logo. If you haven't seen the logo, I would like to see a show of hands: who has not seen this logo before? That's bad. That means you're not following me on Twitter, because literally almost a year ago I was tweeting that we got a logo. But it also means you are not showing up for the SIG CLI meetings, which are cool, we are very friendly, and we do really cool stuff. But honestly, yeah, that's basically what our logo looks like, and all of the credit goes to Ashley McNamara. We asked her over a year ago to create a logo for us. We only had a rough idea that we wanted a cube with a cuttlefish next to it, and she came back to us with such an amazing graphic. So thank you very much, Ashley, if you get a chance to see this.

This is kind of a replay of the talk that myself, Phil, who's the other tech lead for SIG CLI, and Sean, who's the other chair for SIG CLI, did in San Diego in November at KubeCon. Back then, we were given 90 minutes. Today I only have 21 minutes left, which is not enough. Back then, we decided that we wanted to have a conversation with the audience, and we literally asked people to ask us about anything. It can be as simple as, well, I have a bug open, or I have a PR, can you help me review that one? All the way up to: what is the future of SIG CLI, where are we planning to be in a couple of months, a couple of years? So before I go to the rest of the presentation, my question is: does anyone have any questions? It can be a PR that you want to have reviewed; we will review it afterwards, but I'll note it down. Or anything related to either kubectl or oc. If you don't have questions, and trust me, I would say there are five times more of you than there were people back then in San Diego.
I'm pretty sure that if they were able to come up with questions for 90 minutes, you can come up with some questions for me right now. If not, I can go through the slides that I have prepared. I'll talk a little bit about where we are, what we've done in the past, what we are planning for the future, and how you can contribute to kubectl or oc. Are there any questions as of now, or should I proceed with the presentation? Okay. Great. That's a valid option as well.

So the first question that came up back then in San Diego, and I'm pretty sure a lot of people still find it confusing today, is the Kubernetes repo versus the kubectl repo. We're in the ongoing process of splitting kubectl out of the main repository. So in all honesty, if you have a PR, a fix, or anything regarding kubectl, you are still submitting that against the main Kubernetes repo. The kubectl repo is for issues, and it's a mirror that you can consume if you're planning to build something on top of kubectl, like what we are doing in OpenShift with oc, where we build additional, OpenShift-specific elements on top of it. So, actually, that's the relevant directory, and the crucial name that appears there is staging. If you've ever had a chance to look through the Kubernetes repository, staging is a temporary directory that is meant to separate the internals of Kubernetes from the repositories that are planned to be extracted from the main repo. This ensures that we have clean dependencies, and that there are no internal dependencies that will break when we want to get this thing out. Unfortunately, because kubectl was initially written inside Kubernetes, there are still a few dependencies, and those few require a significant amount of time to get out of the main repo, so my estimate is that in approximately the next six months we will be fully out of the main repo.

For oc, things are much easier. oc itself is already out of the main repo. It used to live in openshift/origin, but not anymore. We put a lot of effort in during the spring of last year, and before summer we were actually in a separate repository. Unfortunately, because oc consumes kubectl, and kubectl still has hard dependencies on Kubernetes, we vendor all of that. That's why it's also important for us to clean up the dependencies inside kubectl and cut that dependency chain significantly. I also mentioned before that oc is actually a consumer of kubectl, which basically means that everything that works in kubectl, any command available in kubectl, is also available in oc. The other way around is not true, because we have OpenShift-specific items, such as working with builds, deployment configs, routes, image streams, and several other resources that are very OpenShift-specific. Of course, with the overall goal of moving towards plugins and making everything accessible through CRDs, the difference is becoming less and less visible, and it is also a goal for my team to erase those differences entirely.

That is an important topic. I've heard lots of people asking: what are the compatibility guarantees? We have lots of issues where people are using a newer kubectl, 1.17, with very old clusters, because that's the policy they have in their company, and they're complaining that something is not working.
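As a rough illustration of that skew problem, the first thing worth checking in such a report is the client and server versions side by side; a quick sketch, assuming a working kubeconfig:

```bash
# Print the kubectl (client) and cluster (server) versions. The strong
# compatibility guarantee discussed here is roughly +/- one minor version
# between the two.
kubectl version --short

# Illustrative output only:
#   Client Version: v1.17.0
#   Server Version: v1.13.12
# A four-minor-version gap like this is well outside the supported skew,
# which is usually where the "something is not working" reports come from.
```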
With the amount of changes that happened in the past year, or slightly over a year, especially with regard to the extension points such as CRDs, and I'll be talking a little bit about server-side printing, the strong guarantees that we have are about plus or minus one minor version currently. That will change, because the majority of the code is slowly getting stabilized, so version differences should not affect you as much.

There's also one additional goal. One of the reasons for us to split kubectl into a separate repo is to be able to iterate faster. I'm not sure how many of you are aware, but Kubernetes is released once a quarter, and we still think that's too slow, because we would like to deliver features to our users much faster. Fixing bugs is one thing, and it's not about breaking compatibility, but about providing additional capabilities to users. But if we want to release faster, we need to ensure that the backwards compatibility window, at least the plus or minus one that Kubernetes currently offers, increases significantly. If we release, let's say, once a month, that basically means we will have to have backwards compatibility of about plus or minus four, five, six versions. But with the current knowledge, it should not be a big problem.

Plugins. I've mentioned them several times; extensibility has been a big topic for two years or so. We have the ability to swap container runtimes in Kubernetes: you can use Docker, you can use CRI-O. There are options for different storage backends: you can use AWS, GCP, whatever. There are so many of them that I have no idea about all of them, but thankfully I know whom to ask about those. The same applies to the API. For the API, it's CRDs, custom resource definitions, which allow you as users, or your cluster providers, to extend the capabilities, to extend the resource model that Kubernetes has, with your own resources, especially when you're working with operators. That's a crucial element. The problem is that CRDs are not necessarily well supported through kubectl, because kubectl currently, as it is, has a lot of knowledge about the types it works with baked into the code at compile time, and we needed to fix that. The simplest thing possible was to have an extension point inside kubectl and oc, so that people can write their own functionality and extend the built-in capabilities of kubectl.

This is where we introduced cli-runtime. cli-runtime is a building block that kubectl is the first consumer of. It's a set of functions and helpers for working with printing, so that you can have the same, identical printing mechanism that kubectl has. That basically means if you pass -o, you can pick between JSON, YAML, JSONPath, Go templates, and so on; there are several of them (see the examples below). You could write those on your own, but it would take some time, which is obviously time-consuming. Ideally, you can just consume whatever kubectl does and reuse that functionality, and that increases your user base, because users who are already accustomed to kubectl will have the same experience using your plugin.
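To make that concrete, here are a few of the -o output formats kubectl supports out of the box, the same ones a plugin gets essentially for free by reusing cli-runtime's print flags; the resource choice here is just an example:

```bash
# Default human-readable table (produced server-side, more on that later).
kubectl get pods

# Full objects, structured output.
kubectl get pods -o yaml
kubectl get pods -o json

# JSONPath: extract just the pod names.
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'

# Go template: same idea, different syntax.
kubectl get pods -o go-template='{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'
```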
Like I said: plugins, printing, the configs; there is also a common interface for consuming and working with resources, the resource builder. But I don't want to go too much into the details, because that might be a little bit boring; although for me it's quite fascinating, I don't want to bore you too much.

For starters, we figured it would be nice to write a sample plugin, and that's the second link. It's literally the simplest possible plugin that you can think of. If I remember correctly, this is the plugin itself; that's how simple it is. The majority of the code is already there for you, and we are reusing the generic CLI options from cli-runtime. The command itself modifies the kubeconfig so that you can set a namespace inside the config and don't have to pass kubectl -n every single time. Users of oc might be familiar with oc project, for switching the project you're working on, so that you permanently work within a particular project. kubectl does not have that notion, so we wrote a simple plugin for it. It's super simple (there's a rough sketch of the idea a bit further down).

Finally, when it comes to managing plugins, the problem is: how do you manage plugins? Given that you have a set of users, and currently we have quite a few plugins, honestly. I can't remember the number off the top of my head, and I haven't written it down, but the number is growing significantly. So Ahmet figured out that it would be nice to write a plugin manager. That started as an intern project at Google, and some time ago we adopted it, and it's currently one of the official SIG CLI sub-projects, which basically means we take care of the project and we maintain it. So Krew is itself a plugin to kubectl which allows managing other plugins. It takes care of the problem of installing plugins and managing them.

Features. I mentioned server-side printing before; what is it? It's the extensibility topic again. When you invoke kubectl get on a pod, kubectl knew beforehand how a pod is structured, what kind of fields it has, how to print it. It was literally hard-coded in the get command how to print a particular resource. The problem with that approach is that kubectl does not know every single resource or how to print it, especially those that are created either by additional API servers or as CRDs. So we figured it would be nice for the server to tell us how to print a particular resource. Currently it's the server's role, or the CRD author's, to tell the server how to print a particular resource; kubectl will get that information, read it, and print accordingly (there's a small example of this below). So the entire knowledge moved from the client onto the server. The additional benefit is that every single consumer, whether that's kubectl, or someone who decides to write their own client in a different programming language, or a web console, or something like that, can reuse the same primitives for printing resources and have consistent output.

There are two other interesting features that we are currently working on. The server-side printing work itself is still ongoing, because the describe functionality is still baked into kubectl, and we need to figure that out, because the same problem applies: we don't know how to describe a CRD. So if you try to describe a CRD at this point in time, the information won't be of much use, and we need to move that functionality to the server as well.
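The real sample plugin is written in Go on top of cli-runtime, but the plugin mechanism itself is just "an executable named kubectl-<name> somewhere on your PATH". As a hedged illustration only, here is a hypothetical shell version of the same namespace-switching idea; the names are made up for this sketch, and the actual sample lives in the kubernetes/sample-cli-plugin repository:

```bash
#!/usr/bin/env bash
# kubectl-ns: a hypothetical, minimal take on the namespace plugin, written as
# a shell script purely to show the plugin mechanism. Any executable named
# kubectl-<name> on your PATH becomes available as `kubectl <name>`.
set -euo pipefail

if [ "$#" -eq 0 ]; then
  # No argument: print the namespace of the current context.
  kubectl config view --minify --output 'jsonpath={..namespace}'
  echo
else
  # One argument: switch the current context to that namespace permanently,
  # so you no longer have to pass -n on every command (roughly `oc project`).
  kubectl config set-context --current --namespace="$1"
fi
```

Drop that on your PATH, make it executable, and kubectl ns my-namespace behaves roughly like oc project. And once plugins like this multiply, Krew is the piece that finds and installs them; a couple of illustrative commands, assuming Krew itself is already installed:

```bash
kubectl krew search ns       # discover plugins related to namespaces
kubectl krew install ctx ns  # install the popular ctx and ns plugins
kubectl krew list            # show what Krew has installed
kubectl krew upgrade         # upgrade everything Krew manages
```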
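To make the server-side printing idea concrete for CRDs: the "how to print me" knowledge lives in the CRD manifest itself as additionalPrinterColumns, and kubectl simply renders the table the server returns. A minimal, made-up example; the CronTab resource and its fields are purely illustrative:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    # The API server uses these columns to build the table that
    # `kubectl get crontabs` prints; the client needs no knowledge of the type.
    additionalPrinterColumns:
    - name: Schedule
      type: string
      jsonPath: .spec.cronSpec
    - name: Age
      type: date
      jsonPath: .metadata.creationTimestamp
EOF
```

After that, kubectl get crontabs shows the Schedule and Age columns computed by the server, not by kubectl, and any other client asking for the table format gets the same output.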
The debug and events commands will be new commands. For the oc users, the debug one should be pretty familiar. If you've ever used oc, oc debug has been there since, I don't know, almost the beginning; it allows you to create an additional pod for debugging purposes. You can specify what kind of image to use, or whether you're going to just debug a node. That's especially useful if you're working with OpenShift 4: by default, we cut off SSH access to the nodes, and oc debug node actually creates a privileged container that has full access to a particular node. With the development of ephemeral containers, and if you have questions about ephemeral containers, find me later, but in short, they will allow injecting a container that shares the process namespace, so you will have the ability to get directly at the processes of an already running pod. That's happening currently upstream; ephemeral containers are in alpha, or something along those lines. And the person that is working on ephemeral containers figured it would be nice to have something like kubectl debug. I contacted him, suggesting it would be nice if it covered many more use cases, such as the oc debug node functionality and many, many more. We put together a proposal, and the proposal was merged last week, if I recall correctly, and we should start the implementation soon.

The events command is where we gather up the experience of the past couple of years. Currently, the only way you can interact with events, aside from the OpenShift web console, which has a pretty nice interface for events, is kubectl get events, and that will give you all of the events, or all of the events for a particular namespace. That's a problem, because there's no good way of filtering and whatnot, and there are lots of requests from people who want to filter them, sort them differently, et cetera, et cetera. The problem is that those capabilities would be only for events, and kubectl get is not only for events, so we have a conflict of interest. That's why we figured: let's create a new command, a new command that will give us much more control over events, especially since there's a lot of work going on around standardizing events, so we want more control over what we can get out of them (a few example invocations are shown a bit further down).

Okay. So aside from Krew, which I mentioned before, the sub-project that SIG CLI owns, there are two others that I'll only mention briefly. Kui is very interesting, and it's currently in the process of moving from IBM over to kubernetes-sigs. It's actually a graphical user interface for kubectl. It's pretty cool, it could have a lot more functionality, and it's built on Electron. Kustomize, on the other hand, which is another sub-project of ours, is very helpful when you want to do configuration management. It's kind of like a templating engine, but not quite. If you haven't had a chance, go check it out. By default, it is part of kubectl: there's a -k flag hidden under kubectl commands, especially under apply, and it can read kustomization files.
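A few concrete commands related to the debug and events discussion above; the node and workload names are made up, and kubectl debug itself only exists as a merged proposal at this point, so the kubectl side below is limited to today's get events:

```bash
# OpenShift today: privileged debug pod with full access to a node
# (useful on OpenShift 4, where SSH to nodes is cut off by default).
oc debug node/worker-0

# OpenShift today: debug a workload by starting a copy of its pod with a shell.
oc debug deployment/my-app

# kubectl today: events are only reachable through the generic get command,
# with filtering and sorting bolted on via general-purpose flags.
kubectl get events --sort-by=.metadata.creationTimestamp
kubectl get events --field-selector involvedObject.name=my-pod,type=Warning
```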
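And to show what that -k flag actually consumes, here is a hypothetical, minimal layout; the directory name and file contents are assumptions for illustration:

```bash
# A hypothetical directory consumed by -k:
#
#   my-overlay/
#     kustomization.yaml
#     deployment.yaml
#
# where kustomization.yaml could look like:
#
#   resources:
#   - deployment.yaml
#   namespace: staging
#   commonLabels:
#     app.kubernetes.io/part-of: demo
#
# Render the result without applying it:
kubectl kustomize ./my-overlay

# Or build and apply in one go with the -k flag mentioned above:
kubectl apply -k ./my-overlay
```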
The future? The separate repo, that's probably the biggest item we're currently working on. I mentioned that we started this over two years ago; it's ongoing work, and we're almost there. The problem is that only the big things are left, and this is where we are. Dynamic commands are an interesting topic again; they follow the same path of extensibility, plugins, et cetera. It's basically about moving the knowledge of how to run specific things over to the server. This is especially important for us for the create commands. Currently, the create commands literally embed the Go types for particular resources, so create only works with the built-in resources. And I'm not talking about create -f, where you pass a file to create the thing, but rather about create deployment, create whatever. The same applies to set: set image, set probe, and stuff like that (there's a short example of those commands at the end). We want to move as much of that functionality as possible over to the server. Command headers are a way for us to gather some kind of metrics about which commands people are using; if you want to know more about that model, come find me.

Okay. A reminder of where the meetings are. If you have any questions, I'll be here for a little bit. Come find me. Thank you.
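For reference, these are the kinds of built-in, client-side commands discussed above whose knowledge we would like to push to the server; the deployment and image names are just examples:

```bash
# Imperative create: kubectl builds the Deployment object client-side from
# Go types compiled into the binary, which is why this only works for
# built-in resources.
kubectl create deployment hello --image=nginx:1.17

# Same story for set: the change is computed client-side.
kubectl set image deployment/hello nginx=nginx:1.18

# By contrast, create -f just sends whatever is in the file, so it works for
# any resource, including CRDs.
kubectl create -f my-resource.yaml
```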